Search results for: school based support Program (SBSP)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 34966

646 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to be considered in atmospheric models, since it takes into account changes in the Earth’s surface due to natural and anthropic actions, and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) data plus specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution).
Based on the 33-class LC approach, particular emphasis was given to Portugal, owing to the greater detail and higher LC spatial resolution (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4-0.7), BIAS (10-21 µg.m-3) and RMSE (20-30 µg.m-3), where the higher average ozone concentrations were also estimated. Comparing both simulations, small differences, grounded in the Leaf Area Index and air temperature values, were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes, and stresses the need to consider a high-resolution LC characterization, combined with other detailed model inputs such as the emission inventory, to improve air quality assessment.
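As an illustrative aside (not part of the study), the evaluation statistics quoted above (correlation, BIAS, RMSE) can be computed from paired observed/modelled series. The ozone values below are hypothetical placeholders, not data from the Portuguese monitoring network:

```python
import math

def validation_stats(observed, modelled):
    """Compute Pearson correlation, BIAS and RMSE between paired series."""
    n = len(observed)
    mo = sum(observed) / n
    mm = sum(modelled) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(observed, modelled))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sm = math.sqrt(sum((m - mm) ** 2 for m in modelled))
    r = cov / (so * sm)
    # BIAS: mean model-minus-observation difference; RMSE: root mean squared error
    bias = sum(m - o for o, m in zip(observed, modelled)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(observed, modelled)) / n)
    return r, bias, rmse

# Hypothetical hourly ozone concentrations (µg/m³) at a rural background station
obs = [60.0, 72.0, 85.0, 90.0, 78.0, 65.0]
mod = [70.0, 80.0, 95.0, 105.0, 92.0, 74.0]
r, bias, rmse = validation_stats(obs, mod)
```

With these invented values the model overestimates ozone, giving a positive BIAS, which is the pattern the abstract reports for rural stations.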

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 134
645 Synthesis and Characterization of Fibrin/Polyethylene Glycol-Based Interpenetrating Polymer Networks for Dermal Tissue Engineering

Authors: O. Gsib, U. Peirera, C. Egles, S. A. Bencherif

Abstract:

In skin regenerative medicine, one of the critical issues is to produce a three-dimensional scaffold with optimized porosity for dermal fibroblast infiltration and neovascularization, which exhibits high mechanical properties and displays sufficient wound healing characteristics. In this study, we report on the synthesis and characterization of macroporous sequential interpenetrating polymer networks (IPNs) combining the skin wound healing properties of fibrin with the excellent physical properties of polyethylene glycol (PEG). Fibrin fibers serve as a provisional biologically active network to promote cell adhesion and proliferation, while PEG provides the mechanical stability to maintain the entire 3D construct. After having modified both PEG and Serum Albumin (used to promote enzymatic degradability) by adding methacrylate residues (PEGDM and SAM, respectively), Fibrin/PEGDM-SAM sequential IPNs were synthesized as follows: macroporous sponges were first produced from PEGDM-SAM hydrogels by a freeze-drying technique and then rehydrated by adding the fibrin precursors. Environmental Scanning Electron Microscopy (ESEM) and Confocal Laser Scanning Microscopy (CLSM) were used to characterize their microstructure. Human dermal fibroblasts were cultured in the constructs for one week, and different cell culture parameters (viability, morphology, proliferation) were evaluated. Subcutaneous implantations of the scaffolds were conducted in five-week-old male nude mice to investigate their biocompatibility in vivo. We successfully synthesized interconnected and macroporous Fibrin/PEGDM-SAM sequential IPNs. The viability of primary dermal fibroblasts was well maintained (above 90%) after 2 days of culture. Cells were able to adhere, spread and proliferate in the scaffolds, suggesting the suitable porosity and intrinsic biological properties of the constructs.
The fibrin network adopted a spider-web shape that partially covered the pores, allowing easier cell infiltration into the macroporous structure. To further characterize the in vitro cell behavior, cell proliferation (EdU incorporation, MTS assay) is being studied. Preliminary histological analysis of the animal studies indicated the persistence of the hydrogels even one month post-implantation and confirmed the absence of an inflammatory response and the good biocompatibility and biointegration of our scaffolds within the surrounding tissues. These results suggest that our Fibrin/PEGDM-SAM IPNs could be considered potential candidates for dermal regenerative medicine. Histological analysis will be completed to further assess scaffold remodeling, including de novo extracellular matrix protein synthesis and early-stage angiogenesis. Compression measurements will be conducted to investigate the mechanical properties.

Keywords: fibrin, hydrogels for dermal reconstruction, polyethylene glycol, semi-interpenetrating polymer network

Procedia PDF Downloads 209
644 Disaster Management Approach for Planning an Early Response to Earthquakes in Urban Areas

Authors: Luis Reynaldo Mota-Santiago, Angélica Lozano

Abstract:

Determining appropriate measures to face earthquakes is a challenge for practitioners. In the literature, some analyses consider disaster scenarios while disregarding important field characteristics. Sometimes, software that allows estimating the number of victims and infrastructure damages is used. Other times, historical information on previous events is used, or the scenarios' information is assumed to be available, even though this is not usual in practice. Humanitarian operations start immediately after an earthquake strikes, and the first hours of relief efforts are important; local efforts are critical to assess the situation and deliver relief supplies to the victims. A preparation action is prepositioning stockpiles, most of them at central warehouses placed away from damage-prone areas, which requires large facilities and budgets. Usually, the decisions in the first 12 hours (the standard relief time, SRT) after the disaster are the location of temporary depots and the design of distribution paths. The motivation for this research was the delay in the reaction time of the early relief efforts, which generated the late arrival of aid to some areas after the Mexico City 7.1-magnitude earthquake in 2017. Hence, a preparation approach for planning the immediate response to earthquake disasters is proposed, intended for local governments, considering their capabilities for planning and for responding during the SRT, in order to reduce the start-up time of immediate response operations in urban areas. The first steps are the generation and analysis of disaster scenarios, which allow estimating the relief demand before and in the early hours after an earthquake.
The scenarios can be based on historical data and/or the seismic hazard analysis of an Atlas of Natural Hazards and Risk, as a way to address the limited or null available information. The following steps include the decision processes for: a) locating local depots (places for prepositioning stockpiles) and aid-giving facilities as close as possible to risk areas; and b) designing the vehicle paths for aid distribution (from local depots to the aid-giving facilities), which can be used at the beginning of the response actions. This approach allows speeding up the delivery of aid in the early moments of the emergency, which could reduce the suffering of the victims while allowing additional time to integrate a broader and more streamlined response (according to new information) from national and international organizations into these efforts. The proposed approach is applied to two case studies in Mexico City. These areas were affected by the 2017 earthquake and had a limited aid response. The approach generates disaster scenarios in an easy way and plans a faster early response with a small quantity of stockpiles, which can be managed in the early hours of the emergency by local governments. Considering long-term storage, the estimated quantities of stockpiles require a limited maintenance budget and a small storage space. These stockpiles are also useful for addressing other kinds of emergencies in the area.

Keywords: disaster logistics, early response, generation of disaster scenarios, preparation phase

Procedia PDF Downloads 96
643 Techno-Economic Assessments of Promising Chemicals from a Sugar Mill Based Biorefinery

Authors: Kathleen Frances Haigh, Mieke Nieder-Heitmann, Somayeh Farzad, Mohsen Ali Mandegari, Johann Ferdinand Gorgens

Abstract:

Lignocellulose can be converted to a range of biochemicals and biofuels. Where it is derived from agricultural waste, issues of competition with food are virtually eliminated. One such source of lignocellulose is the South African sugar industry. Lignocellulose could be accessed through changes to the current farming practices and investments in more efficient boilers. The South African sugar industry is struggling due to falling sugar prices and increasing costs, and it is proposed that annexing a biorefinery to a sugar mill will broaden the product range and improve viability. Process simulations of the selected chemicals were generated using Aspen Plus®. It was envisaged that a biorefinery would be annexed to a typical South African sugar mill. Bagasse would be diverted from the existing boilers to the biorefinery and mixed with harvest residues. This biomass would provide the feedstock for the biorefinery and the process energy for the biorefinery and sugar mill. Thus, in all scenarios a portion of the biomass was diverted to a new, efficient combined heat and power (CHP) plant. The Aspen Plus® simulations provided the mass and energy balance data to carry out an economic assessment of each scenario. The net present value (NPV), internal rate of return (IRR) and minimum selling price (MSP) were calculated for each scenario. As a starting point, scenarios were generated to investigate the production of ethanol, ethanol and lactic acid, ethanol and furfural, butanol, methanol, and Fischer-Tropsch syncrude. The bypass to the CHP plant is a useful indicator of the energy demands of the chemical processes. An iterative approach was used to identify a suitable bypass, because increasing this value has the combined effect of increasing the amount of energy available and reducing the capacity of the chemical plant. Bypass values ranged from 30% for syncrude production to 50% for combined ethanol and furfural production. A hurdle rate of 15.7% was selected for the IRR.
The butanol, combined ethanol and furfural, and Fischer-Tropsch syncrude scenarios are unsuitable for investment, with IRRs of 4.8%, 7.5% and 11.5%, respectively. This provides valuable insights into research opportunities. For example, furfural from sugarcane bagasse is an established process, although the integration of furfural production with ethanol production is less well understood. The IRR for the ethanol scenario was 14.7%, which is below the investment criterion, but given its technological maturity it may still be considered for investment. The scenarios which met the investment criteria were the combined ethanol and lactic acid, and the methanol scenarios, with IRRs of 20.5% and 16.7%, respectively. These assessments show that the production of biochemicals from lignocellulose can be commercially viable. In addition, this assessment has provided valuable insights for research to improve the commercial viability of additional chemicals and scenarios. This has led to further assessments of the production of itaconic acid, succinic acid, citric acid, xylitol, polyhydroxybutyrate, polyethylene, glucaric acid and glutamic acid.
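As a minimal sketch of the screening logic described above (with invented cash flows, not the study's mass- and energy-balance figures), an IRR can be computed from a cash-flow series and compared against the 15.7% hurdle rate:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection (assumes one sign change of NPV)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical biorefinery cash flows (million USD): capital outlay, then 15 years
# of constant net revenue. These numbers are illustrative only.
flows = [-100.0] + [20.0] * 15
hurdle = 0.157
rate = irr(flows)
viable = rate >= hurdle  # investment screen against the hurdle rate
```

In practice packages such as numpy-financial provide `irr`/`npv` directly; the bisection above just makes the arithmetic explicit.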

Keywords: biorefineries, sugar mill, methanol, ethanol

Procedia PDF Downloads 167
642 Proposals for the Practical Implementation of the Biological Monitoring of Occupational Exposure for Antineoplastic Drugs

Authors: Mireille Canal-Raffin, Nadege Lepage, Antoine Villa

Abstract:

Context: Most antineoplastic drugs (AD) have a potential carcinogenic, mutagenic and/or reprotoxic effect and are classified as 'hazardous to handle' by the National Institute for Occupational Safety and Health (NIOSH). Their handling increases with the increase in cancer incidence. AD contamination of workers who handle AD and/or care for treated patients is, therefore, a major concern for occupational physicians. As part of the process of evaluation and prevention of chemical risks for professionals exposed to AD, Biological Monitoring of Occupational Exposure (BMOE) is the tool of choice. BMOE allows identification of at-risk groups, monitoring of exposures, assessment of poorly controlled exposures and of the effectiveness and/or wearing of protective equipment, and documentation of occupational AD exposure incidents. This work aims to make proposals for the practical implementation of BMOE for AD. The proposed strategy is based on the French good practice recommendations for BMOE, issued in 2016 by three French learned societies and adapted here to occupational exposure to AD. Results: AD contamination of professionals is a sensitive topic, and BMOE requires the establishment of a working group and information meetings within the concerned health establishment to explain the approach, objectives, and purpose of monitoring. Occupational exposure to AD is often discontinuous, and two steps are essential upstream: a study of the nature and frequency of the AD used, to select the Biological Exposure Indices (BEI) most representative of the activity; and a study of the AD path in the institution, to target exposed professionals and to adapt the medico-professional information sheet (MPIS). The MPIS is essential to gather the elements necessary for results interpretation. Currently, 28 urinary BEIs specific to AD exposure have been identified, and the corresponding analytical methods have been published: 11 BEIs are AD metabolites, and 17 are the ADs themselves.
Results interpretation is performed by groups of homogeneous exposure (GHE). There is no threshold biological limit value for interpretation. Contamination is established when an AD is detected at trace concentration or at a urinary concentration equal to or greater than the limit of quantification (LOQ) of the analytical method. Results can only be compared to the LOQs of these methods, which must be as low as possible. For 8 of the 17 AD BEIs, the LOQ is very low, with values between 0.01 and 0.05 µg/l. For the other BEIs, the LOQ values are higher, between 0.1 and 30 µg/l. The restitution of results by occupational physicians to workers should be both individual and collective. Given the dangerousness of AD, corrective measures must be put in place in cases of worker contamination. In addition, the implementation of prevention and awareness measures for those exposed to this risk is a priority. Conclusion: This work is a help for occupational physicians engaging in a process of prevention of occupational risks related to AD exposure. With the current analytical tools, effective and available, BMOE of AD exposure should now be possible to develop in routine occupational physician practice. The BMOE may be complemented by surface sampling to determine workers' contamination modalities.

Keywords: antineoplastic drugs, urine, occupational exposure, biological monitoring of occupational exposure, biological exposure index

Procedia PDF Downloads 106
641 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

The fast increase in volume and the complex composition have made waste of electrical and electronic equipment (e-waste) one of the most problematic waste streams worldwide. Precise information on its size at the national, regional and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries, where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e. data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE, which ‘invisibly’ enters the domestic EEE market and is then used for domestic consumption. The non-registration/invisibility and (in most cases) illicit nature of this flow make it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimations based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (the Sale–Stock–Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps to improve the quality of the input data for modeling (i.e. it performs data consolidation to create a more accurate lifespan profile and models a dynamic lifespan to take into account changes over time), through which the quality of the e-waste estimation can be improved. To demonstrate the above objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold from 2000 to the present. This upward trend is expected to continue in the future. In 2035, a total of 9.51 million TVs are predicted to be discarded.
Moreover, the estimation of the non-registered TV inflow shows that it might, on average, have contributed about 15% of the total TVs sold on the Vietnamese market during the period 2002 to 2013. To tackle potential uncertainties associated with the estimation models and input data, sensitivity analysis has been applied. The results show that both the estimation of waste and that of the non-registered inflow depend on two parameters: the number of TVs in use per household and the lifespan. Particularly, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant, unadjusted lifespan is replaced by the dynamic, adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated each year is more complex and non-linear over time. To conclude, despite remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow into domestic consumption. It can therefore be further improved in the future with more knowledge and data.
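As a hedged sketch of the core idea (the actual Sale-Stock-Lifespan model additionally consolidates stock data and uses a dynamic lifespan), waste generation can be approximated by convolving yearly sales with a discrete lifespan distribution. The sales figures and discard probabilities below are invented for illustration, not Vietnamese TV data:

```python
def waste_generated(sales, lifespan_pmf):
    """Convolve yearly sales with a discrete lifespan distribution.

    sales[y] is the quantity sold in year y; lifespan_pmf[k] is the
    probability a unit is discarded k years after sale. Returns the
    waste arising per year, indexed from the first sales year.
    """
    horizon = len(sales) + len(lifespan_pmf)
    waste = [0.0] * horizon
    for year, sold in enumerate(sales):
        for age, p in enumerate(lifespan_pmf):
            waste[year + age] += sold * p
    return waste

# Hypothetical sales (million TVs/year) and a simple lifespan distribution
sales = [1.0, 1.2, 1.5, 1.8]
lifespan = [0.0, 0.1, 0.2, 0.4, 0.3]  # discard probabilities at ages 0-4
waste = waste_generated(sales, lifespan)
```

Because the lifespan probabilities sum to one, every unit sold eventually appears in the waste stream, which is the mass-balance property such input-output models rely on.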

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

Procedia PDF Downloads 220
640 Double Liposomes Based Dual Drug Delivery System for Effective Eradication of Helicobacter pylori

Authors: Yuvraj Singh Dangi, Brajesh Kumar Tiwari, Ashok Kumar Jain, Kamta Prasad Namdeo

Abstract:

The potential use of liposomes as drug carriers by i.v. injection is limited by their low stability in the bloodstream. Firstly, phospholipid exchange and transfer to lipoproteins, mainly HDL, destabilizes and disintegrates liposomes, with subsequent loss of content. To avoid the pain associated with injection and to obtain better patient compliance, studies concerning various dosage forms have been developed. The drawbacks of conventional liposomes (unilamellar and multilamellar), such as low entrapment efficiency, poor stability and release of the drug after a single breach in the external membrane, have led to a new type of liposomal system. The challenge has been successfully met in the form of Double Liposomes (DL). DL is a recently developed type of liposome, consisting of smaller liposomes enveloped in lipid bilayers. The outer lipid layer of DL can protect the inner liposomes against various enzymes; therefore, DL was thought to be more effective than ordinary liposomes. This concept was also supported by the in vitro release characteristics, i.e. DL formation inhibited the release of drugs encapsulated in the inner liposomes. DL consists of several small liposomes encapsulated in large liposomes, i.e., multivesicular vesicles (MVV); therefore, DL should be distinguished from the ordinary classification of multilamellar vesicles (MLV), large unilamellar vesicles (LUV) and small unilamellar vesicles (SUV). However, for these liposomes, the volume of the inner phase is small and the loading volume of water-soluble drugs is low. In the present study, the potential of phosphatidylethanolamine (PE) lipid-anchored double liposomes (DL) to incorporate two drugs in a single system is exploited as a tool to augment the H. pylori eradication rate. The preparation of DL involves two steps: first, formation of the primary (inner) liposomes by the thin film hydration method, containing one drug; then, addition of the suspension of inner liposomes onto a thin film of lipid containing the other drug.
The successful formation of DL was confirmed by optical and transmission electron microscopy. DL-bacterial interaction was quantified in terms of percent growth inhibition (%GI) on the reference strain H. pylori ATCC 26695. To confirm the specific binding efficacy of DL to the H. pylori PE surface receptor, we performed an agglutination assay. Agglutination in the DL-treated H. pylori suspension suggested the selectivity of DL towards the PE surface receptor of H. pylori. Monotherapy is generally not recommended for the treatment of H. pylori infection due to the danger of development of resistance and unacceptably low eradication rates. Therefore, combination therapy with amoxicillin trihydrate (AMOX) as the anti-H. pylori agent and ranitidine bismuth citrate (RBC) as the antisecretory agent was selected for the study, with the expectation that this dual-drug delivery approach will exert acceptable anti-H. pylori activity.

Keywords: Helicobacter pylori, amoxicillin trihydrate, ranitidine bismuth citrate, phosphatidylethanolamine, multivesicular systems

Procedia PDF Downloads 179
639 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances

Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia

Abstract:

A central question in ecology concerns the relative importance of local-scale variables for community composition when compared with regional-scale variables. In coastal environments, a strong seasonal abiotic influence dominates these systems, weakening the impact of other parameters such as micronutrients. Since the industrial revolution, micronutrients such as trace metals have increased in the ocean as pollutants, with strong effects on biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its strong influence on the carbon cycle, although its diversity patterns and responses to disturbances are poorly understood. South Pacific upwelling coastal environments represent an excellent model to study seasonal changes, owing to the strong seasonal differences in the availability of macro- and micronutrients. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe the seasonal patterns in the study area, satellite data from a six-year time series and in-situ measurements with a traditional oceanographic approach (CTDO equipment) were used. In addition, trace metal concentrations were analyzed through ICP-MS analysis for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomic characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis.
Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetic active radiation and superficial sea temperature, with a clear differentiation of seasons. However, trace metal analysis allows identifying strong differentiation within the study area, dividing it into two zones based on trace metals concentration. Biological data indicate that there are no major changes in diversity but a significant fluctuation in evenness and community structure. These changes are related mainly with regional parameters, like temperature, but by analyzing the metal influence in picoplankton community structure, we identify a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs, by changing their nutritional status and/or requirements under disturbances as a derived outcome of toxic effects and tolerance.

Keywords: picoeukaryotes, plankton communities, trace metals, seasonal patterns

Procedia PDF Downloads 142
638 Microplastic Concentrations in Cultured Oyster in Two Bays of Baja California, Mexico

Authors: Eduardo Antonio Lozano Hernandez, Nancy Ramirez Alvarez, Lorena Margarita Rios Mendoza, Jose Vinicio Macias Zamora, Felix Augusto Hernandez Guzman, Jose Luis Sanchez Osorio

Abstract:

Microplastics (MPs) are one of the most numerous wastes reported in the marine ecosystem, representing one of the greatest risks for the organisms that inhabit that environment due to their bioavailability. Such is the case of bivalve mollusks, since they are capable of filtering large volumes of water, which increases the risk of contamination by microplastics through continuous exposure to these materials. This study aims to determine, quantify and characterize the microplastics found in the cultured oyster Crassostrea gigas. We also analyzed whether there are spatio-temporal differences in the microplastic concentration of organisms grown in two bays with quite different human populations. In addition, we wanted to gauge the possible impact on humans via consumption of these organisms. Commercial-size organisms (> 6 cm in length; n = 15) were collected in triplicate from eight oyster farming sites in Baja California, Mexico, during winter and summer. Two sites are located in Todos Santos Bay (TSB), while the other six are located in San Quintin Bay (SQB). Site selection was based on the commercial concessions for oyster farming in each bay. The organisms were chemically digested with 30% KOH (w/v) and 30% H₂O₂ (v/v) to remove the organic matter and subsequently filtered using a GF/D filter. All particles considered possible MPs were quantified according to their physical characteristics using a stereoscopic microscope. The type of synthetic polymer was determined using an FTIR-ATR microscope with both a user-generated and a commercial reference library (Nicolet iN10, Thermo Scientific, Inc.) of IR spectra of plastic polymers (certainty ≥70% for pure polymers; ≥50% for composite polymers). Plastic microfibers were found in all the samples analyzed. However, a low incidence of MP fragments was observed in our study (approximately 9%). The synthetic polymers identified were mainly polyester and polyacrylonitrile.
Polyethylene, polypropylene, polystyrene, nylon and thermoplastic elastomer were also identified. On average, the content of microplastics in the organisms was higher in TSB (0.05 ± 0.01 plastic particles (pp)/g of wet weight) than in SQB (0.02 ± 0.004 pp/g of wet weight) in the winter period. The highest concentration of MPs found in TSB coincides with the rainy season in the region, which increases the runoff from streams and wastewater discharges into the bay, as well as with the larger population pressure (> 500,000 inhabitants). In contrast, SQB is a mainly rural location, where surface runoff from streams is minimal and there is no wastewater discharge into the bay. During the summer, no significant differences (Mann-Whitney U test; P=0.484) were observed in the concentration of MPs found in the cultured oysters of TSB and SQB (average: 0.01 ± 0.003 pp/g and 0.01 ± 0.002 pp/g, respectively). Finally, we concluded that the consumption of these oysters does not represent a risk for humans, given the low concentrations of MPs found. The concentration of MPs is influenced by variables such as seasonality, the circulation dynamics of the bay and the existing demographic pressure.
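As an illustrative aside, the Mann-Whitney U statistic used above compares the two bays without assuming normality: it counts, over every cross-bay pair of samples, how often one group exceeds the other. The concentrations below are hypothetical stand-ins, not the study's raw measurements:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: number of pairs where a[i] > b[j],
    with ties counted as 0.5 (pairwise-counting definition)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical winter MP concentrations (plastic particles per g wet weight)
tsb = [0.048, 0.052, 0.050, 0.055, 0.046]
sqb = [0.018, 0.022, 0.020, 0.024, 0.016]
u = mann_whitney_u(tsb, sqb)  # 25.0 here: every TSB value exceeds every SQB value
```

In practice one would use `scipy.stats.mannwhitneyu`, which also returns the p-value; the pure-Python version above only shows where the statistic comes from.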

Keywords: FTIR-ATR, human risk, microplastics, oysters

Procedia PDF Downloads 150
637 Assessing the Socio-Economic Problems and Environmental Implications of Green Revolution In Uttar Pradesh, India

Authors: Naima Umar

Abstract:

The mid-1960s were a landmark in the history of Indian agriculture. It was in 1966-67 that a New Agricultural Strategy was put into practice to tide over chronic shortages of food grains in the country. The strategy adopted was the use of High-Yielding Varieties (HYV) of seeds (wheat and rice), popularly known as the Green Revolution. This phase of agricultural development saved the country from hunger and starvation and made the peasants more confident than ever before, but it has also created a number of socio-economic and environmental implications, such as the reduction in area under forest, salinization, waterlogging, soil erosion, lowering of the underground water table, soil, water and air pollution, decline in soil fertility, silting of rivers and the emergence of several diseases and health hazards. The state of Uttar Pradesh is bounded on the north by the country of Nepal, and by the states of Uttarakhand on the northwest, Haryana on the west, Rajasthan on the southwest, Madhya Pradesh on the south and southwest, and Bihar on the east. It is situated between latitudes 23°52′N and 31°28′N and longitudes 77°3′E and 84°39′E. It is the fifth largest state of the country in terms of area, and the first in terms of population. Forming part of the Ganga plain, the state is crossed by a number of rivers which originate from the snowy peaks of the Himalayas. The fertile plain of the Ganga has led to a high concentration of population with high density and the dominance of agriculture as an economic activity. The present paper highlights the negative impact of the new agricultural technology on the health of the people and the environment and attempts to find out the factors responsible for these implications. Karl Pearson's correlation coefficient technique has been applied, selecting one dependent variable (the Productivity Index) and several independent variables which may impact crop productivity in the districts of the state.
These variables have been categorized as: X1 (Cropping Intensity), X2 (Net irrigated area), X3 (Canal-irrigated area), X4 (Tube-well-irrigated area), X5 (Area irrigated by other sources), X6 (Consumption of chemical fertilizers (NPK), kg/ha), X7 (Number of wooden ploughs), X8 (Number of iron ploughs), X9 (Number of harrows and cultivators), X10 (Number of thresher machines), X11 (Number of sprayers), X12 (Number of sowing instruments), X13 (Number of tractors) and X14 (Consumption of insecticides and pesticides, kg/'000 ha). Data for 2001-2005 and 2006-2010 were compiled from secondary sources (various government organizations, master plan reports, economic abstracts, district census handbooks, and village and town directories), five-year average values were computed, and the data were processed in the standard statistical package SPSS; the results obtained have been properly tabulated.
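As a brief illustration of the technique described above, Pearson's correlation coefficient between the productivity index and one of the independent variables can be computed as follows (the district values below are hypothetical, not the study's actual data):

```python
import numpy as np

# Hypothetical district-level data (NOT the study's actual figures):
# productivity index (dependent variable Y) and cropping intensity (X1)
# for eight districts.
productivity = np.array([112.0, 98.5, 120.3, 105.7, 131.2, 92.4, 110.8, 125.6])
cropping_intensity = np.array([148.0, 132.5, 156.1, 140.2, 162.8, 128.9, 144.3, 158.7])

# Pearson's r = cov(X, Y) / (std(X) * std(Y)); np.corrcoef returns the
# full correlation matrix, so we take the off-diagonal entry.
r = np.corrcoef(cropping_intensity, productivity)[0, 1]
print(f"Pearson's r between X1 and the productivity index: {r:.3f}")
```

In the study this computation would be repeated for each of X1 through X14 against the productivity index, for each of the two five-year periods.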

Keywords: agricultural technology, environmental implications, health hazards, socio-economic problems

Procedia PDF Downloads 282
636 Two-wavelength High-energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography

Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko

Abstract:

The development of medical optoacoustic tomography using human blood as an endogenous contrast agent is constrained by the lack of reliable, easy-to-use and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. The titanium-sapphire and alexandrite lasers or optical parametric oscillators currently used do not provide the required stable output characteristics, are structurally complex, and can cost up to half the price of a diagnostic optoacoustic system. Here we develop lasers based on Cr:LiCaAlF6 crystals, which are free of the above-mentioned disadvantages and provide intense tunable laser pulses in the tens-of-nanoseconds range at the specific absorption bands of oxy- (~840 nm) and deoxyhemoglobin (~757 nm) in blood. Cr:LiCAF (c = 3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (the Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P = 1500 hPa) [3]. The laser elements are cylindrical, 8 mm in diameter and 90 mm in length. The direction of the optical axis of the crystal was normal to the cylinder generatrix, which provides π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other with an error of less than 10″. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with a dual xenon flashlamp pumping scheme in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer and a Q-switch cell, makes it possible to operate sequentially (one laser pulse after another, 50 ns apart) at wavelengths of 757 and 840 nm.
The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J at 180 μs) to equalize the laser radiation intensity at these wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration of the laser pulses and their energy allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design work on the laser is likely to improve the output properties and enable better spatial resolution in medical multispectral optoacoustic tomography systems.

Keywords: medical optoacoustic, endogenic contrast agent, multiwavelength tunable pulse lasers, MOPA laser system

Procedia PDF Downloads 71
635 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
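The 1-hot encoding mentioned above can be sketched with a toy example (an illustrative objective and penalty weight, not the authors' actual MINLP formulation): an integer variable x ∈ {0, 1, 2, 3} is mapped to four binary variables b₀..b₃ with a quadratic penalty that is zero only when exactly one bit is set, which is the form a quantum annealer can minimize:

```python
import itertools

# 1-hot encoding: integer x in {0,..,3} -> bits b0..b3, with x = sum(i * b_i).
# The penalty P * (sum(b_i) - 1)^2 vanishes only for valid 1-hot strings,
# pushing invalid assignments up in energy.
P = 10.0  # penalty weight (hypothetical choice)

def energy(bits):
    # Toy objective: minimize x itself, plus the 1-hot validity penalty.
    x = sum(i * b for i, b in enumerate(bits))
    penalty = P * (sum(bits) - 1) ** 2
    return x + penalty

# Exhaustive search over all 2^4 bit strings (the space an annealer explores).
best = min(itertools.product([0, 1], repeat=4), key=energy)
print(best)  # -> (1, 0, 0, 0), the 1-hot string encoding x = 0
```

On real hardware the same quadratic energy function would be compiled to qubit couplings, with the penalty weight chosen large enough to dominate the objective.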

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 494
634 Mesenchymal Stem Cells on Fibrin Assemblies with Growth Factors

Authors: Elena Filova, Ondrej Kaplan, Marie Markova, Helena Dragounova, Roman Matejka, Eduard Brynda, Lucie Bacakova

Abstract:

Decellularized vessels have been evaluated as small-diameter vascular prostheses. Reseeding autologous cells onto the decellularized tissue prior to implantation should prolong prosthesis function and make the grafts living tissues. Suitable cell types for reseeding are both endothelial cells and bone marrow-derived mesenchymal stem cells (MSCs), which have the capacity to differentiate into smooth muscle cells upon mechanical loading. Endothelial cells ensure the antithrombogenicity of the vessels, while MSCs produce growth factors and, after their differentiation into smooth muscle cells, are contractile and produce extracellular matrix proteins as well. Fibrin is a natural scaffold which allows direct cell adhesion via integrin receptors, and it can be prepared autologously. Fibrin can be modified with bound growth factors, such as basic fibroblast growth factor (FGF-2) and vascular endothelial growth factor (VEGF); these modifications in turn make the scaffold more attractive for cell ingrowth. The aim of the study was to prepare thin surface-attached fibrin assemblies with bound FGF-2 and VEGF, and to evaluate the growth and differentiation of bone marrow-derived mesenchymal stem cells on the fibrin (Fb) assemblies. The following thin surface-attached fibrin assemblies were prepared: Fb, Fb+VEGF, Fb+FGF2, Fb+heparin, Fb+heparin+VEGF, Fb+heparin+FGF2, Fb+heparin+FGF2+VEGF. Cell culture polystyrene and glass coverslips were used as controls. Human MSCs (passage 3) were seeded at a density of 8800 cells/1.5 mL of alpha-MEM medium with 2.5% FS and 200 U/mL aprotinin per well of a 24-well cell culture plate. The cells were cultured on the samples for 6 days. Cell densities on days 1, 3, and 6 were analyzed after staining with a LIVE/DEAD cytotoxicity/viability assay kit. The differentiation of the MSCs is being analyzed using qPCR. On day 1, the highest density of MSCs was observed on Fb+VEGF and Fb+FGF2. On days 3 and 6, the densities were similar on all samples.
On day 1, cell morphology was polygonal and spread on all samples. On days 3 and 6, MSCs growing on Fb assemblies with FGF2 became apparently elongated. The evaluation of the expression of genes for von Willebrand factor and CD31 (endothelial cells), for alpha-actin (smooth muscle cells), and for alkaline phosphatase (osteoblasts) is in progress. We prepared fibrin assemblies with bound VEGF and FGF-2 that supported the attachment and growth of mesenchymal stem cells. The layers are promising for improving the ingrowth of MSCs into the biological scaffold. Supported by the Technology Agency of the Czech Republic (TA04011345), the Ministry of Health (NT11270-4/2010), and the BIOCEV – Biotechnology and Biomedicine Centre of the Academy of Sciences and Charles University project (CZ.1.05/1.1.00/02.0109), funded by the European Regional Development Fund.

Keywords: fibrin assemblies, FGF-2, mesenchymal stem cells, VEGF

Procedia PDF Downloads 301
633 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary stochastic search are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the simulation's ability to maintain variation among the design solutions in the population while simultaneously increasing their fitness. This is commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance lies in the tendency for either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain such high levels of variation that an optimal set cannot be discerned; either way, the user is left with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion for the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean, and thus that variation within the population is limited, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which utilizing the standard deviation as a fitness criterion can be advantageous in generating fitter individuals in a more efficient timeframe, compared to conventional simulations that only incorporate architectural and environmental parameters.
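The generative use of the standard deviation described above can be sketched minimally (with hypothetical phenotype values, not the paper's superblock model): each generation, the spread of a phenotype metric across the population is computed and can then be rewarded as an additional fitness signal to keep variation alive:

```python
import statistics

def variation_fitness(population_metrics):
    """Return the (population) standard deviation of a phenotype metric
    across the generation; a larger value means greater design variation."""
    return statistics.pstdev(population_metrics)

# Two hypothetical generations of a phenotype metric (e.g. built area):
converged = [100.0, 100.5, 99.8, 100.2, 100.1]   # clustered near the mean
diverse   = [80.0, 120.0, 95.0, 110.0, 100.0]    # spread out

print(variation_fitness(converged))  # small: population has converged
print(variation_fitness(diverse))    # large: variation is maintained
```

In a multi-objective simulation, this value would sit alongside the architectural and environmental objectives so that selection does not collapse the population prematurely.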

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 111
632 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet

Authors: Zeradam Yeshiwas, A. Krishnaia

Abstract:

The aim of this study is to assess the surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 Drawing Quality Cold Rolled CR2 steel. The chemical composition of Drawing Quality Cold Rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur and 0.04 percent phosphorus, the remainder being iron with negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code, generated in the Mastercam 2017 environment, into three-dimensional motions by the linear incremental progress of the spindle. Blanks of Drawing Quality Cold Rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10 and 12 mm diameter were employed to fabricate the sample parts. To investigate the surface roughness, hyperbolic-cone specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters: tool diameter, feed rate, and step depth. In this study, a Taylor-Hobson Surtronic 3+ surface roughness tester (profilometer) was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ). In this instrument, a small tip is dragged across the surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA.
A Taguchi design with three factors and three levels for each factor was adopted; the standard orthogonal array L9 (3³) was selected for the study using the array selection table. The finishing roughness parameter Rₐ was measured for each combination of the control factors defined by the Taguchi experimental design. Four roughness measurements were taken for each component and averaged. Since the lowest value of Rₐ is what matters for surface roughness improvement, the 'smaller-the-better' equation was used for the calculation of the S/N ratio. The effect of each control factor on the surface roughness was analyzed with an S/N response table. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, a tool diameter of 12 mm, and a step depth of 0.5 mm. The ANOVA result shows that step depth is the dominant factor affecting surface roughness (91.1%).
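The 'smaller-the-better' criterion mentioned above has the standard Taguchi form S/N = −10·log₁₀((1/n)Σyᵢ²). A brief sketch with hypothetical Rₐ readings (not the study's measured values) shows how the ratio ranks two parameter combinations:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio for a smaller-the-better response:
    S/N = -10 * log10(mean of the squared observations)."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Four hypothetical Ra readings (micrometres) for two parameter combinations.
run_a = [1.8, 1.9, 2.0, 1.7]   # rougher surface
run_b = [0.9, 1.0, 0.8, 1.1]   # smoother surface

print(sn_smaller_the_better(run_a))
print(sn_smaller_the_better(run_b))  # higher S/N -> better (lower roughness)
```

Averaging these S/N values per factor level is what populates the S/N response table used to pick the optimum parameter settings.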

Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior

Procedia PDF Downloads 40
631 Making Haste Slowly: South Africa's Transition from a Medical to a Social Model regarding Persons with Disabilities

Authors: Leoni Van Der Merwe

Abstract:

Historically, in South Africa, disability has been viewed as a dilemma of the individual. The discourse surrounding the definition of disability and the applicable theories is as fluid as the differing needs of persons with disabilities within society. In 1997, the Office of the Deputy President published the White Paper on the Integrated National Disability Strategy (WPINDS), which sought to integrate disability issues into all governmental development strategies, planning and programs, and to solidify the South African government's stance that disability was to be considered according to the social model and not the previously utilized medical model of disability. The models of disability are conceptual frameworks for understanding disability and can provide some insight into why certain attitudes exist and how they are reinforced in society. Although the WPINDS was regarded as a critical milestone in the history of the disability rights struggle in South Africa, it took approximately twenty years for the publication of a similar document taking into account South Africa's changing social, economic, political and technological dispensation. December 2015 marked the approval of the White Paper on the Rights of Persons with Disabilities (WPRPD), which seeks to update the WPINDS, integrate principles contained in international law instruments, and endorse a mainstreaming trajectory for realizing the rights of persons with disabilities. While the WPINDS and the WPRPD were published two decades apart, both documents emphasize a transition from the medical model to the social model.
Whereas the medical model presupposes that disability is mainly a health and welfare matter and is focused on an individualistic and dependency-based approach, the social model requires a paradigm shift in the manner in which disability is constructed, so as to highlight the shortcomings of society in respect of disability and to bring to the fore the capabilities of persons with disabilities. The social model has led to unmatched success in changing perceptions surrounding disability. This article investigates the progress made in implementing the social model in South Africa, taking into account the effect of the diverse political and cultural landscape in promoting the historically entrenched medical model, the rise of disability activism prior to the new democratic dispensation, and the legislation, case law, policy documents and barriers in respect of persons with disabilities that are pervasive in South African society. The paper concludes that although numerous interventions have been identified and implemented to promote the consideration of disability within a social construct in South Africa, such interventions require increased national and international collaboration, resources and pace to ensure that the efforts made lead to sustainable results. For persons with disabilities, what remains to be seen is whether the proliferation of activism by interest groups, social awareness, and the development of policy documents, legislation and case law will serve as the impetus to dissipate the view that disability is a burden to be carried solely on the shoulders of the person with the disability.

Keywords: disability, medical model, social model, societal barriers, South Africa

Procedia PDF Downloads 356
630 A Multipurpose Inertial Electrostatic Magnetic Confinement Fusion for Medical Isotopes Production

Authors: Yasser R. Shaban

Abstract:

A practical multipurpose device for medical isotope production is much needed by clinical centers and researchers. Unfortunately, the major supply of these radioisotopes currently comes from aging sources, and there is a great deal of uneasiness in the domestic market. There are also many cases where the cost of certain radioisotopes is too high for their introduction on a commercial scale, even though the isotopes might have great benefits for society. Medical isotopes such as PET (Positron Emission Tomography) radiotracers, Technetium-99m, Iodine-131 and Lutetium-177 can feasibly be generated by a single unit named the IEMC (Inertial Electrostatic Magnetic Confinement). The IEMC fusion vessel is an upgrade of the Inertial Electrostatic Confinement (IEC) fusion vessel; comprehensive experimental work on the IEC was carried out earlier with promising results. The principle of inertial electrostatic magnetic confinement (IEMC) fusion is based on forcing the binary fuel ions to interact in opposite directions in ion cyclotron orbits, with different kinetic energies in order to have equal compression (forces) and with different ion cyclotron frequencies ω in order to increase the rate of intersection. The IEMC features a fusion volume greater than that of the IEC by several orders of magnitude. The particle rates from the IEMC approach are projected to be 8.5 x 10¹¹ p/s (~0.2 microampere of protons) for the D/He-3 fusion reaction and 4.2 x 10¹² n/s for the D/T fusion reaction. These projected particle yields (neutrons and protons) are suitable for on-site medical isotope production by a single unit, with no change to the fusion vessel other than the fuel gas. PET radiotracers are usually produced on-site by a medical ion accelerator, whereas Technetium-99m (Tc-99m) is usually produced off-site at the irradiation facilities of nuclear power plants. Typically, hospitals receive a molybdenum-99 isotope container; the isotope decays to Tc-99m with a half-life of 2.75 days.
Even though the projected current from the IEMC is less than the proton current from a medical ion accelerator, the IEMC vessel is simpler and reduced in components and power consumption, which opens the new possibility of deploying PET radiotracer production in most clinical centers. On the other hand, the projected neutron flux from the IEMC is less than the thermal neutron flux at the irradiation facilities of nuclear power plants; in the IEMC case, however, the production of Technetium-99m is suggested to take place in the resonance region, where the resonance integral cross-section is two orders of magnitude higher than the thermal cross-section, so the net activity from the two routes can be considered comparable. Besides, a particle accelerator cannot be considered a multipurpose particle source unless significant changes are made to switch it from neutron mode to proton mode or vice versa. In conclusion, the projected fusion yield from the IEMC is straightforward to obtain, since only slight changes to the prior IEC and the ion source are required.

Keywords: electrostatic versus magnetic confinement fusion vessel, ion source, medical isotopes productions, neutron activation

Procedia PDF Downloads 326
629 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

The rehabilitation of stroke patients is dominated by classical physiotherapy. A current field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to treat the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback that a movement of the affected limb would provide, and which is missing, may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without a large delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent the detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes.
This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded 3D activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. The first results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match findings from visual EEG analysis. Frequent eye-lid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
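The channel-selection and thresholding step described above can be sketched as follows (synthetic signals, an assumed 250 Hz sampling rate, and an arbitrary threshold; not the authors' implementation): pick the motor-set channel with the largest alpha-band (8-13 Hz) amplitude and flag a µ-rhythm when it crosses the threshold:

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed for this sketch)
MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_amplitude(signal, fs=FS, band=(8.0, 13.0)):
    """Mean FFT amplitude of `signal` within the alpha band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def detect_mu(eeg, threshold):
    """eeg: dict channel -> 1-D samples. Returns (best_channel, detected)."""
    amps = {ch: alpha_amplitude(eeg[ch]) for ch in MOTOR_SET if ch in eeg}
    best = max(amps, key=amps.get)
    return best, amps[best] > threshold

# Synthetic one-second test: a 10 Hz rhythm on C3, noise on all other channels.
t = np.arange(FS) / FS
rng = np.random.default_rng(1)
eeg = {ch: rng.normal(0, 0.2, FS) for ch in MOTOR_SET}
eeg["C3"] = eeg["C3"] + np.sin(2 * np.pi * 10 * t)

best, found = detect_mu(eeg, threshold=5.0)
print(best, found)
```

The full system additionally compares the alpha amplitudes outside the motor set against the main channel to reject posterior alpha activity, a check omitted here for brevity.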

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 362
628 Residents' Incomes in Local Government Unit as the Major Determinant of Local Budget Transparency in Croatia: Panel Data Analysis

Authors: Katarina Ott, Velibor Mačkić, Mihaela Bronić, Branko Stanić

Abstract:

The determinants of national budget transparency have been widely discussed in the literature, while research on the determinants of local budget transparency is scarce and empirically inconclusive, particularly in the new, fiscally centralised EU member states. To fill the gap, we combine two strands of the literature: that concerned with public administration and public finance, shedding light on the economic and financial determinants of local budget transparency, and that on the political economy of transparency (principal-agent theory), covering the relationships among politicians and between politicians and voters. Our main hypothesis states that variables describing residents' capacity have a greater impact on local budget transparency than variables indicating the institutional capacity of local government units (LGUs). Additional sub-hypotheses test the impact of each analysed variable on local budget transparency. We address the determinants of local budget transparency in Croatia, measured by the number of key local budget documents published on the LGUs' websites. Using a data set of 128 cities and 428 municipalities over the 2015-2017 period and applying panel data analysis based on the Poisson and negative binomial distributions, we test our main hypothesis and sub-hypotheses empirically. We measure different characteristics of institutional and residents' capacity for each LGU. The age, education and ideology of the mayor or municipality head, political competition indicators, the number of employees, current budget revenues and direct debt per capita have been used as measures of the institutional capacity of an LGU. Residents' capacity in each LGU has been measured through the number of citizens and their average age, as well as by average income per capita. The most important determinant of local budget transparency is average residents' income per capita, at both the city and the municipality level.
The results are in line with most previous research results in fiscally decentralised countries. In the context of a fiscally centralised country with numerous small LGUs, most of which have low administrative and fiscal capacity, this has a theoretical rationale in legitimacy theory and principal-agent theory (the opportunistic motives of the incumbent). The result is robust and significant, but because various other results change between the city and municipality levels (e.g. ideology and political competition), there is a need for further research, both on identifying other determinants and on methods of analysis. Since in Croatia the fiscal capacity of an LGU depends heavily on the income of its residents, units with higher per capita incomes in many cases also have higher budget revenues, allowing them to engage more employees and resources. In addition, residents' incomes might also be positively associated with local budget transparency because of higher citizen demand for such transparency: residents with higher incomes expect more public services, have more access to and experience in using the Internet, and will thus typically demand more budget information on the LGUs' websites.
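The count-data approach used above can be illustrated with a toy sketch (simulated data and a plain Newton-Raphson fit, not the authors' panel estimator): the number of published budget documents is modelled as Poisson with a log link on standardized residents' income:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
income = rng.normal(0.0, 1.0, n)            # standardized income per capita
X = np.column_stack([np.ones(n), income])   # intercept + income
beta_true = np.array([0.5, 0.8])            # hypothetical coefficients
docs = rng.poisson(np.exp(X @ beta_true))   # number of published documents

# Newton-Raphson for the Poisson log-likelihood with a log link.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                   # fitted means
    grad = X.T @ (docs - mu)                # score vector
    hess = X.T @ (X * mu[:, None])          # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

print(beta)  # estimates close to beta_true
```

The negative binomial variant used in the study adds an overdispersion parameter, and the panel structure adds unit effects; both are omitted here for clarity.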

Keywords: budget transparency, count data, Croatia, local government, political economy

Procedia PDF Downloads 154
627 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling

Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis

Abstract:

The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for preparing nanocrystalline materials is to reduce the particle size by mechanical grinding, and high-energy ball-milling has been used in several works to produce nano lithium niobate. It was previously reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in the partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding a gray color, together with oxygen release and Li2O segregation on the open surfaces. In the present work, we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm down to 0.1 mm), with wet grinding in water and grinding times of less than an hour. By gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in the ground samples, as proved by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained by the hardness properties of the materials involved in the ball-milling process.
It can be understood by taking into account the presence of lithium hydroxide, formed from the segregated lithium oxide and water during the ball-milling process through chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet milling process in the planetary mill, it was found that the lithium oxide loss increases linearly in the early phase of the milling process, after which a saturation of the Li2O loss can be seen. This change goes along with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accord with the dynamic light scattering measurements. With a 3 mm ball size and a 1100 rpm rotation rate, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been carried out to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface area of the particles, the influence of the ball-milling parameters on its quantity has also been studied.
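The linear-then-saturating Li2O loss described above can be illustrated with a simple saturating-growth fit. The functional form, the milling times, and the data points below are assumptions for illustration only; only the roughly 1.2 wt.% plateau comes from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def li2o_loss(t, l_max, k):
    # Saturating-growth model: approximately linear (~ l_max*k*t) at early
    # milling times, approaching a plateau l_max at long milling times.
    return l_max * (1.0 - np.exp(-k * t))

# Hypothetical milling times (min) and Li2O loss (wt.% of LiNbO3),
# generated from the model itself purely for illustration.
t = np.linspace(0, 60, 13)
loss = li2o_loss(t, 1.2, 0.1)

(l_max_fit, k_fit), _ = curve_fit(li2o_loss, t, loss, p0=[1.0, 0.05])
print(f"plateau ~ {l_max_fit:.2f} wt.%, rate ~ {k_fit:.3f} /min")
```

With real titration data, the fitted plateau would estimate the total Li2O loss and the rate constant would characterise how quickly the loss saturates.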

Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals

Procedia PDF Downloads 104
626 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India

Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn

Abstract:

Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and LPG fuels and relate it to the indoor emissions, measured using a structured questionnaire, a spirometer, and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary disease or any other respiratory disease and had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spiro bank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function in terms of forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/forced vital capacity (FVC) < 70%) as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation, and educational status were considered. Results: Preliminary results showed that the women using biomass fuels had comparatively reduced lung function (FEV1/FVC = 85% ± 5.13) relative to the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM 2.5 mass concentration was 274.34 ± 314.90 in the biomass users’ kitchens and 85.04 ± 97.82 in the LPG users’ kitchens.
The black carbon amount was found to be higher for the biomass users (black carbon = 46.71 ± 46.59 µg/m³) than for the LPG users (black carbon = 11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel like LPG had a minimal amount of particulate matter 2.5, which might be due to background pollution and cross-ventilation from the houses using biomass fuels. Conclusions: There is an urgent need to adopt various strategies to improve indoor air quality. Knowledge of the current state of climate-active pollutant emissions from different stove designs is lacking, and major deficiencies need to be identified and tackled. Moreover, advancement in research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study these emissions and address growing climate change and public health concerns.
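The GOLD criterion used for the secondary outcome (post-bronchodilator FEV1/FVC < 70%) can be sketched as a simple check; the litre values below are hypothetical illustrations, not study data.

```python
def fev1_fvc_ratio(fev1_l, fvc_l):
    """Ratio of forced expiratory volume in 1 s to forced vital capacity."""
    return fev1_l / fvc_l

def gold_copd(post_bd_fev1_l, post_bd_fvc_l):
    """COPD per the criterion stated in the abstract:
    post-bronchodilator FEV1/FVC < 70%."""
    return fev1_fvc_ratio(post_bd_fev1_l, post_bd_fvc_l) < 0.70

# Hypothetical post-bronchodilator spirometry readings in litres:
print(gold_copd(1.8, 3.0))  # ratio 0.60 -> True (airflow obstruction)
print(gold_copd(2.9, 3.4))  # ratio ~0.85 -> False
```

Note that the group means reported above (FEV1/FVC of 85% and 86.40%) are well above the 70% cut-off, which is why the study reports reduced lung function rather than widespread COPD.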

Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter

Procedia PDF Downloads 147
625 Carbohydrate Intake and Physical Activity Levels Modify the Association between FTO Gene Variants and Obesity and Type 2 Diabetes: First Nutrigenetics Study in an Asian Indian Population

Authors: K. S. Vimal, D. Bodhini, K. Ramya, N. Lakshmipriya, R. M. Anjana, V. Sudha, J. A. Lovegrove, V. Mohan, V. Radha

Abstract:

Gene-lifestyle interaction studies have been carried out in various populations. However, to date, there have been no such studies in an Asian Indian population. Hence, we examined whether lifestyle factors such as diet and physical activity modify the association between fat mass and obesity-associated (FTO) gene variants and obesity and type 2 diabetes (T2D) in an Asian Indian population. We studied 734 unrelated T2D and 884 normal glucose-tolerant (NGT) participants randomly selected from the Chennai Urban Rural Epidemiology Study (CURES) in Southern India. Obesity was defined according to the World Health Organization Asia Pacific Guidelines (non-obese, BMI < 25 kg/m2; obese, BMI ≥ 25 kg/m2). Six single nucleotide polymorphisms (SNPs) in the FTO gene (rs9940128, rs7193144, rs8050136, rs918031, rs1588413 and rs11076023) identified from recent genome-wide association studies for T2D were genotyped by polymerase chain reaction-restriction fragment length polymorphism and direct sequencing. Dietary assessment was carried out using a validated food frequency questionnaire, and physical activity was based upon self-report. Interaction analyses were performed by including the interaction terms in the model. A joint likelihood ratio test of the main SNP effects and the SNP-diet/physical activity interaction effects was used in the linear regression analyses to maximize statistical power. Statistical analyses were performed using STATA version 13. There was a significant interaction between FTO SNP rs8050136 and carbohydrate energy percentage (Pinteraction=0.04) on obesity, where the ‘A’ allele carriers of the SNP rs8050136 had 2.46 times higher risk of obesity than those with the ‘CC’ genotype (P=3.0x10-5) among individuals in the highest tertile of carbohydrate energy percentage. Furthermore, among those who had lower levels of physical activity, the ‘A’ allele carriers of the SNP rs8050136 had 1.89 times higher risk of obesity than those with the ‘CC’ genotype (P=4.0x10-5).
We also found a borderline interaction between SNP rs11076023 and carbohydrate energy percentage (Pinteraction=0.08) on T2D, where the ‘A’ allele carriers in the highest tertile of carbohydrate energy percentage had 1.57 times higher risk of T2D than those with the ‘TT’ genotype (P=0.002). There was also a significant interaction between SNP rs11076023 and physical activity (Pinteraction=0.03) on T2D. No further significant interactions between SNPs and macronutrient intake or physical activity on obesity and T2D were observed. In conclusion, this is the first study to provide evidence for a gene-diet and gene-physical activity interaction on obesity and T2D in an Asian Indian population. These findings suggest that the association between FTO gene variants and obesity and T2D is influenced by carbohydrate intake and physical activity levels. A greater understanding of how the FTO gene influences obesity and T2D through dietary and exercise interventions will advance the development of behavioral interventions and personalised lifestyle strategies predicted to reduce the development of metabolic diseases in ‘A’ allele carriers of both SNPs in this Asian Indian population.

Keywords: dietary intake, FTO, obesity, physical activity, type 2 diabetes, Asian Indian

Procedia PDF Downloads 506
624 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%).
The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and from 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
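The reported clinical performance figures follow directly from the stated counts; a minimal sketch reproducing the point estimates (confidence intervals omitted):

```python
def sensitivity(tp, fn):
    """True positive rate: positives correctly flagged by the test."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: negatives correctly cleared by the test."""
    return tn / (tn + fp)

def npv(tn, fn):
    """Negative predictive value: fraction of negative results that are correct."""
    return tn / (tn + fn)

# Counts from the pivotal study: 120 CT-positive subjects, 116 with a
# positive TBI interpretation; 1779 CT-negative subjects, 713 with a
# negative TBI interpretation.
tp, fn = 116, 4
tn, fp = 713, 1779 - 713

print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # 96.7%
print(f"specificity = {specificity(tn, fp):.1%}")   # 40.1%
print(f"NPV         = {npv(tn, fn):.1%}")           # 99.4%
```

The low specificity paired with very high NPV reflects the intended rule-out use: a negative result makes an abnormal head CT unlikely, at the cost of many false-positive flags.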

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 38
623 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions

Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding

Abstract:

By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins, and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. As light acts as the trigger for cleaving the protecting group, the ‘caging’ technique provides a number of advantages: it can be applied intracellularly, rapidly, and in a quantitatively controlled manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons, and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed using electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species. Therefore, common rapid mixing techniques, such as stopped-flow or quench-flow, are not directly suitable. However, the combination of rapid freeze-quench (RFQ) and EPR analysis provides an ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The newly mixed solution is then sprayed into a cryogenic liquid or onto a cryogenic surface, and the frozen sample is collected and packed into an EPR tube for analysis.
The earliest RFQ instruments consisted of a hydraulic ram drive unit with direct spraying of the sample into a cryogenic liquid (nitrogen, isopentane or petroleum). Improvements to the RFQ technique have arisen from the design of new mixers that reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface that is frozen via thermal conductivity with a cryogenic liquid. In our work, we are developing a novel RFQ instrument which combines freeze-quench technology with flashing capabilities to enable studies of both thermally-activated and light-activated biological reactions. This instrument also uses a new rotating-plate design based on magnetic couplings, which removes the need for mechanical motorised rotation, otherwise problematic at cryogenic temperatures.

Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals

Procedia PDF Downloads 190
622 How Whatsappization of the Chatbot Affects User Satisfaction, Trust, and Acceptance in a Drive-Sharing Task

Authors: Nirit Gavish, Rotem Halutz, Liad Neta

Abstract:

Nowadays, chatbots are gaining more and more attention due to the advent of large language models. One of the important considerations in chatbot design is how to create an interface that achieves high user satisfaction, trust, and acceptance. Since WhatsApp conversations sometimes substitute for face-to-face communication, we studied whether WhatsAppization of the chatbot (making the conversation more closely resemble a WhatsApp conversation) will improve user satisfaction, trust, and acceptance, or whether the opposite will occur due to the Uncanny Valley (UV) effect. The task was a drive-sharing task, in which participants communicated with a textual chatbot via WhatsApp and could decide whether to participate in a ride to college with a driver suggested by the chatbot. WhatsAppization of the chatbot was done in two ways: by a dialog-style conversation (Dialog versus No Dialog), and by adding WhatsApp indicators, namely “Last Seen”, “Connected”, “Read Receipts”, and “Typing…” (Indicators versus No Indicators). Our 120 participants were randomly assigned to one of the four groups of this 2 by 2 design, with 30 participants in each. They interacted with the WhatsApp chatbot and then filled out a questionnaire. The results demonstrated that, as expected from the manipulation, the interaction with the chatbot was longer in the Dialog condition than in the No Dialog condition. This extra interaction, however, did not lead to higher acceptance; quite the opposite, since participants in the Dialog condition were less willing to implement the decision made at the end of the conversation with the chatbot and to continue the interaction with the driver they chose. The results are even more striking when considering the Indicators condition. For both the satisfaction measures and the trust measures, participants’ ratings were lower in the Indicators condition than in the No Indicators condition.
Participants in the Indicators condition felt that the ride search process was harder to operate and slower (even though the actual interaction time was similar). They were less convinced that the chatbot suggested real trips, and they placed less trust in the person offering the ride who was referred to them by the chatbot. These effects were more evident for participants who preferred to share their rides using WhatsApp than for participants who preferred chatbots for that purpose. Considering our findings, we can say that the WhatsAppization of the chatbot was detrimental. This is true for both WhatsAppization methods: making the conversation more of a dialog and adding WhatsApp indicators. For the chosen drive-sharing task, the results were, in addition to lower satisfaction, less trust in the chatbot’s suggestion and even in the driver suggested by the chatbot, and lower willingness to actually undertake the suggested ride. In addition, it seems that the most problematic WhatsAppization method was using WhatsApp’s indicators during the interaction with the chatbot. The current study suggests that a conversation with an artificial agent should not imitate a WhatsApp conversation too closely. With the proliferation of WhatsApp use, the emotional and social aspects of face-to-face communication are moving to WhatsApp communication. Based on the current study’s findings, it is possible that the UV effect also occurs in WhatsAppization, and not only in humanization, of the chatbot, with a similar feeling of eeriness, and is more pronounced for people who prefer to use WhatsApp over chatbots. The current research can serve as a starting point for studying the very interesting and important topic of chatbot WhatsAppization. More methods of WhatsAppization and other tasks could be the focus of further studies.

Keywords: chatbot, WhatsApp, humanization, Uncanny Valley, drive sharing

Procedia PDF Downloads 20
621 We Have Never Seen a Dermatologist. Reaching the Unreachable Through Teledermatology

Authors: Innocent Atuhe, Babra Nalwadda, Grace Mulyowa Kitunzi, Annabella Haninka Ejiri

Abstract:

Background: Atopic dermatitis (AD) is one of the most prevalent and growing chronic inflammatory skin diseases in African prisons. AD care is limited in Africa due to a lack of information about the disease amongst primary care workers, limited access to dermatologists, a lack of proper training of healthcare workers, and a shortage of appropriate treatments. We designed and implemented the Prisons Telederma project based on the recommendations of the International Society of Atopic Dermatitis. Our overall goal was to increase access to dermatologist-led care for prisoners with AD through teledermatology in Uganda. We aimed: i) to increase awareness and understanding of teledermatology among prison health workers; and ii) to improve treatment outcomes of prisoners with atopic dermatitis through increased access to and utilization of consultant dermatologists through teledermatology in Ugandan prisons. Approach: We used store-and-forward teledermatology (SAF-TD) to increase access to dermatologist-led care for prisoners and prison staff with AD. We conducted a five-day training for prison health workers using an adapted WHO training guide on recognizing neglected tropical diseases through changes on the skin, together with an adapted American Academy of Dermatology (AAD) Childhood AD Basic Dermatology Curriculum designed to help trainees develop a clinical approach to the evaluation and initial management of patients with AD. This training was followed by blended e-learning, webinars facilitated by consultant dermatologists with local knowledge of medication and local practices, apps adjusted for pigmented skin, WhatsApp group discussions, and the sharing of pigmented-skin AD pictures and treatments via Zoom meetings. We hired a team of Ugandan senior consultant dermatologists to draft an iconographic atlas of the main dermatoses in pigmented African skin and shared this atlas with prison health staff for use as a job aid.
We had planned to use the MySkinSelfie mobile phone application to take and share skin pictures of prisoners with AD with consultant dermatologists, who would review the pictures and prescribe appropriate treatment. Unfortunately, the National Health Service withdrew the app from the market due to technical issues. We monitored and evaluated treatment outcomes using the Patient-Oriented Eczema Measure (POEM) tool. We held four advocacy meetings to persuade relevant stakeholders to increase the supply and availability of first-line AD treatments, such as emollients, in prison health facilities. Results: We produced a draft iconographic atlas of the main dermatoses in pigmented African skin; increased the proportion of prison health staff with adequate knowledge of AD and teledermatology from 20% to 80%; increased the proportion of prisoners with AD reporting improvement in disease severity (POEM scores) from 25% to 35% in one year; increased the proportion of prisoners with AD seen by a consultant dermatologist through teledermatology from 0% to 20% in one year; and increased the availability of AD-recommended treatments in prison health facilities from 5% to 10% in one year.

Keywords: teledermatology, prisoners, reaching, un-reachable

Procedia PDF Downloads 93
620 An Integrated Approach to Cultural Heritage Management in the Indian Context

Authors: T. Lakshmi Priya

Abstract:

With the widening definition of heritage, the challenges of heritage management have become more complex. Today heritage not only includes significant monuments but also comprises historic areas/sites, historic cities, cultural landscapes, and living heritage sites. There is a need for a comprehensive understanding of the values associated with these heritage resources, which will enable their protection and management. These diverse cultural resources are managed by multiple agencies, each having its own way of operating in the heritage sites. An integrated approach to the management of these cultural resources ensures their sustainability for future generations. This paper outlines the importance of an integrated approach to the management and protection of complex heritage sites in India by examining four case studies. The methodology for this study is based on secondary research and primary surveys conducted during the preparation of the conservation management plans for the various sites. The primary surveys included basic documentation, inventorying, and community surveys. Red Fort, located in the city of Delhi, is one of the most significant forts, built in 1639 by the Mughal Emperor Shahjahan. This fort is a national icon and stands testimony to various historical events. It is on the ramparts of Red Fort that the national flag was unfurled on 15th August 1947, when India became independent, a tradition which continues even today. Management of this complex fort necessitated an integrated approach, wherein the needs of both official and non-official stakeholders were addressed. The understanding of the inherent values and significance of this site was arrived at through a systematic methodology of inventorying and mapping of information. Hampi, located in the southern part of India, is a living heritage site inscribed on the World Heritage List in 1986.
The site comprises settlements, built heritage structures, traditional water systems, forests, agricultural fields, and the remains of the metropolis of the 16th-century Vijayanagar empire. As Hampi is a living heritage site with traditional systems of management and practice, the aim has been to include these practices in the current management so that there is continuity in belief, thought, and practice. The existing national, regional, and local planning instruments have been examined, and the local concerns have been addressed. A comprehensive understanding of the site, achieved through an integrated model, is being translated into an action plan which safeguards the inherent values of the site. This paper also examines the case of the 20th-century heritage building of the National Archives of India, Delhi, and the protection of the 12th-century Tomb of Sultan Ghari located in south Delhi. A comprehensive understanding of the site led to the delineation of the Archaeological Park of Sultan Ghari, in the current Master Plan for Delhi, for the protection of the tomb and the settlement around it. Through this study it is concluded that the approach of integrated conservation has enabled decision making that sustains the values of these complex heritage sites in the Indian context.

Keywords: conservation, integrated, management, approach

Procedia PDF Downloads 65
619 Transnational Solidarity and Philippine Society: A Probe on Trafficked Filipinos and Economic Inequality

Authors: Shierwin Agagen Cabunilas

Abstract:

Countless Filipinos are reeling from dire economic inequality while many others are victims of human trafficking. Where there is extreme economic inequality, the majority of Filipinos are deprived of the basic needs for a good life, i.e., decent shelter, a safe environment, food, quality education, social security, etc. The problem of human trafficking poses a scandal and a threat to the human rights and dignity of persons in matters of sex, gender, ethnicity, and race, among others. Economic inequality and trafficking in persons are social pathologies that need a considerable amount of attention and visible solutions at both the national and international levels. However, the Philippine government seems to fall short of its goals to lessen, if not altogether eradicate, the dire fate of many Filipinos. The lack of solidarity among Filipinos seems to further aggravate injustice and create hindrances to economic equity and the protection of Filipinos from syndicated crimes, i.e., human trafficking. Indifference towards the welfare and well-being of the Filipino people traps them in an unending cycle of marginalization and neglect. A transnational solidaristic action in response to these concerns is imperative. The subsequent sections first discuss the notion of solidarity and the motivating factors for collective action. While solidarity has been previously thought of as stemming from and for one’s own community and people, it can be argued to be a value that defies borders. Solidarity bridges peoples of diverse societies and cultures. Although there are limits to international interventions on another’s sovereignty, such as internal political autonomy, transnational solidarity need not be in opposition to solidarity with people suffering injustices. Governments, nations, and institutions can work together in securing justice. Solidarity thus is a positive political action that can best respond to issues of economic, class, racial, and gender injustices.
This is followed by a critical analysis of some data on Philippine economic inequality and human trafficking, linking them to the place of transnational solidaristic arrangements. Here, the present work is interested in the normative aspect of the problem. It begins with the section on economic inequality and subsequently turns to human trafficking. It is argued that transnational solidarity is vital in assisting the Philippine governing bodies and authorities to seriously execute innovative economic policies and developmental programs that are justice- and egalitarian-oriented. Transnational solidarity acts as a corrective measure in the economic practices and activities of the Philippine government. Moreover, it is suggested that mitigating Philippine economic inequality and human trafficking involves (a) a historical analysis of the systems that brought about economic anomalies, (b) renewed and innovative economic policies, (c) mutual trust and relatively high transparency, and (d) a grass-roots and context-based approach. In conclusion, the findings are briefly sketched and integrated in an optimistic view that transnational solidarity is capable of influencing Philippine governing bodies towards socio-economic transformation and the development of the lives of Filipinos.

Keywords: Philippines, Filipino, economic inequality, human trafficking, transnational solidarity

Procedia PDF Downloads 255
618 The Efficacy of Video Education to Improve Treatment or Illness-Related Knowledge in Patients with a Long-Term Physical Health Condition: A Systematic Review

Authors: Megan Glyde, Louise Dye, David Keane, Ed Sutherland

Abstract:

Background: Typically, patient education is provided either verbally, in the form of written material, or with a multimedia-based tool such as videos, CD-ROMs, DVDs, or via the internet. Providing patients with effective educational tools can help to meet their information needs and subsequently empower these patients and allow them to participate in medical decision-making. Video education may have some distinct advantages compared to other modalities. For instance, whilst eHealth is emerging as a promising modality of patient education, an individual’s ability to access, read, and navigate through websites or online modules varies dramatically in relation to health literacy levels. Literacy levels may also limit patients’ ability to understand written education, whereas video education can be watched passively by patients and does not require high literacy skills. Other benefits of video education include that the same information is provided consistently to each patient, it can be a cost-effective method after the initial cost of producing the video, patients can choose to watch the videos by themselves or in the presence of others, and they can pause and re-watch videos to suit their needs. Health information videos are not only viewed by patients in formal educational sessions but are increasingly being viewed on websites such as YouTube. Whilst there is a lot of anecdotal and sometimes misleading information on YouTube, videos from government organisations and professional associations contain trustworthy and high-quality information and could enable YouTube to become a powerful information dissemination platform for patients and carers. This systematic review will examine the efficacy of video education to improve treatment or illness-related knowledge in patients with various long-term conditions, in comparison to other modalities of education.
Methods: Only studies which match the following criteria will be included: participants will have a long-term physical health condition, video education will aim to improve treatment or illness-related knowledge and will be tested in isolation, and the study must be a randomised controlled trial. Knowledge will be the primary outcome measure, with modality preference, anxiety, and behaviour change as secondary measures. The searches have been conducted in the following databases: OVID Medline, OVID PsycInfo, OVID Embase, CENTRAL and ProQuest, and hand searching for relevant published and unpublished studies has also been carried out. Screening and data extraction will be conducted independently by two researchers. Included studies will be assessed for their risk of bias in accordance with Cochrane guidelines, and heterogeneity will also be assessed before deciding whether a meta-analysis is appropriate. Results and Conclusions: An appropriate synthesis of the studies in relation to each outcome measure will be reported, along with the conclusions and implications.
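Heterogeneity assessment of the kind described is commonly based on Cochran's Q and the I² statistic. A minimal sketch follows; the effect sizes and variances are hypothetical, purely to show the calculation.

```python
import numpy as np

def cochrans_q_i2(effects, variances):
    """Cochran's Q and I^2 heterogeneity statistics for a set of study
    effect sizes, using fixed-effect (inverse-variance) weights."""
    e = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * e) / np.sum(w)          # inverse-variance pooled effect
    q = np.sum(w * (e - pooled) ** 2)           # Cochran's Q
    df = len(e) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % of variation
    return q, i2

# Hypothetical standardised mean differences (knowledge gain) and variances:
q, i2 = cochrans_q_i2([0.4, 0.5, 0.45, 0.9], [0.02, 0.03, 0.025, 0.02])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```

A high I² (conventionally above about 50%) would argue against pooling the studies in a meta-analysis, which is the decision point the review describes.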

Keywords: long-term condition, patient education, systematic review, video

Procedia PDF Downloads 93
617 The Concept of Path in Original Buddhism and the Concept of Psychotherapeutic Improvement

Authors: Beth Jacobs

Abstract:

The landmark movement of Western clinical psychology in the 20th century was the development of psychotherapy. The landmark movement of clinical psychology in the 21st century will be the absorption of meditation practices from Buddhist psychology. While millions of people explore meditation and related philosophy, very few are exposed to the materials of original Buddhism on this topic, especially to the Theravadan Abhidharma. The Abhidharma is an intricate system of lists and matrixes that were used to understand and remember the Buddha’s teaching. The Abhidharma delineates the first psychological system of Buddhism: how the mind works in the universe of reality and why meditation training strengthens and purifies the experience of life. Its lists outline the psychology of mental constructions, perception, emotion, and cosmological causation. While the Abhidharma is technical, elaborate, and complex, its essential purpose relates to the central purpose of clinical psychology: to relieve human suffering. Like Western depth psychology, its methodology rests on understanding the underlying processes of consciousness and perception. What clinical psychologists might describe as therapeutic improvement, the Abhidharma delineates as a specific pathway of purified actions of consciousness. This paper discusses the concept of 'path' as presented in aspects of the Theravadan Abhidharma and relates it to current clinical psychological views of therapy outcomes and gains. The core path in Buddhism is the Eight-Fold Path, which is the fourth noble truth and the launching of activity toward liberation. The path is not composed of eight ordinal steps; it is eight-fold and is described as opening the way, not funneling choices. The specific path in the Abhidharma is described in many steps of development of consciousness activities. The path is not something a human moves on, but something that moments of consciousness develop within.
'Cittas' are extensively described in the Abhidharma as the atomic-level units of raw actions of consciousness touching upon an object in a field, and 121 types of cittas are categorized. Cittas are embedded in the mental factors, which could be described as the psychological packaging elements of our experiences of consciousness. Based on these constellations of infinitesimal, linked occurrences of consciousness, cittas are categorized by dimensions of purification. A path is a chain of cittas developing through causes and conditions. There are no selves, no pronouns in the Abhidharma. Instead of me walking a path, this is about a person working with conditions to cultivate a stream of consciousness that is pure, immediate, direct, and generous. The same effort, in very different terms, informs the work of most psychotherapies. Depth psychology seeks to release the bound, unconscious elements of mental process into the clarity of realization. Cognitive and behavioral psychologies work on breaking down automatic thought valuations and actions, changing schemas and interpersonal dynamics. Understanding how the original Buddhist concept of positive human development relates to the clinical psychological concept of therapy weaves together two brilliant systems of thought on the development of human well-being.

Keywords: Abhidharma, Buddhist path, clinical psychology, psychotherapeutic outcome

Procedia PDF Downloads 177