Search results for: simple connected graph
41 Microplastic Concentrations and Fluxes in Urban Compartments: A Systemic Approach at the Scale of the Paris Megacity
Authors: Rachid Dris, Robin Treilles, Max Beaurepaire, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Johnny Gasperi, Bruno Tassin
Abstract:
Microplastic sources and fluxes in urban catchments are still poorly studied. Most often, the approaches taken focus on a single source and only carry out a description of the contamination levels and types (shape, size, polymers). In order to gain an improved knowledge of microplastic inputs at urban scales, estimating and comparing various fluxes is necessary. The Laboratoire Eau, Environnement et Systèmes Urbains (LEESU), the Laboratoire Eau Environnement (LEE) and the SIAAP (Service public de l’assainissement francilien) initiated several projects to investigate different urban sources and flows of microplastics. A systemic approach is undertaken at the scale of the Paris Megacity, and several compartments are considered, including atmospheric fallout, wastewater treatment plants, runoff and combined sewer overflows. These investigations are carried out within the Limnoplast and OPUR projects. Atmospheric fallout was sampled during consecutive periods ranging from 2 to 3 weeks with a stainless-steel funnel. Both wet and dry periods were considered. Different treatment steps were sampled in 2 wastewater treatment plants (Seine-Amont for activated sludge and Seine-Centre for biofiltration) of the SIAAP, including sludge samples. Microplastics were also investigated in combined sewer overflows as well as in stormwater at the outlet of a suburban catchment (Sucy-en-Brie, France) during four rain events. Samples are treated using hydrogen peroxide digestion (30% H₂O₂) in order to reduce organic material. Microplastics are then extracted from the samples with a density separation step using NaI (d = 1.6 g.cm⁻³). Samples are filtered on metallic filters with a porosity of 14 µm between steps to separate them from the solutions (H₂O₂ and NaI). The last filtration was carried out on alumina filters. Infrared mapping analysis (using a micro-FTIR with an MCT detector) is performed on each alumina filter. The resulting maps are analyzed using the microplastic analysis software siMPle, developed by Aalborg University, Denmark and the Alfred Wegener Institute, Germany. Blanks were systematically carried out to account for sample contamination. This presentation aims at synthesizing the data found in the various projects. In order to carry out a systemic approach and compare the various inputs, all the data were converted into annual microplastic fluxes (number of microplastics per year) and extrapolated to the Parisian agglomeration. PP, PE and alkyd are the most prevalent polymers found in stormwater samples. Rain intensity and microplastic concentrations did not show any clear correlation. Considering the runoff volumes and the impervious surface area of the studied catchment, a flux of 4×10⁷–9×10⁷ MPs.yr⁻¹.ha⁻¹ was estimated. Samples of wastewater treatment plants and atmospheric fallout are currently being analyzed in order to finalize this assessment. The representativeness of such samplings and the uncertainties related to the extrapolations will be discussed, and gaps in knowledge will be identified. The data provided by such an approach will help to prioritize future research as well as policy efforts.
Keywords: microplastics, atmosphere, wastewater, urban runoff, Paris megacity, urban waters
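As a rough illustration of the flux-conversion step described above, the sketch below turns event-mean stormwater concentrations and an assumed annual runoff volume into an annual areal flux. All numbers are hypothetical placeholders chosen only to show the arithmetic, not values from the study.

```python
# Illustrative sketch of the stormwater flux extrapolation: event-mean
# microplastic concentrations are combined with an assumed annual runoff
# volume per impervious hectare. All values are hypothetical placeholders.
event_concentrations_mp_per_m3 = [12e3, 25e3, 8e3, 18e3]   # four sampled rain events
annual_runoff_m3_per_ha = 3500.0                           # assumed runoff per impervious hectare

mean_conc = sum(event_concentrations_mp_per_m3) / len(event_concentrations_mp_per_m3)
annual_flux = mean_conc * annual_runoff_m3_per_ha          # microplastics per year per hectare

print(f"Estimated stormwater flux: {annual_flux:.1e} MPs yr^-1 ha^-1")
```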
40 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation
Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy
Abstract:
The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation which allows the presentation of large and intricate datasets in a simple map-interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment has numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and is liable to bias towards the discipline and expertise that is involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites, the need for an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex with a series of incised, potentially active, canyons carved into a steep escarpment, with evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly-placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment but the vulnerability of periodic failure of these spurs is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and of course the gas export pipeline operator guided the analyses and assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis
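A minimal sketch of the least-cost-routing idea described in this abstract is given below, assuming weighted geocost rasters can be combined by simple summation; the layer names, weights and grid are illustrative rather than the study's data, and scikit-image's route_through_array stands in for the GIS routing tool.

```python
# Sketch of least-cost pipeline routing: weighted geocost layers are combined
# into a composite raster and a least-cost path is traced between terminals.
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(0)
slope = rng.random((200, 300))      # normalised slope-angle geocost (placeholder)
rugosity = rng.random((200, 300))   # normalised rugosity geocost (placeholder)
debris = rng.random((200, 300))     # normalised debris-flow vulnerability geocost (placeholder)

weights = {"slope": 0.5, "rugosity": 0.2, "debris": 0.3}    # assumed relative weights
composite = (weights["slope"] * slope + weights["rugosity"] * rugosity
             + weights["debris"] * debris) + 1e-6           # keep costs strictly positive

start, end = (10, 5), (190, 295)                            # terminal cells (row, col)
path, total_cost = route_through_array(composite, start, end,
                                       fully_connected=True, geometric=True)
print(f"Route of {len(path)} cells with composite geocost {total_cost:.1f}")
```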
39 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution
Authors: Abderrazak Bannari
Abstract:
Acid Mine Drainage (AMD) from mine wastes and contaminations of soils and water with metals are considered as a major environmental problem in mining areas. It is produced by interactions of water, air, and sulphidic mine wastes. This environment problem results from a series of chemical and biochemical oxidation reactions of sulfide minerals e.g. pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings waste rock piles, and open pits. Soil and aquatic ecosystems could be contaminated and, consequently, human health and wildlife will be affected. Furthermore, secondary minerals, typically formed during weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxide (goethite, etc.) and hydrated iron sulfate (jarosite, etc.). The objectives of this study focus on the detection and mapping of MIOP in the soil using Hyperion EO-1 (Earth Observing - 1) hyperspectral data and constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech city (Morocco) was chosen as study area. During 44 years (from 1938 to 1981) this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that Kettara surrounding soils are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes have been resampled and located using accurate GPS ( ≤ ± 30 cm). Then, endmembers spectra were acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm. Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and the band centers of the Hyperion sensor. Moreover, the MIOP content in each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple Kriging in GIS environment for validation purposes. The acquired and used Hyperion data were corrected for a spatial shift between the VNIR and SWIR detectors, striping, dead column, noise, and gain and offset errors. Then, atmospherically corrected using the MODTRAN 4.2 radiative transfer code, and transformed to surface reflectance, corrected for sensor smile (1-3 nm shift in VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA considering the entire spectral range (427-2355 nm), and validated by reference to the ground truth map generated by Kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing
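For readers unfamiliar with constrained linear spectral mixture analysis, the sketch below shows one common way to impose the non-negativity and sum-to-one constraints, via an augmented non-negative least squares system; the endmember spectra and pixel spectrum are random placeholders, and the study's own CLSMA implementation may differ.

```python
# Sketch of fully constrained linear unmixing: abundances are non-negative and
# (approximately) sum to one through a heavily weighted extra equation.
import numpy as np
from scipy.optimize import nnls

n_bands, n_endmembers = 196, 4                      # e.g. usable Hyperion bands, 4 endmembers
rng = np.random.default_rng(1)
E = rng.random((n_bands, n_endmembers))             # endmember spectra as columns (placeholder)
true_f = np.array([0.6, 0.25, 0.1, 0.05])
pixel = E @ true_f + 0.001 * rng.standard_normal(n_bands)

delta = 100.0                                       # weight enforcing the sum-to-one constraint
E_aug = np.vstack([E, delta * np.ones((1, n_endmembers))])
pixel_aug = np.append(pixel, delta)

fractions, _ = nnls(E_aug, pixel_aug)               # non-negative least squares
print("Estimated abundance fractions:", np.round(fractions, 3))
```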
38 Impact of Marangoni Stress and Mobile Surface Charge on Electrokinetics of Ionic Liquids Over Hydrophobic Surfaces
Authors: Somnath Bhattacharyya
Abstract:
The mobile adsorbed surface charge on hydrophobic surfaces can modify the velocity slip condition as well as create a Marangoni stress at the interface. The functionalized hydrophobic walls of micro/nanopores, e.g., graphene nanochannels, may possess physisorbed ions. The lateral mobility of these physisorbed ions creates a friction force as well as an electric force, leading to a modification in the velocity slip condition at the hydrophobic surface. In addition, the non-uniform distribution of these surface ions creates a surface tension gradient, leading to a Marangoni stress. The impact of the mobile surface charge on streaming potential and electrochemical energy conversion efficiency in a pressure-driven flow of ionized liquid through the nanopore is addressed. Enhanced electro-osmotic flow through the hydrophobic nanochannel is also analyzed. The mean-field electrokinetic model is modified to take into account the short-range non-electrostatic steric interactions and the long-range Coulomb correlations. The steric interaction is modeled by considering the ions as charged hard spheres of finite radius suspended in the electrolyte medium. The electrochemical potential is modified by including the volume exclusion effect, which is modeled based on the BMCSL equation of state. The electrostatic correlation is accounted for in the ionic self-energy. The extremal of the self-energy leads to a fourth-order Poisson equation for the electric field. The ion transport is governed by the modified Nernst-Planck equation, which includes the ion steric interactions, the Born force arising from the spatial variation of the dielectric permittivity, and the dielectrophoretic force on the hydrated ions. This ion transport equation is coupled with the Navier-Stokes equation describing the flow of the ionized fluid and the fourth-order Poisson equation for the electric field. We numerically solve the coupled set of nonlinear governing equations along with the prescribed boundary conditions by adopting a control volume approach over a staggered grid arrangement. In the staggered grid arrangement, velocity components are stored on the midpoints of the cell faces to which they are normal, whereas the remaining scalar variables are stored at the center of each cell. The convection and electromigration terms are discretized at each interface of the control volumes using the total variation diminishing (TVD) approach to capture the strong convection resulting from the highly enhanced fluid flow due to the modified model. In order to link pressure to the continuity equation, we adopt a pressure correction-based iterative SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, in which the discretized continuity equation is converted to a Poisson equation involving pressure correction terms. Our results show that the physisorbed ions on a hydrophobic surface create an enhanced slip velocity under streaming-potential conditions, which enhances the convection current. However, the electroosmotic flow attenuates due to the mobile surface ions.
Keywords: microfluidics, electroosmosis, streaming potential, electrostatic correlation, finite sized ions
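The full coupled solver described above is far beyond a short listing, but the sketch below illustrates only the flux-limiting (TVD) idea on a 1-D scalar advection problem with a minmod limiter; it is a toy analogue of the convection/electromigration discretization, with all parameters assumed.

```python
# Toy 1-D finite-volume advection with a minmod-limited (TVD-style) upwind flux,
# illustrating the flux limiting used for convection/electromigration terms.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)               # square pulse initial condition

for _ in range(200):                                         # periodic boundaries via np.roll
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)    # limited slope per cell
    u_face = u + 0.5 * slope                                 # reconstructed value at right face
    flux = a * u_face                                        # upwind flux for a > 0
    u = u - dt / dx * (flux - np.roll(flux, 1))              # conservative update
```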
37 Monte Carlo Risk Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899 when Walter Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP cycle drew a lot of attention because of its ability to capture ~100% CO2, a cost reduction of about 30-50% compared to other carbon abatement technologies, an efficiency penalty that is not as severe as that of its counterparts, and almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in the sense that its combustor is substituted with the mixed conductive membrane (MCM) reactor. The MCM reactor is made up of the combustor, the low temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle model was developed in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. This paper discusses a techno-economic analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout – AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine – AZEP 85% (85% CO2 capture).
This paper also presents a Monte Carlo risk analysis of these four AZEP cycle layouts.
Keywords: gas turbine, global warming, greenhouse gases, power plants
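A hedged sketch of what a Monte Carlo techno-economic risk analysis can look like is shown below: uncertain inputs are sampled and a simple cost metric is evaluated per draw. The distributions, parameter values and cost model are purely illustrative and are not the AZEP figures from the study.

```python
# Sketch of Monte Carlo risk analysis: sample uncertain techno-economic inputs
# and summarise the resulting spread of a levelised-cost-style metric.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
fuel_price = rng.normal(6.0, 1.0, n)         # $/GJ, assumed distribution
efficiency = rng.normal(0.46, 0.02, n)       # net cycle efficiency with capture, assumed
capex_annuity = rng.normal(120.0, 15.0, n)   # $/kW-yr annualised capital cost, assumed

hours = 8000.0                               # assumed operating hours per year
fuel_cost = fuel_price * 3.6 / efficiency    # $/MWh (3.6 GJ of fuel energy per MWh)
lcoe = fuel_cost + capex_annuity * 1000.0 / hours    # rough $/MWh metric

print(f"Median {np.median(lcoe):.1f} $/MWh, "
      f"5th-95th percentile {np.percentile(lcoe, 5):.1f}-{np.percentile(lcoe, 95):.1f} $/MWh")
```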
36 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent-Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron, the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for both sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
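The study itself uses the Bindsnet framework; the standalone NumPy sketch below only illustrates the core idea of pairwise synaptic relations, where the impact of an incoming spike is modulated by the recent activity of the other synapses. All constants and the relation matrix are assumed for illustration.

```python
# Sketch of a LIF neuron whose synapses interact: each spike's contribution is
# scaled by the recent activity (traces) of the other synapses through R.
import numpy as np

rng = np.random.default_rng(0)
n_in, T, dt = 5, 100, 1.0
tau_m, tau_trace, v_reset, v_thresh = 20.0, 10.0, 0.0, 1.0

w = np.full(n_in, 0.3)                    # synaptic weights
R = rng.normal(0.0, 0.2, (n_in, n_in))    # pairwise synaptic relations (could be learned)
np.fill_diagonal(R, 0.0)

spikes = rng.random((T, n_in)) < 0.05     # random input spike trains
trace = np.zeros(n_in)                    # decaying record of recent pre-synaptic activity
v = 0.0

for t in range(T):
    trace += -trace / tau_trace * dt + spikes[t]
    modulation = np.clip(1.0 + R @ trace, 0.0, None)   # potentiate/depress by neighbours
    v += (-v / tau_m) * dt + np.sum(w * modulation * spikes[t])
    if v >= v_thresh:
        v = v_reset                        # spike and reset
```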
35 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based – lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one, or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible interfaces. Miscible, liquid-liquid interfaces are common in microfluidic devices, and are easily reproduced with simple geometries. Here, we demonstrate the use of electrical fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis, (fDEP) for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas with the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
34 Influence of Atmospheric Pollutants on Child Respiratory Disease in Cartagena De Indias, Colombia
Authors: Jose A. Alvarez Aldegunde, Adrian Fernandez Sanchez, Matthew D. Menden, Bernardo Vila Rodriguez
Abstract:
Up to five statistical pre-processings have been carried out considering the pollutant records of the stations present in Cartagena de Indias, Colombia, also taking into account the childhood asthma incidence surveys conducted in hospitals in the city by the Health Ministry of Colombia for this study. These pre-processings have consisted of different techniques such as the determination of the quality of data collection, determination of the quality of the registration network, identification and debugging of errors in data collection, completion of missing data and purified data, as well as the improvement of the time scale of records. The characterization of the quality of the data has been conducted by means of density analysis of the pollutant registration stations using ArcGis Software and through mass balance techniques, making it possible to determine inconsistencies in the records relating the registration data between stations following the linear regression. The results obtained in this process have highlighted the positive quality in the pollutant registration process. Consequently, debugging of errors has allowed us to identify certain data as statistically non-significant in the incidence and series of contamination. This data, together with certain missing records in the series recorded by the measuring stations, have been completed by statistical imputation equations. Following the application of these prior processes, the basic series of incidence data for respiratory disease and pollutant records have allowed the characterization of the influence of pollutants on respiratory diseases such as, for example, childhood asthma. This characterization has been carried out using statistical correlation methods, including visual correlation, simple linear regression correlation and spectral analysis with PAST Software which identifies maximum periodicity cycles and minimums under the formula of the Lomb periodgram. In relation to part of the results obtained, up to eleven maximums and minimums considered contemporary between the incidence records and the particles have been identified taking into account the visual comparison. The spectral analyses that have been performed on the incidence and the PM2.5 have returned a series of similar maximum periods in both registers, which are at a maximum during a period of one year and another every 25 days (0.9 and 0.07 years). The bivariate analysis has managed to characterize the variable "Daily Vehicular Flow" in the ninth position of importance of a total of 55 variables. However, the statistical correlation has not obtained a favorable result, having obtained a low value of the R2 coefficient. The series of analyses conducted has demonstrated the importance of the influence of pollutants such as PM2.5 in the development of childhood asthma in Cartagena. The quantification of the influence of the variables has been able to determine that there is a 56% probability of dependence between PM2.5 and childhood respiratory asthma in Cartagena. Considering this justification, the study could be completed through the application of the BenMap Software, throwing a series of spatial results of interpolated values of the pollutant contamination records that exceeded the established legal limits (represented by homogeneous units up to the neighborhood level) and results of the impact on the exacerbation of pediatric asthma. 
As a final result, an economic estimate (in Colombian pesos) of the monthly and individual savings derived from the percentage reduction of the influence of pollutants on visits to the hospital emergency room due to asthma exacerbation in pediatric patients has been produced.
Keywords: asthma incidence, BenMap, PM2.5, statistical analysis
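The abstract's spectral step was run in the PAST package; purely as an illustration of the same idea, the sketch below applies a Lomb-Scargle periodogram to a synthetic, unevenly sampled series containing an annual and a roughly 25-day cycle. The data and period grid are assumptions, not the study's records.

```python
# Sketch of Lomb-Scargle period detection on an unevenly sampled series.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.choice(np.arange(3 * 365), size=600, replace=False)).astype(float)
series = (np.sin(2 * np.pi * t / 365.0)           # annual cycle
          + 0.5 * np.sin(2 * np.pi * t / 25.0)    # ~25-day cycle
          + 0.3 * rng.standard_normal(t.size))    # noise

periods = np.linspace(5.0, 400.0, 2000)           # candidate periods in days
power = lombscargle(t, series - series.mean(), 2 * np.pi / periods, normalize=True)

print(f"Dominant period: {periods[np.argmax(power)]:.0f} days")
short = periods < 60.0
print(f"Dominant short period: {periods[short][np.argmax(power[short])]:.0f} days")
```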
33 An Action Toolkit for Health Care Services Driving Disability Inclusion in Universal Health Coverage
Authors: Jill Hanass-Hancock, Bradley Carpenter, Samantha Willan, Kristin Dunkle
Abstract:
Access to quality health care for persons with disabilities is the litmus test in our strive toward universal health coverage. Persons with disabilities experience a variety of health disparities related to increased health risks, greater socioeconomic challenges, and persistent ableism in the provision of health care. In low- and middle-income countries, the support needed to address the diverse needs of persons with disabilities and close the gaps in inclusive and accessible health care can appear overwhelming to staff with little knowledge and tools available. An action-orientated disability inclusion toolkit for health facilities was developed through consensus-building consultations and field testing in South Africa. The co-creation of the toolkit followed a bottom-up approach with healthcare staff and persons with disabilities in two developmental cycles. In cycle one, a disability facility assessment tool was developed to increase awareness of disability accessibility and service delivery gaps in primary healthcare services in a simple and action-orientated way. In cycle two, an intervention menu was created, enabling staff to respond to identified gaps and improve accessibility and inclusion. Each cycle followed five distinct steps of development: a review of needs and existing tools, design of the draft tool, consensus discussion to adapt the tool, pilot-testing and adaptation of the tool, and identification of the next steps. The continued consultations, adaptations, and field-testing allowed the team to discuss and test several adaptations while co-creating a meaningful and feasible toolkit with healthcare staff and persons with disabilities. This approach led to a simplified tool design with ‘key elements’ needed to achieve universal health coverage: universal design of health facilities, reasonable accommodation, health care worker training, and care pathway linkages. The toolkit was adapted for paper or digital data entry, produces automated, instant facility reports, and has easy-to-use training guides and online modules. The cyclic approach enabled the team to respond to emerging needs. The pilot testing of the facility assessment tool revealed that healthcare workers took significant actions to change their facilities after an assessment. However, staff needed information on how to improve disability accessibility and inclusion, where to acquire accredited training, and how to improve disability data collection, referrals, and follow-up. Hence, intervention options were needed for each ‘key element’. In consultation with representatives from the health and disability sectors, tangible and feasible solutions/interventions were identified. This process included the development of immediate/low-cost and long-term solutions. The approach gained buy-in from both sectors, who called for including the toolkit in the standard quality assessments for South Africa’s health care services. Furthermore, the process identified tangible solutions for each ‘key element’ and highlighted where research and development are urgently needed. The cyclic and consultative approach enabled the development of a feasible facility assessment tool and a complementary intervention menu, moving facilities toward universal health coverage for and persons with disabilities in low- or better-resourced contexts while identifying gaps in the availability of interventions.Keywords: public health, disability, accessibility, inclusive health care, universal health coverage
32 City on Fire: An Ethnography of Play and Politics in Johannesburg Nightclubs
Authors: Beth Vale
Abstract:
Academic research has often neglected the city after dark. Surprisingly little consideration has been given to the every night life of cities: the spatial tactics and creative insurgencies of urban residents when night falls. The focus on ‘pleasure’ in the nocturnal city has often negated the subtle politics of night-time play, embedded in expressions of identity, attachment and resistance. This paper investigates Johannesburg nightclubs as sites of quotidian political labour, through which young people contest social space and their place in it, thereby contributing to the city’s effective and socio-political cartography. The tactical remodelling of the nocturnal city through nightclubbing traces lines of desire (material, emotional, sexual), affiliation, and fear. These in turn map onto young people’s expressions of their social and political identities, as well as their attempts at place-making in a ‘post-apartheid’ context. By examining the micro-politics of the cities' nightclubs, this paper speaks back to an earlier post-94 literature, which regularly characterised Johannesburg youth as superficial, individualist and idealistic. Similarly, some might position nightclubs as sites of frivolous consumption or liberatory permissiveness. Yet because nightclub spaces are racialised, classed and gendered, historically-signified and socially regulated, they are also profoundly political. Through ordinary encounters on the cities' dancefloors, young Jo’burgers are imagining, contesting and negotiating their socio-political identities and indeed their claims to the city. Meanwhile, the politics of this generation of youth, who are increasingly critical of the utopian post-apartheid city, are being increasingly inserted and coopted into night-time cultures. Data for this study was gathered through five months of ethnographic fieldwork in Johannesburg nightclubs, including over 120 hours of participant observation and in-depth interviews with organisers and partygoers. Interviewees recognised that parties, rather than being simple frivolity, are a cacophony of celebration, mourning, worship, rage, rebellion and attachment. Countering standard associations between partying and escapism, party planners, venue owners and nightclub audiences were infusing night-time infrastructures with the aesthetics of politics and protest. Not unlike parties, local political assemblies so often rely on music, dance, the occupation of space, and a heaving crowd. References to social movements, militancy and anti-establishment emerged in nightclub themes, dress codes and décor. Metaphors of fire crossed over between party and protest, both of which could be described as having ‘been lit’ or having ‘brought flames’. More so, young people’s articulations of the city’s night-time geography, and their place in it, reflected articulations of race, class and ideological affiliation. The location, entrance fees and stylistic choices of one’s chosen club destination demarcated who was welcome, while also signalling membership to a particular politics (whether progressive or materialistic, inclusive or elitist, mainstream or counter-culture). Because of their ability to divide and unite, aggravate and titillate, mask and reveal, club cultures might offer a mirror to the complex socialities of a generation of Jo’burg youth, as they inhabit, and bring into being, a contemporary South African city.Keywords: affect, Johannesburg, nightclub, nocturnal city, politics
31 Towards Achieving Total Decent Work: Occupational Safety and Health Issues, Problems and Concerns of Filipino Domestic Workers
Authors: Ronahlee Asuncion
Abstract:
The nature of their work and employment relationship make domestic workers easy prey to abuse, maltreatment, and exploitation. Considering their plight, this research was conceptualized and examined the: a) level of awareness of Filipino domestic workers on occupational safety and health (OSH); b) their issues/problems/concerns on OSH; c) their intervention strategies at work to address OSH related issues/problems/concerns; d) issues/problems/concerns of government, employers, and non-government organizations with regard to implementation of OSH to Filipino domestic workers; e) the role of government, employers and non-government organizations to help Filipino domestic workers address OSH related issues/problems/concerns; and f) the necessary policy amendments/initiatives/programs to address OSH related issues/problems/concerns of Filipino domestic workers. The study conducted a survey using non-probability sampling, two focus group discussions, two group interviews, and fourteen face-to-face interviews. These were further supplemented with an email correspondence to a key informant based in another country. Books, journals, magazines, and relevant websites further substantiated and enriched data of the research. Findings of the study point to the fact that domestic workers have low level of awareness on OSH because of poor information drive, fragmented implementation of the Domestic Workers Act, inactive campaign at the barangay level, weakened advocacy for domestic workers, absence of law on OSH for domestic workers, and generally low safety culture in the country among others. Filipino domestic workers suffer from insufficient rest, long hours of work, heavy workload, occupational stress, poor accommodation, insufficient hours of sleep, deprivation of day off, accidents and injuries such as cuts, burns, slipping, stumbling, electrical grounding, and fire, verbal, physical and sexual abuses, lack of medical assistance, none provision of personal protective equipment (PPE), absence of knowledge on the proper way of lifting, working at heights, and insufficient food provision. They also suffer from psychological problems because of separation from one’s family, limited mobility in the household where they work, injuries and accidents from using advanced home appliances and taking care of pets, low self-esteem, ergonomic problems, the need to adjust to all household members who have various needs and demands, inability to voice their complaints, drudgery of work, and emotional stress. With regard to illness or health problems, they commonly experience leg pains, back pains, and headaches. In the absence of intervention programs like those offered in the formal employment set up, domestic workers resort to praying, turn to family, relatives and friends for social and emotional support, connect with them through social media like Facebook which also serve as a means of entertainment to them, talk to their employer, and just try to be optimistic about their situation. Promoting OSH for domestic workers is very challenging and complicated because of interrelated factors such as cultural, knowledge, attitudinal, relational, social, resource, economic, political, institutional and legal problems. This complexity necessitates using a holistic and integrated approach as this is not a problem requiring simple solutions. 
With this recognition comes the full understanding that its success involves the action and cooperation of all duty bearers in attaining decent work for domestic workers.
Keywords: decent work, Filipino domestic workers, occupational safety and health, working conditions
30 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integrodifferential equation of radiative transfer is a complex process, more when the effect of participating medium and wavelength properties are taken into consideration. Although a generic formulation of such radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique, to solve radiative transfer problems in complicated geometries with arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation, and on the other hand, increases the computational cost. The participating media -generally a gas, such as CO₂, CO, and H₂O- present complex emission and absorption spectra. To model the emission/absorption accurately with random numbers requires a weighted sampling as different sections of the spectrum carries different importance. Importance sampling (IS) was implemented to sample random photon of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement to uniform random numbers is using deterministic, quasi-random sequences. Halton, Sobol, and Faure Low-Discrepancy Sequences are used in this study. They possess better space-filling performance than the uniform random number generator and gives rise to a low variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN), back-propagation model. The flux was calculated using the standard quasi PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical and PMC model with the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well a faster rate of convergence was observed in the case of the QMC method over the standard PMC method. However, the results obtained with the ANN method resulted in greater variance (around 25-28%) as compared to the other cases. There is a great scope of machine learning models to help in further reduction of computation cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried such that the concerned environment can be fully addressed to the ANN model. Better results can be achieved in this unexplored domain.Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
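As a small, hedged illustration of why low-discrepancy sequences help, the snippet below compares a pseudo-random and a scrambled Sobol estimate of a simple 1-D integral standing in for the spectral/photon sampling discussed above; the integrand is a toy function, not a real emission spectrum.

```python
# Plain Monte Carlo vs Sobol quasi-Monte Carlo for a toy 1-D integral.
import numpy as np
from scipy.stats import qmc

def f(x):
    return np.exp(-3.0 * x) * np.sin(8.0 * np.pi * x) ** 2   # toy "spectrum"

n = 2 ** 12
x_mc = np.random.default_rng(7).random(n)                        # pseudo-random samples
x_qmc = qmc.Sobol(d=1, scramble=True, seed=7).random(n).ravel()  # low-discrepancy samples

print("Plain MC estimate :", f(x_mc).mean())
print("Sobol QMC estimate:", f(x_qmc).mean())
```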
29 Septic Pulmonary Emboli as a Complication of Peripheral Venous Cannula Insertion
Authors: Ankita Baidya, Vanishri Ganakumar, Ranveer S. Jadon, Piyush Ranjan, Rita Sood
Abstract:
Septic embolism can have varied presentations and clinical considerations. Infected central venous catheters are commonly associated with septic emboli but peripheral vascular catheters are rarely implicated. We describe a rare case of septic pulmonary emboli related to infected peripheral venous cannulation caused by an unusual etiological agent. A young male presented with complaints of fever, productive cough, sudden onset shortness of breath and cellulitis in both the upper limbs. He was recently hospitalised for dengue fever and administered intravenous fluids through peripheral venous line. The patient was febrile, tachypneic and in respiratory distress, there were multiple pus filled bullae in left hand alongwith swelling and erythema involving right forearm that started at the site of cannulation. Chest examination showed active accessory muscles of respiration, stony dull percussion at the base of right lung and decreased breath sounds at right infrascapular, infraaxillary and mammary area. Other system examination was within normal limits. Chest X-ray revealed bilateral multiple patchy heterogenous peripheral opacities and infiltrates with right-sided pleural effusion. Contrast-enhanced computed tomography (CECT) chest showed feeding vessel sign confirming the diagnosis as septic emboli. Venous Doppler and 2D-echocardiogarm were normal. Laboratory findings showed marked leucocytosis (22000/mm3). Pus aspirate, blood sample, and sputum sample were sent for microbiological testing. The patient was started empirically on ceftriaxone, vancomycin, and clindamycin. The Pus culture and sputum culture showed Klebsiella pneumoniae sensitive to cefoperazone-sulbactum, piperacillin-tazobactum, meropenem and amikacin. The antibiotics were modified accordingly to antimicrobial sensitivity profile to Cefoperazone-sulbactum. Bronchoalveolar lavage (BAL) was done and sent for microbiological investigations. BAL culture showed Klebsiella pneumoniae with same antimicrobial resistance profile. On day 6 of starting cefoperazone-sulbactum, he became afebrile. The skin lesions improved significantly. He was administered 2 weeks of cefoperazone–sulbactum and discharged on oral faropenem for 4 weeks. At the time of discharge, TLC was 11200/mm3 with marked radiological resolution of infection and healed skin lesions. He was kept in regular follow up. Chest X-ray and skin lesions showed complete resolution after 8 weeks. Till date, only couple of case reports of septic emboli through peripheral intravenous line have been reported in English literature. This case highlights that a simple procedure of peripheral intravenous cannulation can lead to catastrophic complication of septic pulmonary emboli and widespread cellulitis if not done with proper care and precautions. Also, the usual pathogens in such clinical settings are gram positive bacteria, but with the history of recent hospitalization, empirical therapy should also cover drug resistant gram negative microorganisms. It also emphasise the importance of appropriate healthcare practices to be taken care during all procedures.Keywords: antibiotics, cannula, Klebsiella pneumoniae, septic emboli
28 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings
Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch
Abstract:
It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research works focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA can not be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE-calculation was calibrated to multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation are detected by reducing the initial strength after failure due to shear or tensile stress. As a result, shear forces can only be transmitted to a limited extent by friction when the cracking begins. The tensile strength is reduced to zero. The first goal of the calibration was the consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack image and the plastic strain activities. Again the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code-recommendations and simplified PHFA distributions are possible. The chosen methodology offers a chance to determine the distribution of PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry
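The post-processing step behind a PHFA profile can be summarised in a few lines; the sketch below assumes floor acceleration histories are already available for a set of ground motions (here random placeholders rather than FE results) and extracts percentile profiles along the normalised height.

```python
# Sketch of extracting peak horizontal floor accelerations (PHFA) and their
# distribution along the normalised building height.
import numpy as np

n_motions, n_floors, n_steps = 18, 6, 4000
rng = np.random.default_rng(11)
floor_acc = 1.5 * rng.standard_normal((n_motions, n_floors, n_steps))   # m/s^2, placeholder

phfa = np.abs(floor_acc).max(axis=2)               # peak |a| per motion and floor
height = np.linspace(0.0, 1.0, n_floors)           # normalised floor heights
print("Normalised floor heights:", np.round(height, 2))

for q in (16, 50, 84):
    print(f"{q}th percentile PHFA profile:", np.round(np.percentile(phfa, q, axis=0), 2))
```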
27 Braille Lab: A New Design Approach for Social Entrepreneurship and Innovation in Assistive Tools for the Visually Impaired
Authors: Claudio Loconsole, Daniele Leonardis, Antonio Brunetti, Gianpaolo Francesco Trotta, Nicholas Caporusso, Vitoantonio Bevilacqua
Abstract:
Unfortunately, many people still do not have access to communication, with specific regard to reading and writing. Among them, people who are blind or visually impaired, have several difficulties in getting access to the world, compared to the sighted. Indeed, despite technology advancement and cost reduction, nowadays assistive devices are still expensive such as Braille-based input/output systems which enable reading and writing texts (e.g., personal notes, documents). As a consequence, assistive technology affordability is fundamental in supporting the visually impaired in communication, learning, and social inclusion. This, in turn, has serious consequences in terms of equal access to opportunities, freedom of expression, and actual and independent participation to a society designed for the sighted. Moreover, the visually impaired experience difficulties in recognizing objects and interacting with devices in any activities of daily living. It is not a case that Braille indications are commonly reported only on medicine boxes and elevator keypads. Several software applications for the automatic translation of written text into speech (e.g., Text-To-Speech - TTS) enable reading pieces of documents. However, apart from simple tasks, in many circumstances TTS software is not suitable for understanding very complicated pieces of text requiring to dwell more on specific portions (e.g., mathematical formulas or Greek text). In addition, the experience of reading\writing text is completely different both in terms of engagement, and from an educational perspective. Statistics on the employment rate of blind people show that learning to read and write provides the visually impaired with up to 80% more opportunities of finding a job. Especially in higher educational levels, where the ability to digest very complex text is key, accessibility and availability of Braille plays a fundamental role in reducing drop-out rate of the visually impaired, thus affecting the effectiveness of the constitutional right to get access to education. In this context, the Braille Lab project aims at overcoming these social needs by including affordability in designing and developing assistive tools for visually impaired people. In detail, our awarded project focuses on a technology innovation of the operation principle of existing assistive tools for the visually impaired leaving the Human-Machine Interface unchanged. This can result in a significant reduction of the production costs and consequently of tool selling prices, thus representing an important opportunity for social entrepreneurship. The first two assistive tools designed within the Braille Lab project following the proposed approach aims to provide the possibility to personally print documents and handouts and to read texts written in Braille using refreshable Braille display, respectively. The former, named ‘Braille Cartridge’, represents an alternative solution for printing in Braille and consists in the realization of an electronic-controlled dispenser printing (cartridge) which can be integrated within traditional ink-jet printers, in order to leverage the efficiency and cost of the device mechanical structure which are already being used. 
The latter, named ‘Braille Cursor’, is an innovative Braille display whose key technological innovation is a unique cursor that virtualizes Braille cells, thus limiting the number of active pins needed to render Braille characters.
Keywords: human rights, social challenges and technology innovations, visually impaired, affordability, assistive tools
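Any driver for a refreshable display or Braille embosser ultimately needs a character-to-cell mapping; the sketch below shows such a mapping for a handful of letters, rendered via the Unicode Braille Patterns block (base U+2800, where dot n sets bit n-1). It is a generic illustration, not the Braille Cartridge or Braille Cursor firmware.

```python
# Minimal character-to-Braille-cell mapping and Unicode rendering.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

def to_unicode_cell(dots):
    """Render a set of raised-dot numbers (1-8) as a Unicode Braille Pattern."""
    code = 0x2800
    for d in dots:
        code |= 1 << (d - 1)
    return chr(code)

word = "badge"
print(" ".join(to_unicode_cell(BRAILLE_DOTS[ch]) for ch in word))
```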
26 A Quantitative Case Study Analysis of Store Format Contributors to U.S. County Obesity Prevalence in Virginia
Authors: Bailey Houghtaling, Sarah Misyak
Abstract:
Food access; the availability, affordability, convenience, and desirability of food and beverage products within communities, is influential on consumers’ purchasing and consumption decisions. These variables may contribute to lower dietary quality scores and a higher obesity prevalence documented among rural and disadvantaged populations in the United States (U.S.). Current research assessing linkages between food access and obesity outcomes has primarily focused on distance to a traditional grocery/supermarket store as a measure of optimality. However, low-income consumers especially, including U.S. Department of Agriculture’s Supplemental Nutrition Assistance Program (SNAP) participants, seem to utilize non-traditional food store formats with greater frequency for household dietary needs. Non-traditional formats have been associated with less nutritious food and beverage options and consumer purchases that are high in saturated fats, added sugars, and sodium. Authors’ formative research indicated differences by U.S. region and rurality in the distribution of traditional and non-traditional SNAP-authorized food store formats. Therefore, using Virginia as a case study, the purpose of this research was to determine if a relationship between store format, rurality, and obesity exists. This research applied SNAP-authorized food store data (food access points for SNAP as well as non-SNAP consumers) and obesity prevalence data by Virginia county using publicly available databases: (1) SNAP Retailer Locator, and; (2) U.S. County Health Rankings. The alpha level was set a priori at 0.05. All Virginia SNAP-authorized stores (n=6,461) were coded by format – grocery, drug, mass merchandiser, club, convenience, dollar, supercenter, specialty, farmers market, independent grocer, and non-food store. Simple linear regression was applied primarily to assess the relationship between store format and obesity. Thereafter, multiple variables were added to the regression to account for potential moderating relationships (e.g., county income, rurality). Convenience, dollar, non-food or restaurant, mass merchandiser, farmers market, and independent grocer formats were significantly, positively related to obesity prevalence. Upon controlling for urban-rural status and income, results indicated the following formats to be significantly related to county obesity prevalence with a small, positive effect: convenience (p=0.010), accounting for 0.3% of the variance in obesity prevalence; dollar (p=0.005; 0.5% of the variance), and; non-food (p=0.030; 1.3% of the variance) formats. These results align with current literature on consumer behavior at non-traditional formats. For example, consumers’ food and beverage purchases at convenience and dollar stores are documented to be high in saturated fats, added sugars, and sodium. Further, non-food stores (i.e., quick-serve restaurants) often contribute to a large portion of U.S. consumers’ dietary intake and thus poor dietary quality scores. Current food access research investigates grocery/supermarket access and obesity outcomes. These results suggest more research is needed that focuses on non-traditional food store formats. Nutrition interventions within convenience, dollar, and non-food stores, for example, that aim to enhance not only healthy food access but the affordability, convenience, and desirability of nutritious food and beverage options may impact obesity rates in Virginia. 
More research is warranted utilizing the presented investigative framework in other U.S. and global regions to explore the role and the potential of non-traditional food store formats to prevent and reduce obesity.Keywords: food access, food store format, non-traditional food stores, obesity prevalence
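A minimal sketch of the two-step regression strategy described above, assuming a hypothetical merged county-level file with placeholder column names (obesity_pct, dollar_stores, median_income, rural); this is illustrative only, not the authors' dataset or code:

```python
# Sketch of the described analysis: simple then multiple linear regression
# of county obesity prevalence on a store-format count. Column names are
# hypothetical; SNAP Retailer Locator and County Health Rankings data would
# need to be merged by county beforehand.
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("virginia_counties.csv")  # hypothetical merged file

# Step 1: simple linear regression for one store format (e.g., dollar stores)
simple = smf.ols("obesity_pct ~ dollar_stores", data=counties).fit()
print(simple.summary())

# Step 2: add potential moderators (county income, urban-rural status)
adjusted = smf.ols("obesity_pct ~ dollar_stores + median_income + rural",
                   data=counties).fit()
print(adjusted.params, adjusted.pvalues)
```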
Procedia PDF Downloads 14225 The Prospects of Optimized KOH/Cellulose 'Papers' as Hierarchically Porous Electrode Materials for Supercapacitor Devices
Authors: Dina Ibrahim Abouelamaiem, Ana Jorge Sobrido, Magdalena Titirici, Paul R. Shearing, Daniel J. L. Brett
Abstract:
Global warming and scarcity of fossil fuels have had a radical impact on the world economy and ecosystem. The urgent need for alternative energy sources has hence elicited extensive research into efficient and sustainable means of energy conversion and storage. Among various electrochemical systems, supercapacitors have attracted significant attention in the last decade due to their high power supply, long cycle life compared to batteries, and simple mechanism. Recently, the performance of these devices has drastically improved, as tuning of nanomaterials provided efficient charge and storage mechanisms. Carbon materials, in various forms, are believed to pioneer the next generation of supercapacitors due to their attractive properties that include high electronic conductivities, high surface areas and easy processing and functionalization. Cellulose has eco-friendly attributes and is a feasible replacement for man-made fibers. The carbonization of cellulose yields carbons, including activated carbon and graphite fibers. Activated carbons are consequently the most exploited candidates for supercapacitor electrode materials and can be complemented with pseudocapacitive materials to achieve high energy and power densities. In this work, the optimum functionalization conditions of cellulose have been investigated for supercapacitor electrode materials. The precursor was treated with potassium hydroxide (KOH) at different KOH/cellulose ratios prior to the carbonization process in an inert nitrogen atmosphere at 850 °C. The chalky products were washed, dried and characterized with different techniques including transmission electron microscopy (TEM), X-ray tomography and nitrogen adsorption-desorption isotherms. The morphological characteristics and their effect on the electrochemical performances were investigated in two- and three-electrode systems. The KOH/cellulose ratios of 0.5:1 and 1:1 exhibited the highest performances with their unique hierarchical porous network structure, high surface areas and low cell resistances. Both samples achieved the best results in three-electrode systems and coin cells with specific gravimetric capacitances as high as 187 F g-1 and 20 F g-1 at a current density of 1 A g-1 and retention rates of 72% and 70%, respectively. This is attributed to the morphology of the samples, which consisted of a well-balanced micro-, meso- and macro-porosity network structure. This study reveals that the electrochemical performance does not depend solely on high surface areas but also on an optimum pore size distribution, specifically at low current densities. The micro- and meso-pore contribution to the final pore structure was found to dominate at low KOH loadings, reaching ‘equilibrium’ with macropores at the optimum KOH loading, after which macropores dictate the porous network. The wide range of pore sizes is detrimental for the mobility and penetration of electrolyte ions in the porous structures. These findings highlight the influence of various morphological factors on the double-layer capacitances and high performance rates. In addition, they open a platform for the investigation of the optimized conditions for double-layer capacitance that can be coupled with pseudocapacitive materials to yield higher energy densities and capacities.Keywords: carbon, electrochemical performance, electrodes, KOH/cellulose optimized ratio, morphology, supercapacitor
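For reference, gravimetric capacitance and retention figures such as those quoted above are conventionally derived from galvanostatic charge-discharge data with the standard formula C = I·Δt/(m·ΔV). A minimal sketch follows (generic formula with illustrative numbers, not the authors' processing script):

```python
# Gravimetric capacitance from a galvanostatic discharge step:
# C_sp = I * dt / (m * dV), with I the discharge current (A), dt the
# discharge time (s), m the active-material mass (g) and dV the usable
# voltage window (V). Retention compares capacitance after cycling.
def gravimetric_capacitance(current_a, discharge_time_s, mass_g, delta_v):
    return current_a * discharge_time_s / (mass_g * delta_v)

def retention_pct(c_after, c_initial):
    return 100.0 * c_after / c_initial

# Illustrative example: 1 A/g on a 5 mg electrode, 0.94 V window, ~176 s discharge
c_sp = gravimetric_capacitance(0.005, 176, 0.005, 0.94)   # roughly 187 F/g
print(round(c_sp), "F/g; retention:", retention_pct(0.72 * c_sp, c_sp), "%")
```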
Procedia PDF Downloads 21924 Using Health Literacy and Medico-Legal Guidance to Improve Restorative Dentistry Patient Information Leaflets
Authors: Hasneet K. Kalsi, Julie K. Kilgariff
Abstract:
Introduction: Within dentistry, the process for gaining informed consent has become more complex. To consent for treatment, patients must understand all reasonable treatment options and associated risks and benefits. Consenting is therefore deeply embedded in health literacy. Patients attending for dental consultation are often presented with an array of information and choices, yet studies show patients recall less than half of the information provided immediately afterwards. Appropriate and comprehensible patient information leaflets (PILs) may be useful aide-memoires. In 2016 the World Health Organisation set improving health literacy as a global priority. Soon after, Scotland’s 2017-2025 Making it Easier: A Health Literacy Action Plan followed. This project involved the review of Restorative PILs used within Dundee Dental Hospital to assess their content and readability. Method: The current PIL on Root Canal Treatment (RCT) was created in 2011. This predates the Montgomery vs. NHS Lanarkshire case, a ruling which significantly impacted dental consenting processes, as well as the General Dental Council’s (GDC’s) Standards for the Dental Team and the Faculty of General Dental Practice’s Good Practice Guidance on Clinical Examination and Record-Keeping. Current evidence-based guidance, including that stipulated by the GDC, was reviewed. A 20-point Essential Content Checklist was designed to conform to best practice guidance for valid consenting processes. The RCT leaflet was scored against this to ascertain if the content was satisfactory. After assessing the content against medicolegal requirements, health literacy considerations were reviewed regarding readability. This was assessed using McLaughlin’s Simple Measure of Gobbledygook (SMOG) formula, which identifies the school stage that would have to be achieved to comprehend the PIL. The sensitivity of the results to alternative readability methods was assessed. Results: The PIL was not sufficient for modern consenting processes and reflected a suboptimal level of health literacy. Evaluation of the leaflet revealed key content was missing, including information pertaining to risks and benefits. Only five points out of the 20-point checklist were present. The readability score was 16, equivalent to level 2 in the National Adult Literacy Standards/Scottish Credit and Qualifications Framework level 5; 62% of Scottish adults are able to read to this standard. Discussion: Assessment of the leaflet showed it was no longer fit for purpose. Reasons include a lack of pertinent information, a text-heavy leaflet lacking flow, and content errors. The SMOG score indicates a high level of comprehension is required to understand this PIL, which many patients may not possess. A new PIL, compliant with medicolegal and health literacy guidance, was designed with patient-driven checklists, note spaces for annotations/questions, and areas for clinicians to highlight important case-specific information. It has been tested using the SMOG formula. Conclusion: PILs can be extremely useful. Studies show that interactive use can enhance their effectiveness. PILs should reflect best practice guidance and be understood by patients. The 2020 leaflet designed and implemented aims to fulfill the needs of a modern healthcare system and its service users. It embraces and embeds Scotland’s Health Literacy Action Plan within the consenting process. A review of further leaflets using this model is ongoing.Keywords: consent, health literacy, patient information leaflet, restorative dentistry
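McLaughlin's published SMOG formula, grade = 1.0430 × sqrt(polysyllables × 30 / sentences) + 3.1291, can be sketched as follows. The syllable counter below is a naive vowel-group heuristic and the input file name is hypothetical; a real readability audit would use a validated counter or the original 30-sentence sampling procedure:

```python
import re
from math import sqrt

def count_syllables(word):
    # Naive vowel-group heuristic; adequate only for a rough sketch.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    # McLaughlin (1969): grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    return 1.0430 * sqrt(polysyllables * 30 / len(sentences)) + 3.1291

leaflet_text = open("rct_leaflet.txt").read()  # hypothetical extracted PIL text
print(round(smog_grade(leaflet_text), 1))
```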
Procedia PDF Downloads 14323 Fabrication of Zeolite Modified Cu Doped ZnO Films and Their Response towards Nitrogen Monoxide
Authors: Irmak Karaduman, Tugba Corlu, Sezin Galioglu, Burcu Akata, M. Ali Yildirim, Aytunç Ateş, Selim Acar
Abstract:
Breath analysis represents a promising non-invasive, fast and cost-effective alternative to well-established diagnostic and monitoring techniques such as blood analysis, endoscopy, ultrasonic and tomographic monitoring. Portable, non-invasive, and low-cost breath analysis devices are becoming increasingly desirable for monitoring different diseases, especially asthma. Because of this, NO gas sensing at low concentrations has attracted increasing attention for clinical analysis in asthma. Recently, nanomaterial-based sensors have been considered a promising clinical and laboratory diagnostic tool because of their large surface-to-volume ratio, controllable structure, and easily tailored chemical and physical properties, which bring high sensitivity, fast dynamic processes and even increased specificity. Among various nanomaterials, semiconducting metal oxides are extensively studied gas-sensing materials and are potential sensing elements for breath analyzers due to their high sensitivity, simple design, low cost and good stability. The sensitivities of metal oxide semiconductor gas sensors can be enhanced by adding noble metals. Doping contents, distribution, and size of metallic or metal oxide catalysts are key parameters for enhancing gas selectivity as well as sensitivity. By manufacturing doped MOS structures, it is possible to develop more efficient sensing layers. Zeolites are perhaps the most widely employed group of silicon-based nanoporous solids. Their well-defined pores of sub-nanometric size have earned them the name of molecular sieves, meaning that operation in the size-exclusion regime is possible by selecting, among the over 170 structures available, the zeolite whose pores allow the passage of the desired molecule while keeping larger molecules outside. In fact, it is selective adsorption, rather than molecular sieving, that explains most of the successful gas separations achieved with zeolite membranes. In view of their molecular sieving and selective adsorption properties, it is not surprising that zeolites have found use in a number of works dealing with gas sensing devices. In this study, a Cu doped ZnO nanostructure film was produced by the SILAR method and its NO gas sensing properties were investigated. To determine the selectivity of the sample, gases including CO, NH3, H2 and CH4 were tested for comparison with NO. The maximum response is obtained at 85 °C for 20 ppb NO gas. The sensor shows a high response to NO gas; although acceptable responses are also calculated for CO and NH3 gases, no responses are obtained for H2 and CH4 gases. To enhance selectivity, the Cu doped ZnO nanostructure film was coated with a zeolite A thin film. It is found that this sample possesses an acceptable response towards NO while hardly responding to CO, NH3, H2 and CH4 at room temperature. This difference in the response can be expressed in terms of differences in the molecular structure, the dipole moment, strength of the electrostatic interaction and the dielectric constant. The as-synthesized thin film is considered an extremely promising candidate material for electronic nose applications. This work is supported by The Scientific and Technological Research Council of Turkey (TUBİTAK) under Project No. 115M658 and the Gazi University Scientific Research Fund under Project No. 05/2016-21.Keywords: Cu doped ZnO, electrical characterization, gas sensing, zeolite
Procedia PDF Downloads 28522 Auditory Rehabilitation via an VR Serious Game for Children with Cochlear Implants: Bio-Behavioral Outcomes
Authors: Areti Okalidou, Paul D. Hatzigiannakoglou, Aikaterini Vatou, George Kyriafinis
Abstract:
Young children are nowadays adept at using technology. Hence, computer-based auditory training programs (CBATPs) have become increasingly popular in aural rehabilitation for children with hearing loss and/or with cochlear implants (CI). Yet, their clinical utility for prognostic, diagnostic, and monitoring purposes has not been explored. The purposes of the study were: a) to develop an updated version of the auditory rehabilitation tool for Greek-speaking children with cochlear implants, b) to develop a database for behavioral responses, and c) to compare accuracy rates and reaction times in children differing in hearing status and other medical and demographic characteristics, in order to assess the tool’s clinical utility in prognosis, diagnosis, and progress monitoring. The updated version of the auditory rehabilitation tool was developed on a tablet, retaining the User-Centered Design approach and the elements of the Virtual Reality (VR) serious game. The visual stimuli were farm animals acting in simple game scenarios designed to trigger children’s responses to animal sounds, names, and relevant sentences. Based on an extended version of Erber’s auditory development model, the VR game consisted of six stages, i.e., sound detection, sound discrimination, word discrimination, identification, comprehension of words in a carrier phrase, and comprehension of sentences. A familiarization stage (learning) was set prior to the game. Children’s tactile responses were recorded as correct, false, or impulsive, based on a child-dependent delay window after stimulus offset for valid responses. Reaction times were also recorded, and the database was in Excel format. The tablet version of the auditory rehabilitation tool was piloted in 22 preschool children with Normal Hearing (NH), which led to improvements. The study took place in clinical settings or at children’s homes. Fifteen children with CI, aged 5;7-12;3 years and with post-implantation times of 0;11-5;1 years, used the auditory rehabilitation tool. Eight children with CI were monolingual, two were bilingual and five had additional disabilities. The control groups consisted of 13 children with NH, aged 2;6-9;11 years. A comparison of both accuracy rates (percent correct) and reaction times (in seconds) was made at each stage across hearing status and age, and also, within the CI group, based on the presence of additional disability and bilingualism. Both monolingual Greek-speaking children with CI with no additional disabilities and hearing peers showed high accuracy rates at all stages, with performances falling above the 3rd quartile. However, children with normal hearing scored higher than the children with CI, especially in the detection and word discrimination tasks. The reaction time differences between the two groups decreased in language-based tasks. Results for children with CI with additional disability or bilingualism varied. Finally, older children scored higher than younger ones in both groups (CI, NH), but larger differences occurred in children with CI. The interactions between familiarization with the software, age, hearing status and demographic characteristics are discussed. Overall, the VR game is a promising tool for tracking the development of auditory skills, as it provides multi-level longitudinal empirical data.
Acknowledgment: This work is part of a project that has received funding from the Research Committee of the University of Macedonia under the Basic Research 2020-21 funding programme.Keywords: VR serious games, auditory rehabilitation, auditory training, children with cochlear implants
Procedia PDF Downloads 8921 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach
Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca
Abstract:
The production of major crops in Northern Ethiopia, especially the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a strategy for poverty reduction in drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector is experiencing many constraints, with limited attention given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies to reduce them, thereby improving production and smallholders’ income. Both probability and non-probability sampling techniques were employed to collect the data. Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were selected purposively from the district for their potential in cactus pear production. Simple random sampling techniques were employed to survey 30 households from each of the two peasant associations, and a semi-structured questionnaire was used as a tool for data collection. Moreover, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce these losses, and suggestions to improve post-harvest management. To enter and analyze the quantitative data, SPSS version 20 was used, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs. Data analysis also used a chain map, correlations, a stakeholder matrix, and gross margin analysis. Mean comparisons between variables, such as ANOVA and t-tests, were used. The analysis result shows that the present cactus pear value chain involves main actors and supporters. However, there is inadequate information flow and there are informal market linkages among actors in the cactus pear value chain. Farmers’ gross margin is higher when they sell to the processor than to collectors. The largest post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4,212 and 240 kg per season, respectively. The post-harvest losses were caused by farmers’ limited skill in on-farm management and harvesting, low market prices, limited market information, absence of producer organizations, poor post-harvest handling, absence of cold storage, absence of collection centers, poor infrastructure, inadequate credit access, use of traditional transportation systems, absence of quality control, illegal traders, inadequate research and extension services, and use of inappropriate packaging material. Therefore, some of the recommendations were providing adequate practical training, forming producer organizations, and constructing collection centers.Keywords: cactus pear, post-harvest losses, profit margin, value-chain
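The gross margin comparison and mean-comparison tests described above can be sketched as follows, assuming a hypothetical household survey export with placeholder column names (revenue, variable_costs, buyer_type, association, loss_kg); this is illustrative only, not the authors' analysis script:

```python
# Sketch of two analysis steps described above: per-channel gross margin and
# a t-test comparing seasonal post-harvest losses between the two peasant
# associations. Column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

hh = pd.read_csv("household_survey.csv")  # hypothetical survey export

# Gross margin per marketing channel: revenue minus variable costs
hh["gross_margin"] = hh["revenue"] - hh["variable_costs"]
print(hh.groupby("buyer_type")["gross_margin"].mean())  # processor vs collector

# Mean comparison of seasonal losses (kg) between the two associations
buket = hh.loc[hh["association"] == "Buket", "loss_kg"]
golea = hh.loc[hh["association"] == "Golea", "loss_kg"]
print(stats.ttest_ind(buket, golea, equal_var=False))
```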
Procedia PDF Downloads 13020 DH-Students Promoting Underage Asylum Seekers' Oral Health in Finland
Authors: Eeva Wallenius-Nareneva, Tuula Toivanen-Labiad
Abstract:
Background: An oral health promotion event was organised for forty Afghan, Iraqi and Bangladeshi underage asylum seekers in Finland. The invitation to arrange this coaching occasion was accepted by the Degree Programme in Oral Hygiene at Metropolia. The personnel in the reception center had identified a need to improve oral health among the youngsters. The purpose was to strengthen the health literacy of the boys in their oral self-care and to reduce dental fears. Finnish studies, especially oral health terminology, were integrated into the coaching with the help of interpreters. Cooperative learning was applied. Methods: Oral health was interactively discussed in four study group sessions: 1. the importance of healthy eating habits (good and bad diets, regular meals, acid attack, xylitol); 2. oral diseases and their connection to general health (aetiology of gingivitis, periodontitis and caries; harmfulness of smoking); 3. tools and techniques for oral self-care (brushing and interdental cleaning); and 4. sharing earlier dental care experiences (cultural differences, dental fear, regular check-ups). Results: During the coaching, deficiencies appeared in brushing and interdental cleaning techniques. Some boys were used to washing their mouths with salt, justifying it by salt’s antiseptic properties. Many brushed their teeth with vertical movements. The boys took feedback positively when a demonstration with model jaws revealed the inefficiency of the technique. The advantages of fluoride toothpaste were explained. Dental care procedures were new and frightening for many boys. The Finnish dental care system was clarified. The safety and painlessness of the treatments and informed consent were highlighted. Video presentations and dialogue substantially lowered the threshold for visiting a dental clinic. The occasion gave the students the means to meet patients from different cultural and language backgrounds. The information hidden behind the oral health problems of the asylum seekers was valuable. Conclusions: Learning dental care practices used in different cultures is essential for dental professionals. The project was a good start towards multicultural oral health care. More experience is needed before graduation. Health education themes should be kept simple regardless of the target group. The heterogeneity of the group does not pose a problem. Open discussion with questions leading to the theme works well in clarifying the target group’s knowledge level. Sharing one’s own experiences strengthens the sense of equality among the participants and encourages them to express their own opinions. The motivational interviewing method turned out to be successful. In the future, coaching occasions must ensure the active participation of everyone. This could be realized by dividing the participants into even smaller groups. The different languages pose challenges, but these can be addressed by using more interpreters. Their presence ensures that everyone understands the issues properly, although the use of plain language and sign language is also helpful. In further development, it would be crucial to arrange a rehearsal occasion for the same participants in two to three months’ time. This would strengthen the adoption of self-care practices and give the youngsters the opportunity to pose more open questions. The students would gain valuable feedback regarding the effectiveness of their work.Keywords: cooperative learning, interactive methods, motivational interviewing, oral health promotion, underage asylum seekers
Procedia PDF Downloads 29019 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators, in chemical engineering and catalytic reactors, where large surfaces of packed granular beds intensify chemical reactions, or in energy production systems, where granulates are promising materials for heat storage and heat transfer media. Despite the common usage of granulates and extensive research performed in this field, phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and solid material inside the granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, or the effective characteristic dimension (total volume or surface area). We will analyze to what extent alteration of these parameters influences flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling of holes inside the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. Application of discrete element methods in combination with the classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also for flows in wavy channels, wavy pipes and over variously shaped obstacles. In these cases a formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic regular manner and validation of the results obtained using the LES-IB method and a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
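The volume-penalization idea can be illustrated with a minimal toy sketch (not the SAILOR solver): solid grains are marked by a mask chi on a Cartesian grid, and a stiff Brinkman-type source term -(chi/eta)(u - u_solid) is added to the momentum (and analogously the energy) equation so that the velocity is driven towards the solid value inside the grains:

```python
# Minimal 2D illustration of immersed-boundary volume penalization on a
# Cartesian grid. A mask chi marks solid grains; the source term
# -(chi/eta) * (u - u_solid) forces the velocity to zero inside them.
# Toy explicit update only; convective/viscous terms are omitted.
import numpy as np

nx = ny = 128
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

# Mask for a regular array of circular grains (chi = 1 inside the solid)
chi = np.zeros((ny, nx))
for cx in (0.25, 0.5, 0.75):
    for cy in (0.25, 0.5, 0.75):
        chi[(x - cx) ** 2 + (y - cy) ** 2 < 0.08 ** 2] = 1.0

u = np.ones((ny, nx))        # streamwise velocity, initially uniform
eta, dt = 1e-3, 1e-4         # penalization parameter and time step
for _ in range(200):
    # ... convective and viscous terms of the Navier-Stokes equations here ...
    u += dt * (-(chi / eta) * (u - 0.0))   # penalization towards u_solid = 0

print("mean velocity inside grains:", u[chi == 1].mean())
```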
Procedia PDF Downloads 13718 Ensemble Sampler For Infinite-Dimensional Inverse Problems
Authors: Jeremie Coullon, Robert J. Webber
Abstract:
We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loeve (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler that is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters.
We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction
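A hedged sketch of the alternating scheme described above, using a toy finite-dimensional stand-in: Goodman-Weare stretch moves on the low-wavenumber KL coefficients and a pCN (functional random walk) update on the high-wavenumber remainder, with a diagonal KL prior and a placeholder likelihood. This is not the authors' code; the prior eigenvalues and log-likelihood are assumptions for illustration:

```python
# Sketch of the functional ensemble sampler (FES) idea: affine-invariant
# stretch moves on the low-wavenumber KL coefficients, pCN on the rest.
import numpy as np

rng = np.random.default_rng(0)
D, K, L = 200, 10, 8                          # KL modes, low modes, walkers
lam = 1.0 / np.arange(1, D + 1) ** 2          # toy prior eigenvalues (trace-class)
log_like = lambda v: -0.5 * np.sum((v[:5] - 1.0) ** 2)   # placeholder likelihood

def log_post_low(vlow, vhigh):
    # likelihood plus Gaussian prior on the low modes (high modes held fixed)
    return log_like(np.concatenate([vlow, vhigh])) - 0.5 * np.sum(vlow ** 2 / lam[:K])

walkers = rng.standard_normal((L, D)) * np.sqrt(lam)     # draw from the prior
beta = 0.2                                               # pCN step size
for _ in range(1000):
    for j in range(L):
        # AIES stretch move (a = 2) on the low-wavenumber block
        k = rng.choice([i for i in range(L) if i != j])
        z = (1.0 + rng.random()) ** 2 / 2.0
        prop = walkers[k, :K] + z * (walkers[j, :K] - walkers[k, :K])
        logr = (K - 1) * np.log(z) + log_post_low(prop, walkers[j, K:]) \
               - log_post_low(walkers[j, :K], walkers[j, K:])
        if np.log(rng.random()) < logr:
            walkers[j, :K] = prop
        # pCN (functional random walk) on the high-wavenumber block:
        # prior-preserving proposal, so the acceptance ratio uses only the likelihood
        xi = rng.standard_normal(D - K) * np.sqrt(lam[K:])
        prop_hi = np.sqrt(1 - beta ** 2) * walkers[j, K:] + beta * xi
        v_new = np.concatenate([walkers[j, :K], prop_hi])
        if np.log(rng.random()) < log_like(v_new) - log_like(walkers[j]):
            walkers[j, K:] = prop_hi
```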
Procedia PDF Downloads 15417 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building form Found Textile Structures
Authors: James Forren
Abstract:
This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building computer simulations, which often requires complicated digital fabrication workflows. However, AR HWDs have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices. These projects utilize, instead, the tacit-knowledge motor schemas of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient complex spatial reasoning motor schemas. Textiles, in turn, offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving – computational, human, and material – may not only develop efficient structural form but also offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three modules per leg. Each module was modeled as a woven matrix of one-inch diameter cords. Each woven matrix was then transmitted to a holographic engine running on the HWDs. Craftspersons wearing the HWDs then wove wet cementitious cords within a simple falsework frame to match the minimal bending form displayed in front of them. Once the woven components cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal bending form, as it was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved by the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research – a workshop testing human interaction with a physics-engine simulation of string networks, and research on the use of HWDs to capture hand gestures in weaving – seeks to develop further interactivity with rope and cord towards a bi-directional workflow within full-scale building environments.Keywords: augmented reality, cementitious composites, computational form finding, textile structures
Procedia PDF Downloads 17516 [Keynote Talk]: Bioactive Cyclic Dipeptides of Microbial Origin in Discovery of Cytokine Inhibitors
Authors: Sajeli A. Begum, Ameer Basha, Kirti Hira, Rukaiyya Khan
Abstract:
Cyclic dipeptides are simple diketopiperazine derivatives being investigated by several scientists for their biological effects, which include anticancer, antimicrobial, haematological, anticonvulsant, and immunomodulatory effects. They are potentially active microbial metabolites that have also been synthesized for development into drug candidates. Cultures of Pseudomonas species have earlier been reported to produce cyclic dipeptides that contribute to quorum sensing signalling and bacterial–host colonization phenomena during infections, causing cell anti-proliferation and immunosuppression. Fluorescing Pseudomonas species have been identified to secrete lipid derivatives, peptides, pyrroles, phenazines, indoles, amino acids, pterins, pseudomonic acids and some antibiotics. In the present work, the results of an investigation of the cyclic dipeptide metabolites secreted into the culture broth of a Pseudomonas species as potent pro-inflammatory cytokine inhibitors are discussed. The bacterial strain was isolated from the rhizospheric soil of a groundnut crop and identified as Pseudomonas aeruginosa by 16S rDNA sequence (GenBank Accession No. KT625586). Culture broth of this strain was prepared by inoculating into King’s B broth and incubating at 30 ºC for 7 days. The ethyl acetate extract of the culture broth was prepared and lyophilized to get a dry residue (EEPA). A lipopolysaccharide (LPS)-induced ELISA assay demonstrated inhibition by EEPA of tumor necrosis factor-alpha (TNF-α) secretion in the culture supernatant of RAW 264.7 cells (IC50 38.8 μg/mL). The effect of oral administration of EEPA on plasma TNF-α level in rats was tested by an ELISA kit. The LPS-mediated plasma TNF-α level was reduced to 45% with a 125 mg/kg dose of EEPA. Isolation of the chemical constituents of EEPA through column chromatography yielded ten cyclic dipeptides, which were characterized using nuclear magnetic resonance and mass spectroscopic techniques. These cyclic dipeptides are biosynthesized in microorganisms by the multifunctional assembly of non-ribosomal peptide synthetases and cyclic dipeptide synthases. Cyclo(Gly-L-Pro) was found to inhibit TNF-α production most potently (IC50 4.5 μg/mL), followed by cyclo(trans-4-hydroxy-L-Pro-L-Phe) (IC50 14.2 μg/mL), and the effect was equal to that of the standard immunosuppressant drug prednisolone. Further, the effect was analyzed by determining mRNA expression of TNF-α in LPS-stimulated RAW 264.7 macrophages using quantitative real-time reverse transcription polymerase chain reaction. EEPA and the isolated cyclic dipeptides demonstrated diminution of TNF-α mRNA expression levels in a dose-dependent manner under the tested conditions. Also, they were found to control the expression of other pro-inflammatory cytokines like IL-1β and IL-6, when tested through their mRNA expression levels in LPS-stimulated RAW 264.7 macrophages. In addition, a significant inhibitory effect was found on nitric oxide production. Further, all the compounds exhibited weak toxicity to LPS-induced RAW 264.7 cells. Thus the outcome of the study disclosed the effectiveness of EEPA and the isolated cyclic dipeptides in down-regulating key cytokines involved in the pathophysiology of autoimmune diseases. In another study led by the investigators, microbial cyclic dipeptides were found to exhibit an excellent antimicrobial effect against Fusarium moniliforme, an important causative agent of sorghum grain mold disease.
Thus, cyclic dipeptides are emerging small-molecule drug candidates for various autoimmune diseases.Keywords: cyclic dipeptides, cytokines, Fusarium moniliforme, Pseudomonas, TNF-alpha
Procedia PDF Downloads 21115 Prospects of Acellular Organ Scaffolds for Drug Discovery
Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen
Abstract:
Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system, including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances in the field demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging results for the proof of concept. Decellularized organs preserve the organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models results in greater proximity of the cell culture’s morphological characteristics to its in vivo counterpart and allows more accurate simulation of the processes within a functioning tumor and its pathogenesis. 3D models allow study of migration processes and cell proliferation with higher reliability as well. Moreover, cancer cells in a 3D model bear closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable to accurately predict the responses of living organisms. 3D models can provide several levels of complexity, from simple monocultures of cancer cell lines in a liquid environment comprising oxygen and nutrient gradients and cell-cell interactions to more advanced models, which include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be utilized to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step to create a new ex vivo tissue engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and evaluate their potential as tissue engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animals’ kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system and immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) in different concentrations, and freezing. Acellular scaffolds and tissue engineered constructs have been characterized and compared using morphological methods.
Models using decellularized matrix have certain advantages, such as maintaining native extracellular matrix properties and a biomimetic microenvironment for cancer cells; compatibility with multiple cell types for cell culture and drug screening; and the ability to culture patient-derived cells in vitro to evaluate different anticancer therapeutics for developing personalized medicines.Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering
Procedia PDF Downloads 30014 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid
Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang
Abstract:
Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat’s conditions explain the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanisms underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by factors such as its physiological state, climatic variables, and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Understanding organism dispersal can be addressed using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are an example of these approaches and have been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a grid of cells, each of which may be occupied by an individual, that evolves according to a set of rules defined over each cell's neighbours. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms. The term "visual analytics" (VA) describes a semiautomated approach to electronic data processing that is guided by users who can interact with data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized based on the operator's needs. Additionally, this approach can be used to enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of the dispersal of organisms at the landscape scale, we therefore designed Pydisp, a free visual data analytics tool for spatiotemporal dispersal modeling built in Python. Its user interface allows users to perform a quick and interactive spatiotemporal analysis of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through the use of simple principles such as fuzzy cellular automata algorithms. The potential of dispersal modeling is demonstrated in a case study by predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed using mechanistic modelling approaches.
Furthermore, as demonstrated in the example, the built software is highly effective in portraying the dispersal of organisms despite the unavailability of detailed data on the species dispersal mechanisms.Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal
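A minimal sketch of a fuzzy cellular-automaton dispersal step of the kind described above: each cell holds a continuous occupancy value in [0, 1], a fraction of occupancy spreads to the 8-neighbourhood each step, and the spread is combined with a habitat-suitability layer via fuzzy AND (min) / OR (max). The suitability layer, release point, and rates are illustrative assumptions, not the Pydisp implementation:

```python
# Toy fuzzy cellular-automaton dispersal step on a continuous occupancy grid.
import numpy as np

rng = np.random.default_rng(1)
n = 100
suitability = rng.random((n, n))          # e.g. derived from climatic layers
occupancy = np.zeros((n, n))
occupancy[n // 2, n // 2] = 1.0           # hypothetical parasitoid release point

def step(occ, suit, spread=0.3, decay=0.05):
    # mean occupancy of the 8 neighbours, with edge-replicated boundaries
    padded = np.pad(occ, 1, mode="edge")
    neigh = sum(np.roll(np.roll(padded, dy, 0), dx, 1)[1:-1, 1:-1]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)) / 8.0
    arrival = np.minimum(spread * neigh, suit)      # fuzzy AND with habitat
    return np.maximum(occ * (1 - decay), arrival)   # fuzzy OR with survival

for t in range(52):                                  # e.g. weekly steps over a year
    occupancy = step(occupancy, suitability)
print("occupied cells (>0.1):", int((occupancy > 0.1).sum()))
```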
Procedia PDF Downloads 7713 Exploiting Charges on Medicinal Synthetic Aluminum Magnesium Silicate's {Al₄ (SiO₄)₃ + 3Mg₂SiO₄ → 2Al₂Mg₃ (SiO₄)₃} Nanoparticles in Treating Viral Diseases, Tumors, Antimicrobial Resistant Infections
Authors: M. C. O. Ezeibe, F. I. O. Ezeibe
Abstract:
Viral diseases (including AI, HIV/AIDS, and COVID-19), tumors (including cancers and prostate enlargement), and antimicrobial-resistant infections (AMR) are difficult to cure because features of the pathogens which normal cells do not have or need (biomedical markers) have not been identified, medicines that can counter those markers have not been invented, and strategies and mechanisms for their treatment have not been developed. When cells become abnormal, they acquire negative electrical charges, and viruses are either positively charged or negatively charged, while normal cells remain neutral (without electrical charges). So, electrostatic attraction between opposite charges is a treatment mechanism for viral diseases and tumors. Medicines that have positive electrical charges would mop up abnormal (infected and tumor) cells and DNA viruses (negatively charged), while negatively charged medicines would mop up RNA viruses (positively charged). Molecules of Aluminum-magnesium silicate [AMS: Al₂Mg₃ (SiO₄)₃], an approved medicine and pharmaceutical stabilizing agent, consist of nanoparticles which have both positively charged ends and negatively charged ends. The very small size (0.96 nm) of the nanoparticles allows them to reach all cells in every organ. By stabilizing antimicrobials, AMS reduces the rate at which the body metabolizes them so that they remain at high concentrations for extended periods. When drugs remain at high concentrations for longer periods, their efficacies improve. Again, nanoparticles enhance the delivery of medicines to effect targets. Both remaining at high concentrations for longer periods and better delivery to effect targets improve efficacy and allow lower doses to achieve the desired effects, so that side effects of medicines are reduced and the immunity of patients can be enhanced. Silicates also enhance the immune responses of treated patients. Improving antimicrobial efficacies and enhancing patients' immunity terminate infections so that none remains that could develop resistance. Some countries do not have natural deposits of AMS, but they may have Aluminum silicate (AS: Al₄ (SiO₄)₃) and Magnesium silicate (MS: Mg₂SiO₄), which are also approved medicines. So, AS and MS were used to formulate an AMS brand, named Medicinal synthetic AMS {Al₄ (SiO₄)₃ + 3Mg₂SiO₄ → 2Al₂Mg₃ (SiO₄)₃}. To overcome the challenge of AMS, AS, and MS being un-absorbable, dextrose monohydrate is incorporated in MSAMS formulations so that the simple sugar conveys the electrically charged nanoparticles into blood circulation by the principle of active transport, allowing MSAMS-antimicrobial formulations to function systemically. In vitro, MSAMS reduced (P≤0.05) titers of viruses, including Avian influenza virus and HIV. When used to treat virus-infected animals, it cured Newcastle disease and Infectious bursal disease of chickens, Parvovirus disease of dogs, and Peste des petits ruminants disease of sheep and goats. A number of HIV/AIDS patients treated with it have been reported to become HIV-negative (antibody and antigen). COVID-19 patients are also reported to recover and test virus-negative when treated with MSAMS. PSA titers of prostate cancer/enlargement patients normalize (≤4) following treatment with MSAMS. MSAMS has also potentiated ampicillin trihydrate, sulfadimidine, cotrimoxazole, piperazine citrate and chloroquine phosphate to achieve ≥ 95 % infection-load reductions (AMR prevention).
At 75 % of the doses of ampicillin, cotrimoxazole, and streptomycin, supporting MSAMS-formulation treatments with antioxidants led to the termination of even already-resistant infections.Keywords: electrical charges, viruses, abnormal cells, aluminum-magnesium silicate
Procedia PDF Downloads 6312 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis.
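For readers unfamiliar with the update rule that the GPU kernels implement, a reference NumPy sketch of one Lenia step is given below: the state A is convolved with a ring-shaped kernel K (here via FFT on a torus), the result is mapped through a Gaussian growth function G, and the state is integrated as A <- clip(A + dt * G(K*A), 0, 1). The kernel shape and parameters are common illustrative choices, not the values or code used in the benchmarked implementations:

```python
# Reference NumPy sketch of one Lenia update (continuous states, space, time).
import numpy as np

N, R, dt = 256, 13, 0.1          # grid size, kernel radius, time step
mu, sigma = 0.15, 0.017          # growth-function centre and width

# Ring-shaped kernel: radial bell centred at half the kernel radius, normalised
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y) / R
K = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r < 1)
K /= K.sum()
fK = np.fft.fft2(np.fft.fftshift(K))      # precomputed kernel spectrum

def growth(u):
    # Gaussian bell mapped to [-1, 1]
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(A):
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * fK))   # K * A on a torus
    return np.clip(A + dt * growth(U), 0.0, 1.0)

A = np.random.default_rng(2).random((N, N)) * (np.hypot(x, y) < 40)  # random blob
for _ in range(200):
    A = step(A)
print("total mass after 200 steps:", A.sum())
```

The CUDA version discussed above replaces the FFT convolution and the per-cell growth/clip steps with device kernels and renders the state buffer directly via OpenGL interop, which is where the reported speed-up comes from.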
Procedia PDF Downloads 41