Search results for: school development project
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20713

103 A Comprehensive Approach to Create ‘Livable Streets’ in the Mixed Land Use of Urban Neighborhoods: A Case Study of Bangalore Street

Authors: K. C. Tanuja, Mamatha P. Raj

Abstract:

"People have always lived on streets. They have been the places where children first learned about the world, where neighbours met, the social centres of towns and cities, the rallying points for revolts, the scenes of repression. The street has always been the scene of this conflict, between living and access, between resident and traveller, between street life and the threat of death." (Donald Appleyard, Livable Streets). Urbanisation is happening rapidly all over the world. As population increases in urban settlements, it is necessary to provide quality of life to all the inhabitants who live in them. Urban design is a place-making strategic planning discipline. Urban design principles promote visualising any place as environmentally, socially and economically viable. Urban design strategies address building mass, transit development, economic viability and sustenance, and social aspects. Cities are wonderful inventions of diversity: people, things, activities, ideas and ideologies. Cities should be smarter and adaptable to present technology and intelligent systems. Streets represent the community in terms of social and physical aspects. Streets are an urban form that responds to many issues and are central to urban life. Streets serve livability, safety, mobility, places of interest, economic opportunity, ecological balance and mass transit. Urban streets are places where people walk, shop, meet and engage in the social and recreational activities that make an urban community enjoyable. Streets knit together the urban fabric of activities. Urban streets become livable with the introduction of a social network that enhances the pedestrian character through good design features, which in turn should minimise the impact of motor vehicle use on pedestrians. Livable streets give spatial definition to the public right of way on urban streets. Streets in India have traditionally been the public spaces where social life has happened for ages.
Streets constitute the urban public realm where people congregate, celebrate and interact. Streets are public places that can promote social interaction, active living and community identity, and they are potential contributors to a better living environment, knitting together the urban fabric of people and places that make up a community. Livable streets, or complete streets, make our streets social places, with roadways and sidewalks that are accessible, safe, efficient and usable for all people. The purpose of this paper is to understand the concept of the livable street and the parameters of livability on urban streets. Streets should be designed with pedestrians as the main users, creating spaces and furniture for social interaction that serve the needs of people of all ages and abilities. Problems of streets such as congestion due to street width, traffic movement, adjacent land use and type of movement need to be addressed by redesigning and improving conditions, defining clear movement paths for vehicles and pedestrians. Well-designed spatial qualities of the street enhance the street environment and livability, and thereby deliver quality of life to pedestrians. A methodology has been derived to arrive at typologies in street design after analysing the existing situation and comparing it with livable standards. It was Donald Appleyard's Livable Streets that laid out the social effects on streets, creating the social network needed to achieve livable streets.

Keywords: livable streets, social interaction, pedestrian use, urban design

Procedia PDF Downloads 119
102 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects

Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha

Abstract:

The need to secure safe access to and return from outer space, as well as to ensure the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management (STM) system. The proliferation of outer space activities in recent years, together with the dynamic emergence of the private sector, has gradually produced a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing number of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets, rapid decision-making, more precise space debris identification and tracking, and an overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e., the launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management (ATM) system. Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention.
Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at the EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in light of the intervention of AI technologies. This paper aims to examine, in toto, the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics and the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management. Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).

Keywords: artificial intelligence, space traffic management, space situational awareness, space debris

Procedia PDF Downloads 215
101 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor

Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang

Abstract:

Highly branched dendrimers provide structural homogeneity, controlled composition, sizes comparable to biomolecules, internal porosity and multiple functional groups for conjugation reactions. Electroactive dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. Electron transfer between the redox-active dendrimers and the biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple, electrochemically equivalent redox units that can act as an electron "pool" in a system. The ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons under the same applied potential. Therefore, it can serve a dual purpose: building a film over the electrode for the immunosensor and immobilizing biomolecules for sensing. The electrochemical immunosensor thus developed exhibits fast and sensitive analysis, is inexpensive and involves no prior sample pre-treatment. Electrochemical amperometric immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for fundamental life-science research, clinical diagnosis and disease prevention. An elevated concentration of biomarkers in body fluid is an early indication of some types of cancerous disease, and among all the biomarkers, IgG is the most common and extensively used clinical cancer biomarker. We present an IgG (immunoglobulin G) electrochemical immunosensor using a newly synthesized redox-active ferrocenyl dendrimer of generation 2 (G2Fc) as a glassy carbon electrode (GCE) material for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media using varying scan rates to elucidate the reaction mechanism.
The potential shift was found to be higher in an aqueous electrolyte due to the presence of more hydrogen bonds, which reduced the electrostatic attraction within the amido groups of the dendrimers. Cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution at pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly upon immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA, a further decrease in the peak current was observed due to the attachment of the BSA protein to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immunoconjugates that block the tunneling of mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface. With increasing IgG concentration, an increasing amount of immunoconjugates forms, which decreases the peak current. The incubation time and concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring of redox-active dendrimers provides enhanced electroactivity to the system and enlarges the sensor surface for binding the antibodies. It may be assumed that both electron transfer and diffusion contribute to the signal transformation between the dendrimers and the antibody.
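A detection limit like the 2 ng/mL figure above is conventionally derived from a linear calibration curve via the 3σ criterion. A minimal sketch of that calculation follows; the calibration and blank values are hypothetical, for illustration only, not the paper's data:

```python
import numpy as np

# Hypothetical amperometric calibration: change in peak current (µA) vs. IgG
# concentration (ng/mL); illustrative numbers only, not the paper's data.
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
signal = np.array([0.9, 1.7, 3.4, 6.9, 13.8])

# Sensitivity = slope of the linear calibration curve
slope, intercept = np.polyfit(conc, signal, 1)

# Limit of detection by the common 3-sigma criterion: LOD = 3 * s_blank / slope,
# where s_blank is the standard deviation of replicate blank measurements.
blank = np.array([0.02, 0.05, 0.03, 0.04, 0.06])
lod = 3 * blank.std(ddof=1) / slope
```

The same two numbers (blank noise and calibration slope) are all that is needed to compare sensitivity across electrode modifications.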

Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry

Procedia PDF Downloads 304
100 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica

Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat

Abstract:

Topical administration of chemotherapeutic agents (e.g., carmustine, bexarotene, mechlorethamine) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy or even secondary malignancies. A known method of reducing the side effects of an anticancer agent is the development of modified drug release systems based on drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration and the immobilization of proteins, enzymes, etc. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with adjustable size in the mesoscopic range. Silanol groups allow controlled surface functionalization, which in turn gives control over drug loading and release. This study presents (i) the optimization of amino-grafting of the mesoporous MCM-41 silica matrix by co-condensation during synthesis and by post-synthesis grafting using APTES (3-aminopropyltriethoxysilane); (ii) the loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) the determination of the in vitro carmustine release profile from these systems; and (iv) the assessment of carmustine release kinetics by fitting to four mathematical models. The obtained powders were characterized in terms of structure, texture and morphology, and by thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium was determined by an HPLC method. In vitro dissolution tests were performed using an Enhancer cell over a 12-hour interval.
Analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order, first-order, Higuchi and Korsmeyer-Peppas models. Results showed that both types of highly ordered mesoporous silica (amino-grafted by the co-condensation process or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in the case of post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and from those functionalized after synthesis. Assessment of carmustine release kinetics revealed a diffusion-driven release from all studied systems, as indicated by the fit to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing topical anti-cancer therapy by loading carmustine and developing prolonged-release systems.
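The model-fitting step described above reduces to linear least squares for the Higuchi and Korsmeyer-Peppas forms. A minimal sketch on hypothetical release data (not the study's measurements) follows:

```python
import numpy as np

def fit_higuchi(t, q):
    """Fit Q = kH * sqrt(t) through the origin; return kH and R^2."""
    x = np.sqrt(t)
    k = np.sum(x * q) / np.sum(x * x)          # least-squares slope, no intercept
    ss_res = np.sum((q - k * x) ** 2)
    ss_tot = np.sum((q - q.mean()) ** 2)
    return k, 1.0 - ss_res / ss_tot

def fit_korsmeyer_peppas(t, q):
    """Fit Q = k * t^n on log-log axes; n near 0.5 suggests Fickian diffusion."""
    n, ln_k = np.polyfit(np.log(t), np.log(q), 1)
    return np.exp(ln_k), n

# Hypothetical cumulative release (% m/m) over 12 h, diffusion-controlled by design
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
q = 25.0 * np.sqrt(t)

k_h, r2 = fit_higuchi(t, q)
k_kp, n = fit_korsmeyer_peppas(t, q)
```

A high R² for the Higuchi fit, or a Korsmeyer-Peppas exponent n close to 0.5, is the usual quantitative basis for the diffusion-controlled conclusion drawn in the abstract.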

Keywords: carmustine, silica, controlled release

Procedia PDF Downloads 228
99 Phytochemical Analysis and in vitro Biological Activities of an Ethyl Acetate Extract from the Peel of Punica granatum L. var. Dente di Cavallo

Authors: Silvia Di Giacomo, Marcello Locatelli, Simone Carradori, Francesco Cacciagrano, Chiara Toniolo, Gabriela Mazzanti, Luisa Mannina, Stefania Cesa, Antonella Di Sotto

Abstract:

Hyperglycemia represents the main pathogenic factor in the development of diabetes complications and has been found to be associated with mitochondrial dysfunction and oxidative stress, which in turn increase cell dysfunction. Therefore, counteracting oxidative species appears to be a suitable strategy for preventing hyperglycemia-induced cell damage and supporting the pharmacotherapy of diabetes and metabolic diseases. The antidiabetic potential of many food sources has been linked to the presence of polyphenolic metabolites, particularly flavonoids such as quercetin and its glycosylated form rutin. In line with this evidence, in the present study we assayed the potential anti-hyperglycemic activity of an ethyl acetate extract from the peel of Punica granatum L. var. Dente di Cavallo (PGE), a fruit well known in traditional medicine for the beneficial properties of its edible juice. The effect of the extract on glucidic metabolism was evaluated by assessing its ability to inhibit α-amylase and α-glucosidase, two digestive enzymes responsible for the hydrolysis of dietary carbohydrates: their inhibition can delay carbohydrate digestion and reduce glucose absorption, thus representing an important strategy for the management of hyperglycemia. Also, the ability of PGE to block the release of advanced glycation end-products (AGEs), whose accumulation is known to be responsible for diabetic vascular complications, was studied. The iron-reducing and chelating activities, which are the primary mechanisms by which AGE inhibitors stop their metal-catalyzed formation, were evaluated as possible antioxidant mechanisms. Lastly, the phenolic content of PGE was characterized by chromatographic and spectrophotometric methods. Our results displayed the ability of PGE to inhibit the α-amylase enzyme with a potency similar to that of the positive control: the IC₅₀ values were 52.2 (CL 27.7-101.2) µg/ml and 35.6 (CL 22.8-55.5) µg/ml for acarbose and PGE, respectively.
PGE also inhibited the α-glucosidase enzyme with about 25-fold higher potency than the positive controls acarbose and quercetin. Furthermore, the extract exhibited ferrous and ferric ion chelating ability, with maximum effects of 82.1% and 80.6%, respectively, at a concentration of 250 µg/ml, and reducing properties, reaching a maximum effect of 80.5% at a concentration of 10 µg/ml. Lastly, PGE was found able to inhibit AGE production (maximum inhibition of 82.2% at a concentration of 1000 µg/ml), although with lower potency with respect to the positive control rutin. The phytochemical analysis of PGE displayed high levels of total polyphenols, tannins and flavonoids, among which ellagic acid, gallic acid and catechin were identified. Altogether, these data highlight the ability of PGE to control carbohydrate metabolism at different levels, both by inhibiting the metabolic enzymes and by affecting AGE formation, likely through chelating mechanisms. It is also noteworthy that pomegranate peel, although a waste product of juice production, can be viewed as a nutraceutical source. In conclusion, the present results suggest a possible role of PGE as a remedy for preventing hyperglycemia complications and encourage further in vivo studies.
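IC₅₀ values such as those reported above are read off a dose-response curve. A minimal sketch using log-linear interpolation follows; the data points are hypothetical, for illustration, not the assay's values:

```python
import numpy as np

def ic50(conc, inhibition):
    """Interpolate the 50% inhibition point on a log10(concentration) axis.
    Assumes inhibition increases monotonically with concentration."""
    logc = np.log10(conc)
    i = np.searchsorted(inhibition, 50.0)        # first point at or above 50%
    frac = (50.0 - inhibition[i - 1]) / (inhibition[i] - inhibition[i - 1])
    return 10 ** (logc[i - 1] + frac * (logc[i] - logc[i - 1]))

# Hypothetical % inhibition of alpha-amylase at increasing extract doses (µg/ml)
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
inh = np.array([12.0, 24.0, 39.0, 55.0, 71.0, 84.0])

ic50_value = ic50(conc, inh)   # in µg/ml
```

In practice, a four-parameter logistic fit over the whole curve is preferred when enough points are available; interpolation between the two bracketing doses is the simplest defensible estimate.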

Keywords: anti-hyperglycemic activity, antioxidant properties, nutraceuticals, polyphenols, pomegranate

Procedia PDF Downloads 154
98 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach

Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa

Abstract:

Drug discovery is shifting from small-molecule drugs targeting local active sites to middle molecules (MMs) targeting large, flat, groove-shaped binding sites, for example protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as "difficult to drug" with traditional small molecules. Hence, MMs such as peptides, natural products, glycans and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for "undruggable" intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments.
The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO QM calculations. For this purpose, the natural compound syringolin and its analogues were considered. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Moller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues go through the medium on a nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. Conclusively, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations, and insight into this behavior is thoroughly investigated using FMO QM calculations. The results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are interesting and are a nice example of how this theoretical calculation approach could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.

Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation

Procedia PDF Downloads 145
97 Accumulation of Trace Metals in Leaf Vegetables Cultivated in High Traffic Areas in Ghent, Belgium

Authors: Veronique Troch, Wouter Van der Borght, Véronique De Bleeker, Bram Marynissen, Nathan Van der Eecken, Gijs Du Laing

Abstract:

Among the challenges associated with increased urban food production are health risks from food contamination, due to the higher pollution loads in urban areas compared to rural sites. Accordingly, the risk posed by industrial or traffic pollution to locally grown food was defined as one of five high-priority issues of urban agriculture requiring further investigation. The impact of air pollution on urban horticulture is the subject of this study. More particularly, this study focuses on the atmospheric deposition of trace metals on leaf vegetables cultivated in the city of Ghent, Belgium. Ghent is a particularly interesting study site as it actively promotes urban agriculture. Plants accumulate heavy metals by absorption from contaminated soils and through deposition on parts exposed to polluted air. Accumulation of trace metals in vegetation grown near roads has been shown to be significantly higher than in vegetation grown in rural areas, due to traffic-related contaminants in the air. Studies of vegetables demonstrated that the uptake and accumulation of trace metals differed among crop types, species and plant parts. Studies on vegetables and fruit trees in Berlin, Germany, revealed significant differences in trace metal concentrations depending on local traffic, crop species, planting style and parameters related to barriers between the sampling site and neighboring roads. This study aims to supplement this scarce research on heavy metal accumulation in urban horticulture. Samples of leaf vegetables were collected from different sites in Ghent, including allotment gardens. Trace metal contents of these leaf vegetables were analyzed by ICP-MS (inductively coupled plasma mass spectrometry). In addition, precipitation at each sampling site was collected by NILU-type bulk collectors and similarly analyzed for trace metals. At one sampling site, different parameters which might influence trace metal content in leaf vegetables were analyzed in detail.
These parameters are the distance of the planting site to the nearest road, barriers between the planting site and the nearest road, and the type of leaf vegetable. For comparison, a rural site, located farther from city traffic and industrial pollution, was included in this study. Preliminary results show a high correlation between the trace metal content of atmospheric deposition and the trace metal content of leaf vegetables. Moreover, significantly higher Pb, Cu and Fe concentrations were found on spinach collected from Ghent compared to spinach collected from the rural site. The distance of the planting site to the nearest road significantly affected the accumulation of Pb, Cu, Mo and Fe on spinach: concentrations of those elements on spinach increased with decreasing distance between the planting site and the nearest road. Preliminary results did not show a significant effect of barriers between the planting site and the nearest road on the accumulation of trace metals on leaf vegetables. The overall goal of this study is to complete and refine existing guidelines for urban gardening to exclude potential health risks from food contamination. Accordingly, this information can help city governments and civil society in the professionalization and sustainable development of urban agriculture.
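Distance effects like the one reported above are often summarized with an exponential roadside-deposition gradient. A minimal sketch of such a fit follows; the model choice and the numbers are illustrative assumptions, not the study's measurements or method:

```python
import numpy as np

# Hypothetical Pb content of spinach (mg/kg dry weight) at increasing
# distances (m) from the road edge; illustrative values only.
distance = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
pb = np.array([1.8, 1.4, 1.1, 0.8, 0.5, 0.4])

# Fit Pb = a * exp(-b * d) as a straight line on log(Pb)
slope, ln_a = np.polyfit(distance, np.log(pb), 1)
a, b = np.exp(ln_a), -slope

# Predicted content a few metres from the road vs. far away
near, far = a * np.exp(-b * 5.0), a * np.exp(-b * 100.0)
```

A positive decay constant b quantifies the "increases with decreasing distance" finding in a single number that can be compared across elements (Pb, Cu, Mo, Fe).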

Keywords: atmospheric deposition, leaf vegetables, trace metals, traffic pollution, urban agriculture

Procedia PDF Downloads 209
96 Global Evidence on the Seasonality of Enteric Infections, Malnutrition, and Livestock Ownership

Authors: Aishwarya Venkat, Anastasia Marshak, Ryan B. Simpson, Elena N. Naumova

Abstract:

Livestock ownership is simultaneously linked to improved nutritional status, through increased availability of animal-source protein, and to increased risk of enteric infections, through higher exposure to contaminated water sources. Agrarian and agro-pastoral households, especially those with cattle, goats, and sheep, are highly dependent on seasonally varying environmental conditions, which directly impact nutrition and health. This study explores global, spatiotemporally explicit evidence regarding the relationship between livestock ownership, enteric infections, and malnutrition. Seasonal and cyclical fluctuations, as well as mediating effects, are further examined to elucidate the health and nutrition outcomes of individual and communal livestock ownership. The US Agency for International Development's Demographic and Health Surveys (DHS) and the United Nations Children's Fund's Multiple Indicator Cluster Surveys (MICS) provide valuable sources of household-level information on anthropometry, asset ownership, and disease outcomes. These data are especially important in data-sparse regions, where surveys may only be conducted in the aftermath of emergencies. Child-level disease history, anthropometry, and household-level asset ownership information have been collected since DHS-V (2003-present) and MICS-III (2005-present). This analysis combines over 15 years of survey data from DHS and MICS to study 2,466,257 children under age five from 82 countries. Subnational (administrative level 1) measures of diarrhea prevalence, mean livestock ownership by type, and mean and median anthropometric measures (height-for-age, weight-for-age, and weight-for-height) were investigated. The effects of several environmental, market, community, and household-level determinants were studied.
Such covariates included precipitation, temperature, vegetation, market prices of staple cereals and animal-source proteins, conflict events, livelihood zones, wealth indices, and access to water, sanitation, hygiene, and public health services. Children aged 0 – 6 months, 6 months – 2 years, and 2 – 5 years were compared separately. All observations were standardized to the interview day of year, and administrative units were harmonized for consistent comparisons over time. Geographically weighted regressions were constructed for each outcome and subnational unit. Preliminary results demonstrate the importance of accounting for seasonality in concurrent assessments of malnutrition and enteric infections. Household assets, including livestock, often determine the intensity of these outcomes. In many regions, livestock ownership affects seasonal fluxes in malnutrition and enteric infections, which are also directly affected by environmental and local factors. Regression analysis demonstrates the spatiotemporal variability in nutrition outcomes due to a variety of causal factors. This analysis presents a synthesis of evidence from global survey data on the interrelationship between enteric infections, malnutrition, and livestock. These results provide a starting point for locally appropriate interventions designed to address this nexus in a timely manner and simultaneously improve health, nutrition, and livelihoods.
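The day-of-year standardization described above lends itself to harmonic (cosinor-style) regression for estimating seasonal amplitude and peak timing. A minimal sketch on synthetic data follows (the survey data themselves are not reproduced here, and this is only one of several ways to model seasonality):

```python
import numpy as np

def fit_seasonal_peak(doy, y, period=365.25):
    """Fit y = m + a*cos(wt) + b*sin(wt); return mean, amplitude, peak day."""
    w = 2.0 * np.pi * doy / period
    design = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    m, a, b = np.linalg.lstsq(design, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    peak_day = (np.arctan2(b, a) % (2.0 * np.pi)) * period / (2.0 * np.pi)
    return m, amplitude, peak_day

# Synthetic weekly diarrhea-prevalence series peaking near day 180 (illustrative)
doy = np.arange(1.0, 366.0, 7.0)
y = 0.12 + 0.05 * np.cos(2.0 * np.pi * (doy - 180.0) / 365.25)

mean_prev, amp, peak = fit_seasonal_peak(doy, y)
```

The fitted amplitude and peak day give comparable seasonality summaries across subnational units, which can then enter the geographically weighted regressions as outcomes or covariates.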

Keywords: diarrhea, enteric infections, households, livestock, malnutrition, seasonality

Procedia PDF Downloads 98
95 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator

Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra

Abstract:

We have developed a CFD-coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations are solved implicitly for mass, momentum and energy transfer along with aerosol dynamics. The computationally efficient framework can simulate the temporal behavior of the total number concentration and the number size distribution. This formulation uniquely couples a standard K-Epsilon scheme and a boundary layer model with detailed aerosol dynamics through residence time. The model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data, apart from other usual inputs, for its simulations. The model predictions show that bulk fluid motion and local heat distribution can significantly affect aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy-generated turbulence was found to affect parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results obtained from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. A condensation particle counter (CPC) and a scanning mobility particle sizer (SMPS) were used for measurement of the total number concentration and number size distribution at the outlet of the reactor cell during these experiments. Our model-predicted results were found to be in reasonable agreement with the observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for applications in the context of the behavior of aerosol particles generated by the glowing wire technique, and in general for other similar large-scale domains. Incorporation of CFD into an aerosol microphysics framework provides a realistic platform to study natural-convection-driven systems and applications.
Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with the Navier-Stokes equations, modified to include a buoyancy-coupled k-epsilon turbulence model. The coupled flow-aerosol dynamics equations were solved numerically with an implicit scheme. Wire composition and temperature (wire surface and cell domain) were obtained or measured to be used as inputs for the model simulations. Model simulations showed a significant effect of fluid properties on the dynamics of aerosol particles. The role of buoyancy was highlighted by the observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against the measured temporal evolution of total number concentration and the number size distribution at the outlet of the hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. The steady-state number size distribution matched very well for sub-10 nm particle diameters, while reasonable differences were noticed for higher size ranges. Although tuned specifically for the present context (i.e., aerosol generation from a hot wire generator), the model can also be used for diverse applications, e.g., emission of particles from hot zones (chimneys, exhaust), fires and atmospheric cloud dynamics.
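As a minimal illustration of the implicit time stepping described above (a sketch, not the authors' coupled solver), a backward-Euler step for coagulation-driven decay of total number concentration, dN/dt = -K N^2, can be solved exactly at each step via the quadratic formula; the rate constant and initial concentration below are assumed placeholder values:

```python
import math

def implicit_coagulation_step(n_old, k, dt):
    """One backward (implicit) Euler step for dN/dt = -K*N**2.

    Solves n_new = n_old - dt*k*n_new**2 exactly (positive root of
    dt*k*n_new**2 + n_new - n_old = 0), which keeps the update
    unconditionally stable and positive.
    """
    return (-1.0 + math.sqrt(1.0 + 4.0 * dt * k * n_old)) / (2.0 * dt * k)

# Illustrative values only (not from the paper): K in cm^3/s, N in 1/cm^3
K = 1e-9
N = 1e7
for _ in range(100):  # march 100 s in 1 s implicit steps
    N = implicit_coagulation_step(N, K, 1.0)
```

Because the implicit update is solved exactly, the step remains stable even for time steps where an explicit scheme would overshoot to negative concentrations.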

Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics

Procedia PDF Downloads 112
94 Influence of Atmospheric Pollutants on Child Respiratory Disease in Cartagena De Indias, Colombia

Authors: Jose A. Alvarez Aldegunde, Adrian Fernandez Sanchez, Matthew D. Menden, Bernardo Vila Rodriguez

Abstract:

Up to five statistical pre-processing steps have been carried out on the pollutant records of the monitoring stations in Cartagena de Indias, Colombia, also taking into account the childhood asthma incidence surveys conducted in the city's hospitals by the Health Ministry of Colombia. These pre-processing steps comprised techniques such as assessing the quality of data collection and of the registration network, identifying and debugging errors in data collection, completing missing data and purifying the data, and improving the time scale of the records. Data quality was characterized by means of density analysis of the pollutant registration stations using ArcGIS software and through mass balance techniques, making it possible to detect inconsistencies between stations' records via linear regression. The results of this process confirmed the good quality of the pollutant registration process. Debugging of errors allowed us to identify certain data as statistically non-significant in the incidence and contamination series. These data, together with certain missing records in the series recorded by the measuring stations, were completed by statistical imputation equations. Following these prior processes, the basic series of incidence data for respiratory disease and the pollutant records allowed the characterization of the influence of pollutants on respiratory diseases such as, for example, childhood asthma. This characterization was carried out using statistical correlation methods, including visual correlation, simple linear regression, and spectral analysis with PAST software, which identifies maximum and minimum periodicity cycles using the Lomb periodogram. 
Regarding part of the results obtained, up to eleven maxima and minima considered contemporaneous between the incidence records and the particulate records were identified by visual comparison. The spectral analyses performed on the incidence and PM2.5 series returned similar maximum periods in both registers, with one maximum at a period of one year and another every 25 days (0.9 and 0.07 years). The bivariate analysis ranked the variable "Daily Vehicular Flow" ninth in importance out of a total of 55 variables. However, the statistical correlation did not yield a favorable result, giving a low value of the R2 coefficient. The series of analyses conducted demonstrated the important influence of pollutants such as PM2.5 on the development of childhood asthma in Cartagena. Quantification of the influence of the variables determined a 56% probability of dependence between PM2.5 and childhood respiratory asthma in Cartagena. On this basis, the study could be completed through the application of the BenMap software, yielding spatial results of interpolated values of the pollutant records that exceeded the established legal limits (represented by homogeneous units down to the neighborhood level) and results of the impact on the exacerbation of pediatric asthma. As a final result, an economic estimate (in Colombian pesos) was provided of the monthly and individual savings derived from the percentage reduction of the influence of pollutants on visits to the hospital emergency room due to asthma exacerbation in pediatric patients.
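The Lomb periodogram step can be reproduced outside PAST with SciPy's `lombscargle`; the sketch below uses a synthetic, unevenly sampled yearly cycle (placeholder data, not the Cartagena records) and recovers the one-year period:

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical stand-in for the incidence records: a 365-day cycle sampled
# on irregular days, as health surveys often are.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 4 * 365, 300))   # observation days, uneven spacing
y = np.sin(2 * np.pi * t / 365.0)           # 1-year periodicity
y -= y.mean()                               # precenter, as lombscargle expects

periods = np.linspace(20, 500, 2000)        # candidate periods in days
freqs = 2 * np.pi / periods                 # lombscargle uses angular frequency
power = lombscargle(t, y, freqs)

best_period = periods[np.argmax(power)]     # dominant cycle in days
```

The same sweep over shorter candidate periods would expose the ~25-day secondary cycle reported in the abstract.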

Keywords: Asthma Incidence, BenMap, PM2.5, Statistical Analysis

Procedia PDF Downloads 86
93 Examining Three Psychosocial Factors of Tax Compliance in Self-Employed Individuals Using the MINDSPACE Framework - Evidence from Australia and Pakistan

Authors: Amna Tariq Shah

Abstract:

Amid the pandemic, the contemporary landscape has experienced accelerated growth in small business activities and an expanding digital marketplace, further exacerbating the issue of non-compliance among self-employed individuals through aggressive tax planning and evasion. This research seeks to address these challenges by developing strategic tax policies that promote voluntary compliance and improve taxpayer facilitation. The study employs the innovative MINDSPACE framework to examine three psychosocial factors—tax communication, tax literacy, and shaming—to optimize policy responses, address administrative shortcomings, and ensure adequate revenue collection for public goods and services. Preliminary findings suggest that incomprehensible communication from tax authorities drives individuals to seek alternative, potentially biased sources of tax information, thereby exacerbating non-compliance. Furthermore, the study reveals low tax literacy among Australian and Pakistani respondents, with many struggling to navigate complex tax processes and comprehend tax laws. Consequently, policy recommendations include simplifying tax return filing and enhancing pre-populated tax returns. In terms of shaming, the research indicates that Australians, being an individualistic society, may not respond well to shaming techniques due to privacy concerns. In contrast, Pakistanis, as a collectivistic society, may be more receptive to naming and shaming approaches. The study employs a mixed-method approach, utilizing interviews and surveys to analyze the issue in both jurisdictions. The use of mixed methods allows for a more comprehensive understanding of tax compliance behavior, combining the depth of qualitative insights with the generalizability of quantitative data, ultimately leading to more robust and well-informed policy recommendations. 
By examining evidence from opposite jurisdictions, namely a developed country (Australia) and a developing country (Pakistan), the study's applicability is enhanced, providing perspectives from two disparate contexts that offer insights from opposite ends of the economic, cultural, and social spectra. The non-comparative case study methodology offers valuable insights into human behavior, which can be applied to other jurisdictions as well. The application of the MINDSPACE framework in this research is particularly significant, as it introduces a novel approach to tax compliance behavior analysis. By integrating insights from behavioral economics, the framework enables a comprehensive understanding of the psychological and social factors influencing taxpayer decision-making, facilitating the development of targeted and effective policy interventions. This research carries substantial importance as it addresses critical challenges in tax compliance and administration, with far-reaching implications for revenue collection and the provision of public goods and services. By investigating the psychosocial factors that influence taxpayer behavior and utilizing the MINDSPACE framework, the study contributes invaluable insights to the field of tax policy. These insights can inform policymakers and tax administrators in developing more effective tax policies that enhance taxpayer facilitation, address administrative obstacles, promote a more equitable and efficient tax system, and foster voluntary compliance, ultimately strengthening the financial foundation of governments and communities.

Keywords: individual tax compliance behavior, psychosocial factors, tax non-compliance, tax policy

Procedia PDF Downloads 48
92 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem

Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze

Abstract:

In the modern world, emergency management decision support systems are actively used by state organizations, which are concerned with extreme and abnormal processes and provide optimal and safe management of the supplies needed by civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to infrastructure. In such cases, the use of intelligent support technologies is very important for quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system. Scientific research in this field plays an important role in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system to serve as an assistant tool providing optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions of the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of needed products. The second objective is to minimize the total selection unreliability index of opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. 
The fourth objective minimizes the non-covered demand over all demand points. Possibility (chance) constraints and objective constraints were constructed from objective-subjective data. The FMOELTP was formulated in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions of the problem. It is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, the exact method can require long computing times; we therefore propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
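The epsilon-constraint idea can be sketched on a toy bi-objective instance (the transport-time and unreliability values are illustrative placeholders, not the paper's model): minimize the first objective subject to a bound on the second, then sweep the bound to trace the Pareto front:

```python
# Toy bi-objective location plans: (transport_time, unreliability).
# Values are illustrative, not from the paper; plan "D" is dominated.
plans = {
    "A": (10.0, 0.9),
    "B": (12.0, 0.5),
    "C": (15.0, 0.3),
    "D": (16.0, 0.6),
    "E": (20.0, 0.1),
}

def epsilon_constraint(plans, eps):
    """Minimize objective 1 subject to objective 2 <= eps."""
    feasible = {k: v for k, v in plans.items() if v[1] <= eps}
    if not feasible:
        return None
    return min(feasible, key=lambda k: feasible[k][0])

# Sweep epsilon over the attained values of the second objective;
# each new optimum is a Pareto-optimal plan.
front = []
for eps in sorted({v[1] for v in plans.values()}):
    best = epsilon_constraint(plans, eps)
    if best is not None and best not in front:
        front.append(best)
```

Dominated plans (here "D") never appear on the front, which is the property the exact method above relies on; in the real four-objective FMOELTP, three objectives are bounded while the fourth is minimized.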

Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem

Procedia PDF Downloads 292
91 Housing Recovery in Heavily Damaged Communities in New Jersey after Hurricane Sandy

Authors: Chenyi Ma

Abstract:

Background: The second costliest hurricane in U.S. history, Sandy made landfall in southern New Jersey on October 29, 2012, and struck the entire state with high winds and torrential rains. The disaster killed more than 100 people, left more than 8.5 million households without power, and damaged or destroyed more than 200,000 homes across the state. Immediately after the disaster, public policy support was provided in the nine coastal counties that contained 98% of the major and severely damaged housing units in NJ overall. The programs included the Individuals and Households Assistance Program, the Small Business Loan Program, the National Flood Insurance Program, and the Federal Emergency Management Agency (FEMA) Public Assistance Grant Program. In the most severely affected counties, additional funding was provided through the Community Development Block Grant: Reconstruction, Rehabilitation, Elevation, and Mitigation Program and the Homeowner Resettlement Program. How these policies, individually and as a whole, affected housing recovery across communities with different socioeconomic and demographic profiles has not yet been studied, particularly in relation to damage levels. The concept of community social vulnerability has been widely used to explain many aspects of natural disasters. Nevertheless, how communities are vulnerable has been less fully examined. Community resilience has been conceptualized as a protective factor against the negative impacts of disasters; however, how community resilience buffers the effects of vulnerability is not yet known. Because housing recovery is a dynamic social and economic process that varies according to context, this study examined the path from community vulnerability and resilience to housing recovery, looking at both community characteristics and policy interventions. 
Sample/Methods: This retrospective longitudinal case study compared a literature-identified set of pre-disaster community characteristics, the effects of multiple public policy programs, and a set of time-variant community resilience indicators to changes in housing stock (operationally defined as the percentage of building permits relative to total occupied housing units/households) between 2010 and 2014, two years before and after Hurricane Sandy. The sample consisted of 51 municipalities in the nine counties in which between 4% and 58% of housing units suffered either major or severe damage. Structural equation modeling (SEM) was used to determine the path from vulnerability to housing recovery, via the multiple public programs, separately and as a whole, and via the community resilience indicators. The spatial analytical tool ArcGIS 10.2 was used to show the spatial relations between housing recovery patterns and community vulnerability and resilience. Findings: Holding damage levels constant, communities with higher proportions of Hispanic households had significantly lower levels of housing recovery, while communities with households including an adult over age 65 had significantly higher levels of housing recovery. The contrast was partly due to the different levels of total public support the two types of community received. Further, while the public policy programs individually mediated the negative associations between African American and female-headed households and housing recovery, communities with larger proportions of African American, female-headed, and Hispanic households were "vulnerable" to lower levels of housing recovery because they lacked sufficient public program support. Even so, higher employment rates and incomes buffered vulnerability to lower housing recovery. Because housing is the "wobbly pillar" of the welfare state, the housing needs of these particular groups should be more fully addressed by disaster policy.
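The mediation path the SEM estimates (vulnerability -> public program support -> recovery) can be sketched with synthetic data and plain least squares; this is a minimal illustration of the path logic, not the study's full SEM or its data:

```python
import numpy as np

# Synthetic data under an assumed mediation structure: higher vulnerability
# attracts less program support (path a), and more support raises recovery
# (path b), leaving a small direct effect (c').
rng = np.random.default_rng(1)
n = 500
vulnerability = rng.normal(size=n)
support = -0.6 * vulnerability + rng.normal(scale=0.5, size=n)
recovery = 0.8 * support + 0.1 * vulnerability + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(vulnerability, support)            # path a
X = np.column_stack([np.ones(n), support, vulnerability])
beta, *_ = np.linalg.lstsq(X, recovery, rcond=None)
b = beta[1]                                      # path b, controlling for vulnerability
indirect_effect = a * b                          # effect mediated through support
```

A negative indirect effect here corresponds to the abstract's finding that insufficient program support transmitted community vulnerability into lower recovery.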

Keywords: community social vulnerability, community resilience, hurricane, public policy

Procedia PDF Downloads 338
90 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression

Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac

Abstract:

The rhizosphere soil is the dynamic environment of plant roots, characterized by the high biological activity of its inhabitants. Rhizospheric bacteria are recognized as effective biocontrol agents and are considered cardinal in alternative strategies for securing ecological plant disease management. The need to suppress fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the genus Bacillus have been recognized as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite this incontestable potential, the full commercialization of microbial biopesticides is still at a preliminary stage. Thus, there is a constant need to estimate the suitability of novel strains to serve as the central point of a viable bioprocess leading to market-ready product development. In the present study, 76 potential producing strains were isolated from rhizosphere soil sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation of strains started by resuspending 1 g of soil sample in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹ to 10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically different strains presumptively belonging to the genus Bacillus were obtained by the spread-plate technique. Cultivation of the isolated strains was carried out in Erlenmeyer flasks for 96 h at 28 °C and 170 rpm. 
The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. Mycelial growth inhibition was estimated based on antimicrobial activity testing of the cultivation broth by the diffusion method. For Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, for Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding the antimicrobial activity of the tested cultivation broth against the tested pathogens. The study results point to the significant in vitro potential of the isolated strains to be used as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and the mode of action of the selected strains as biocontrol agents. The following research should also involve bioprocess optimization steps to fully realize the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production.
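The one-way ANOVA step can be sketched with SciPy on hypothetical inhibition-zone diameters (the isolate names echo the abstract; the numbers are placeholders, not the study's measurements). Duncan's multiple range test is not in SciPy, so only the omnibus test is shown:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical inhibition-zone diameters (mm), 8 replicate plates per isolate.
rng = np.random.default_rng(2)
kro_4a = rng.normal(22.0, 1.0, size=8)
mah_1a = rng.normal(21.5, 1.0, size=8)
par_1 = rng.normal(12.0, 1.0, size=8)

# Omnibus test: do the isolates differ in mean inhibition zone diameter?
f_stat, p_value = f_oneway(kro_4a, mah_1a, par_1)
significant = p_value < 0.05
```

If the omnibus test rejects, a post-hoc procedure (Duncan's test in the study; Tukey's HSD is a common alternative in Python via `statsmodels`) then partitions the isolates into homogeneous groups.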

Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi

Procedia PDF Downloads 169
89 Comparative Proteomic Profiling of Planktonic and Biofilms from Staphylococcus aureus Using Tandem Mass Tag-Based Mass Spectrometry

Authors: Arifur Rahman, Ardeshir Amirkhani, Honghua Hu, Mark Molloy, Karen Vickery

Abstract:

Introduction and Objectives: Staphylococcus aureus and coagulase-negative staphylococci comprise approximately 65% of infections associated with medical devices and are well known for their biofilm-forming ability. Biofilm-related infections are extremely difficult to eradicate owing to their high tolerance to antibiotics and host immune defences. Currently, there is no efficient method for early biofilm detection. A better understanding to enable detection of biofilm-specific proteins in vitro and in vivo can be achieved by studying planktonic cells and different growth phases of biofilms using a proteome analysis approach. Our goal was to construct a reference map of planktonic and biofilm-associated proteins of S. aureus. Methods: The S. aureus reference strain (ATCC 25923) was used to grow a 24-hour planktonic culture, a 3-day wet biofilm (3DWB), and a 12-day wet biofilm (12DWB). Bacteria were grown in tryptic soy broth (TSB) liquid medium. The planktonic culture was harvested in the late logarithmic phase, and the Centers for Disease Control (CDC) biofilm reactor was used to grow the 3-day and 12-day hydrated biofilms, respectively. Samples were subjected to reduction, alkylation, and digestion steps prior to multiplex labelling using Tandem Mass Tag (TMT) 10-plex reagent (Thermo Fisher Scientific). The labelled samples were pooled and fractionated by high-pH RP-HPLC, followed by loading of the fractions on a nanoflow UPLC system (Eksigent UPLC system, AB SCIEX). Mass spectrometry (MS) data were collected on an Orbitrap Elite (Thermo Fisher Scientific) mass spectrometer. Protein identification and relative quantitation of protein levels were performed using Proteome Discoverer (version 1.3, Thermo Fisher Scientific). After the extraction of protein ratios with Proteome Discoverer, additional processing and statistical analysis were done using the TMTPrePro R package. 
Results and Discussion: The present study showed that considerable proteomic differences exist between planktonic cells and biofilms of S. aureus. We identified 1636 total extracellular secreted proteins, of which 350 (3DWB) and 137 (12DWB) showed significant abundance variation from the planktonic preparation. Of these, proteins up-regulated in both 3DWB and 12DWB included extracellular matrix-binding protein Ebh, enolase, transketolase, triosephosphate isomerase, chaperonin, peptidase, pyruvate kinase, hydrolase, aminotransferase, ribosomal proteins, acetyl-CoA acetyltransferase, DNA gyrase subunit A, glycine glycyltransferase, and others. Conversely, proteins down-regulated in both 3DWB and 12DWB included alpha- and delta-hemolysin, lipoteichoic acid synthase, enterotoxin I, serine protease, lipase, clumping factor B, regulatory protein Spx, phosphoglucomutase, and others. In addition, we identified a large percentage of hypothetical proteins, including unique proteins. A comprehensive knowledge of the planktonic and biofilm-associated proteins identified in S. aureus will therefore provide a basis for future studies on the development of vaccines and diagnostic biomarkers. Conclusions: In this study, we constructed an initial reference map of proteins associated with planktonic growth and various biofilm growth phases, which might be helpful for diagnosing biofilm-associated infections.
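A sketch of how relative abundance ratios (biofilm vs. planktonic) can be turned into up-/down-regulation calls; the protein names echo the abstract, but the ratio values and the 1.5-fold cutoff are illustrative assumptions, not the study's statistics:

```python
import math

# Illustrative biofilm/planktonic abundance ratios (placeholder values).
ratios = {
    "enolase": 2.1,           # up-regulated in biofilm
    "alpha-hemolysin": 0.4,   # down-regulated in biofilm
    "ribosomal protein": 1.05,
}

def classify(ratio, fold=1.5):
    """Flag proteins beyond a symmetric +/- log2 fold-change threshold."""
    lfc = math.log2(ratio)
    if lfc >= math.log2(fold):
        return "up"
    if lfc <= -math.log2(fold):
        return "down"
    return "unchanged"

calls = {name: classify(r) for name, r in ratios.items()}
```

Working in log2 space makes the up and down cutoffs symmetric (a 2-fold increase and a 2-fold decrease are equidistant from zero), which is why proteomics pipelines report log fold changes.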

Keywords: bacterial biofilms, CDC bioreactor, S. aureus, mass spectrometry, TMT

Procedia PDF Downloads 142
88 A 2-D and 3-D Embroidered Textrode Testing Framework Adhering to ISO Standards

Authors: Komal K., Cleary F., Wells J. S. G., Bennett L.

Abstract:

Smart fabric garments enable various monitoring applications across sectors such as healthcare, sports and fitness, and the military. Healthcare smart garments monitoring EEG, EMG, and ECG rely on the use of electrodes (dry or wet). However, such electrodes, when used for long-term monitoring, can cause discomfort and skin irritation for the wearer because of their inflexible structure and weight. Ongoing research has been investigating textile-based electrodes (textrodes) in order to provide more comfortable and usable fabric-based electrodes capable of intuitive biopotential monitoring. Progress has been made in this space, but textrodes still face a critical design challenge in maintaining consistent skin contact, which directly impacts signal quality. Furthermore, there is a lack of an ISO-based testing framework to validate electrode designs and assess their ability to achieve enhanced performance, strength, usability, and durability. This study proposes the development and evaluation of an ISO-compliant testing framework for standard 2-D and advanced 3-D embroidered textrode designs that have a unique structure intended to establish enhanced skin contact for the wearer. The testing framework leverages the ISO standards ISO 13934-1:2013 for tensile and zone-wise strength tests, ISO 13937-2 for tear tests, and ISO 6330 for washing, validating the textrodes' performance, a necessity for wearable health parameter monitoring applications. Five textrodes (C1-C5) were designed using EPCwin digitization software. Varying patterns such as running stitches, lock stitches, back-to-back stitches, and moss stitches were used to create the embroidered textrode samples using Madeira HC12 conductive thread with a resistivity of 100 ohm/m. The textrode designs were then fabricated using a ZSK technical embroidery machine. A comparative analysis was conducted based on a series of laboratory tests adhering to ISO compliance requirements. 
Strain-focused tests applied to the textrodes included: (1) analysis of each electrode's overall surface-area strength; (2) assessment of the robustness of the textrode boundaries; and (3) the assignment of fault test zones to each textrode, where vertical and horizontal slits of 3 mm were applied to evaluate textrode performance and durability. Specific ISO-compliant washing tests were conducted multiple times on each textrode sample to assess both mechanical and chemical damage. Additionally, abrasion and pilling tests were performed to evaluate mechanical damage on the surface of the textrodes and to compare it with the washing tests. Finally, the textrodes were assessed based on morphological and surface-resistance changes. Results demonstrate that textrode C4, featuring a 3-D layered structure consisting of foam, fabric, and conductive thread layers, significantly enhances skin-electrode contact for biopotential recording. The inclusion of a 3-D foam layer was particularly effective in maintaining the shape of the electrode during strain tests, making it the top-performing textrode sample. The layered 3-D design of textrode C4 therefore ranks highest when tested for durability, reusability, and washability. The ISO testing framework established in this study will support future research, validating the durability and reliability of textrodes for a wide range of applications.

Keywords: smart fabric, textrodes, testing framework, ISO compliant

Procedia PDF Downloads 32
87 An Artistic-Narrative Process for Reducing Suicide Risk Among Minority Stressed Individuals

Authors: Lewis Mehl-Madrona, Barbara Mainguy, Patrick McFarlane

Abstract:

Introduction: There are many risk factors for attempting suicide, including young age and "minority stress," which encompasses Transgender and Gender Diverse (TGD) orientations. The suicide attempt rate of TGD youth is three times that of heterosexual, cisgender youth. Half of TGD youth have seriously contemplated taking their own lives; of those, about half attempted suicide; and 18% of TGD teenagers reported suicidal thoughts linked to their gender identity. Native American TGD individuals have a six-times-higher suicide attempt rate. Conventional mental health care has not generally helped these individuals, and stigma and discrimination contribute to healthcare disparities. Storytelling plays a crucial role in the development of human culture and individual identities. Sharing narrative artwork, creative writing, and personal stories allows people to build trust and to share their vulnerabilities. This helps people become aware of themselves in relation to others and gain a sense of comfort that their stories are similar; they may also be transformed in the process. Art provides a means to reach people who are otherwise difficult to engage in services. Methods: TGD individuals are recruited through a snowballing procedure. Following a life story interview, participants complete a scale of gender dysphoria, a scale of identification with conventional masculinity, a patient-reported anxiety and depression measure, and a quality-of-life scale. The interview also includes the Columbia Suicide Scale. Following this, an artist and a therapist work with each participant to create a story related to their gender identity using the six-part story method. This story is then rendered into an artist's book, which combines narrative with art (drawings, collage, computer images, etc.) and can take the form of a graphic novella, a zine, or a comic book. The pages can range from plain to ornate, as can the covers. 
Participants describe their process of making the books as the work unfolds and then take part in an exit interview at the completion of their book, remarking on what has changed for them and how the process affected them. Results: Preliminary results show high levels of suicidal thoughts among this population, as expected. Participants engage enthusiastically in the life story interview process, in the construction of a story related to gender identity, and in the studio process of putting their story into the form of a graphic novel, zine, or comic book. Participants reported feeling more comfortable with their TGD identity after the process and more able to resist negative judgments from family members and society. Suicidal thoughts diminished, and participants reported improved emotional wellbeing. Quantitative analysis of the questionnaire data is underway. Conclusions: A process in which narrative therapy is combined with art therapy shows promise for attracting and helping TGD individuals to reduce their risk of suicide without the stigma of seeking mental health treatment. This process can be done outside of conventional mental health settings, such as on college and university campuses. It can provide an exciting alternative pathway for minority-stressed and stigmatized individuals to engage in reflective, psychotherapeutic work without the trappings of psychotherapy or mental health treatment.

Keywords: minority stress, narrative process, artists' books, life story interview

Procedia PDF Downloads 144
86 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool by drawing on the advantages of the others. Methods: The procedure was determined in two main stages: development of an initial model during meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a mixed quantitative-qualitative study. For the qualitative dimension, a focus-group method with an inductive approach was used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and of the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted for surgery in a day-clinic ward of a public hospital from October 2012 to June. Statistical Analysis Used: Qualitative data analysis was done through content analysis, and quantitative analysis through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was decomposed by the focus discussion group (FDG) members into five main phases. Then, with the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in the tables. 
The tables developed to calculate the RPN index contain three criteria for severity, two for probability, and two for preventability. Three failure modes were above the determined significant-risk threshold (RPN > 250). Over a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and it was found that the RPN numbers of the three reported misidentification events differed from the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for a proposed improvement action. The most important cause was the lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system within the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only a retrospective or only a prospective tool for risk management therefore does not work, and each organization must provide the conditions for the potential application of both kinds of methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
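The modified RPN scoring described above can be illustrated with a small sketch. The failure-mode names, criterion scores, and the choice of averaging each criterion group are hypothetical assumptions for illustration; only the three-group structure (severity, probability, preventability) and the significance threshold (RPN > 250) come from the abstract.

```python
# Hedged sketch of a modified FMEA RPN calculation.
# The study's tables use three severity criteria, two probability criteria,
# and two preventability criteria; how they are combined is not specified,
# so averaging each group before multiplying is an assumption here.

def rpn(severity, probability, preventability):
    """Risk Priority Number as the product of the three averaged group scores."""
    avg = lambda xs: sum(xs) / len(xs)
    return avg(severity) * avg(probability) * avg(preventability)

SIGNIFICANT = 250  # threshold from the study: RPN > 250 is significant

failure_modes = {
    # name: (severity criteria, probability criteria, preventability criteria)
    "patient misidentification": ([9, 8, 9], [6, 7], [7, 6]),
    "incomplete consent form":   ([4, 3, 4], [5, 4], [3, 2]),
}

for name, groups in failure_modes.items():
    score = rpn(*groups)
    flag = "SIGNIFICANT" if score > SIGNIFICANT else "acceptable"
    print(f"{name}: RPN = {score:.0f} ({flag})")
```

With such a table in place, re-scoring after an intervention is a one-line change to the criterion lists, which is what makes the edited RPN tables mentioned above practical for tracking improvement actions.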

Keywords: failure mode and effects analysis, risk management, root cause analysis, model

Procedia PDF Downloads 219
85 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization processes in high energy demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking; such an exercise is required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a temperature that prompts a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between the heat transfer phenomena and the limestone's endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure and also serves as a benchmark test, allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model was carried out separately from that of the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. 
A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer coefficient, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency showed a tendency to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated the inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% filling ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.
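The solution-verification step above rests on estimating the observed order of accuracy from solutions on systematically refined meshes. The sketch below shows the standard Richardson-style estimate from three grid levels; the solution values and refinement ratio are fabricated for illustration and are not data from the study.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from three solutions on meshes
    refined by a constant ratio r (classic Richardson-based estimate)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Fabricated solutions behaving as f(h) = f_exact + C*h**2, i.e. a
# second-order scheme such as a linear-element discretization.
f_exact, C = 1.0, 0.01
h = [4.0, 2.0, 1.0]                     # coarse, medium, fine spacing
f = [f_exact + C * s**2 for s in h]     # [1.16, 1.04, 1.01]

p = observed_order(f[0], f[1], f[2], r=2.0)
print(f"observed order p = {p:.2f}")    # recovers p = 2.00 for this data
```

When the observed p matches the formal order of the scheme (here 2, or 4 for quadratic elements), the discretization is behaving asymptotically and a numerical uncertainty can be quoted, as done in the abstract.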

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 114
84 Agenesis of the Corpus Callosum: The Role of Neuropsychological Assessment with Implications to Psychosocial Rehabilitation

Authors: Ron Dick, P. S. D. V. Prasadarao, Glenn Coltman

Abstract:

Agenesis of the corpus callosum (ACC) is a failure to develop the corpus callosum, the large bundle of fibers that connects the two cerebral hemispheres. It can occur as a partial or complete absence of the corpus callosum. In the general population, its estimated prevalence rate is 1 in 4000, and a wide range of genetic, infectious, vascular, and toxic causes have been attributed to this heterogeneous condition. The diagnosis of ACC is often achieved by neuroimaging procedures. Though persons with ACC can perform normally on intelligence tests, they generally present with a range of neuropsychological and social deficits. The deficit profile is characterized by poor coordination of motor movements, slow reaction time and processing speed, and poor memory. Socially, they present with deficits in communication, language processing, theory of mind, and interpersonal relationships. The present paper illustrates the role of neuropsychological assessment, with implications for psychosocial management, in a case of agenesis of the corpus callosum. Method: A 27-year-old left-handed Caucasian male with a history of ACC was self-referred for a neuropsychological assessment to assist him in his employment options. His parents had noted significant difficulties with coordination and balance at the early age of 2-3 years, and he was diagnosed with dyspraxia at the age of 14 years. History also indicated visual impairment, hypotonia, poor muscle coordination, and delayed development of motor milestones. An MRI scan indicated agenesis of the corpus callosum with the associated ventricular morphology of widely spaced parallel lateral ventricles and mild dilatation of the posterior horns; it also showed colpocephaly, a disproportionate enlargement of the occipital horns of the lateral ventricles, which might be affecting his motor abilities and contributing to his visual defects. The MRI scan ruled out other structural abnormalities or neonatal brain injury. 
At the time of assessment, the subject presented with such problems as poor coordination, slowed processing speed, poor organizational skills and time management, and difficulty with social cues and facial expressions. A comprehensive neuropsychological assessment was planned and conducted to assist in identifying the current neuropsychological profile to facilitate the formulation of a psychosocial and occupational rehabilitation programme. Results: General intellectual functioning was within the average range and his performance on memory-related tasks was adequate. Significant visuospatial and visuoconstructional deficits were evident across tests; constructional difficulties were seen in tasks such as copying a complex figure, building a tower and manipulating blocks. Poor visual scanning ability and visual motor speed were evident. Socially, the subject reported heightened social anxiety, difficulty in responding to cues in the social environment, and difficulty in developing intimate relationships. Conclusion: Persons with ACC are known to present with specific cognitive deficits and problems in social situations. Findings from the current neuropsychological assessment indicated significant visuospatial difficulties, poor visual scanning and problems in social interactions. His general intellectual functioning was within the average range. Based on the findings from the comprehensive neuropsychological assessment, a structured psychosocial rehabilitation programme was developed and recommended.

Keywords: agenesis, callosum, corpus, neuropsychology, psychosocial, rehabilitation

Procedia PDF Downloads 257
83 Moodle-Based E-Learning Course Development for Medical Interpreters

Authors: Naoko Ono, Junko Kato

Abstract:

According to the Ministry of Justice, 9,044,000 foreigners visited Japan in 2010, and the number of foreign residents in Japan was over 2,134,000 at the end of 2010. Further, medical tourism has emerged as a new area of business. Against this background, language barriers put the health of foreigners in Japan at risk, because they have difficulty accessing health care and communicating with medical professionals. Medical interpreting training is urgently needed in response to the language problems resulting from the rapid increase in the number of foreign workers in Japan over recent decades. In particular, with Tokyo selected as the host city of the 2020 Summer Olympics, there is a growing need for communication in international languages in Japanese medical settings. Because practical training activities in medical interpreting are limited, it is difficult for learners to acquire interpreting skills. To eliminate this shortcoming, a web-based English-Japanese medical interpreting training system was developed. We conducted a literature review to identify learning contents and core competencies for medical interpreters, using PubMed, PsycINFO, the Cochrane Library, and Google Scholar. Eleven papers indicating core competencies for medical interpreters were selected through the review. The core competencies abstracted from these papers showed consistency across previous research, whilst the content of the programs varied between domestic and international training programs for medical interpreters. Results of the systematic review indicated five core competencies: (a) maintaining accuracy and completeness; (b) medical terminology and understanding the human body; (c) behaving ethically and making ethical decisions; (d) nonverbal communication skills; and (e) cross-cultural communication skills. 
We then developed a web-based e-learning program for training medical interpreters that covers these competencies. The program included the following: an online word list (Quizlet), allowing students to study online and on their smartphones; a self-study tool (Quizlet) for help with dictation and spelling; a word quiz (Quizlet); a test-generating system (Quizlet); an interactive body game (BBC); an online resource for understanding the code of ethics in medical interpreting; a webinar about non-verbal communication; and a webinar about incompetent vs. competent cultural care. The design of a virtual environment allows the execution of complementary practice exercises for learners of medical interpreting and an introduction to the theoretical background of medical interpreting. Since this system adopts a self-learning style, it might mitigate the time restrictions and the lack of teaching materials that constrain the classroom method. In addition, as a teaching aid, virtual medical interpreting is a powerful resource for understanding how actual medical interpreting can be carried out. The developed e-learning system allows remote access, enabling students to practice at their own location without being physically present in a training venue, and empowers students by granting them access to the materials during their free time. A practical example will be presented in order to show the capabilities of the system. The developed web-based training program for medical interpreters could bridge the gap between medical professionals and patients with limited English proficiency.

Keywords: e-learning, language education, moodle, medical interpreting

Procedia PDF Downloads 332
82 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for the reception in countries where local audiovisual fiction consumption is far lower than American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western or sword-and-sandal films, for instance, generates linguistic expectations in international audiences who will accept more easily the sociolects assimilated by the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether it is translation for dubbing or subtitling, has diachronically evolved in many cases into a status of canonized sociolect, not only accepted but also required, by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target audiences of the former. However, this commercial strategy had already taken place decades earlier when Spain became a favored location for the shooting of foreign films in the early 1960s. 
The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglicized for that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, which chronologically ranges from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of the Spaghetti Western, from Una tumba para el sheriff (Mario Caiano, credited as William Hawkins; in English, Lone and Angry Man) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English, My Horse, My Gun, Your Widow, credited to John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 134
81 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on Board Game Rulebooks (BGRC), with direct application to the morphological analysis of neologisms and of tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing such extensive datasets, morphologists can gain valuable insights into how language functions and evolves, since these datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games such as Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. 
On the other hand, the designers and creators produce rulebooks in which they express their joy of discovering the hidden potential of language, igniting the imagination and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics, based on the complexity of the textual structure, difficulty, and board game category, will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to exploit this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
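A common first pass in neologism detection, which a rulebook corpus like the BGRC could support, is flagging tokens absent from a reference lexicon as candidates for manual morphological analysis. The mini-lexicon and rulebook sentence below are invented for illustration; a real pipeline would use a full dictionary word list and lemmatization.

```python
import re

# Toy reference lexicon; in practice this would be a full dictionary word list.
LEXICON = {"the", "player", "draws", "a", "card", "then", "each", "worker",
           "is", "placed", "on", "board", "and", "every"}

def neologism_candidates(text, lexicon):
    """Return out-of-vocabulary tokens (with counts) as neologism candidates."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = {}
    for tok in tokens:
        if tok not in lexicon:
            counts[tok] = counts.get(tok, 0) + 1
    return counts

rulebook = "The player draws a card, then each meeple is placed on the board."
print(neologism_candidates(rulebook, LEXICON))   # flags the gaming coinage 'meeple'
```

Candidates surfaced this way would then be classified by hand (or by a morphological model) into blends, clips, compounds, and true coinages, which is the kind of analysis the BGRC is designed to enable.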

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 55
80 Intrigues of Brand Activism versus Brand Antagonism in Rival Online Football Brand Communities: The Case of the Top Two Premier Football Clubs in Ghana

Authors: Joshua Doe, George Amoako

Abstract:

Purpose: In an increasingly digital world, the realm of sports fandom has extended its borders, creating a vibrant ecosystem of online communities centered around football clubs. This study ventures into the intricate interplay of motivations that drive football fans to respond to brand activism and its profound implications for brand antagonism and engagement among two of Ghana's most revered premier football clubs. Methods: A sample of 459 fervent fans from these two rival clubs was engaged through self-administered questionnaires distributed via social media and online platforms. Data were analysed using partial least squares structural equation modelling (PLS-SEM). Findings: The tapestry of motivations that weave through these online football communities is as diverse as the fans themselves. It becomes apparent that fans are propelled by a spectrum of incentives: they seek education, yearn for information, revel in entertainment, embrace socialization, and fortify their self-esteem through their interactions within these digital spaces. Yet, it is the nuanced distinction among these motivations that shapes the trajectory of brand antagonism and engagement. Surprisingly, the study reveals a remarkable pattern: football fans, despite their fierce rivalries, do not engage in brand antagonism based on educational pursuits, information-seeking endeavors, or socialization. Instead, it is motivations rooted in entertainment and self-esteem that serve as the fertile grounds for brand antagonism. Paradoxically, it is these very motivations, coupled with the desire for socialization, that nurture brand engagement, manifesting as active support and advocacy for their chosen club brand. Originality: Our research charts new waters by extending the boundaries of existing theories in the field. The Technology Acceptance Model, the Uses and Gratifications Theory, and Social Identity Theory all find new dimensions within the context of online brand community engagement. 
This not only deepens our understanding of the multifaceted world of online football fandom but also invites us to explore the implications these insights carry within the digital realm. Contribution to Practice: For marketers, our findings offer a treasure trove of actionable insights. They beckon the development of targeted content strategies that resonate with fan motivations. The implementation of brand advocacy programs, fostering opportunities for socialization, and the effective management of brand antagonism emerge as pivotal strategies. Furthermore, the utilization of data-driven insights is poised to refine consumer engagement strategies and strengthen brand affinity. Future Studies: For future studies, we advocate for longitudinal, cross-cultural, and qualitative studies that could shed further light on this topic. Comparative analyses across different types of online brand communities, an exploration of the role of brand community leaders, and inquiries into the factors that contribute to brand community dissolution all beckon the research community. Furthermore, understanding motivation-specific antagonistic behaviors and the intricate relationship between information-seeking and engagement present exciting avenues for further exploration. This study unfurls a vibrant tapestry of fan motivations, brand activism, and rivalry within online football communities. It extends a hand to scholars and marketers alike, inviting them to embark on a journey through this captivating digital realm, where passion, rivalry, and engagement harmonize to shape the world of sports fandom as we know it.

Keywords: online brand engagement, football fans, brand antagonism, motivations

Procedia PDF Downloads 35
79 Biostabilisation of Sediments for the Protection of Marine Infrastructure from Scour

Authors: Rob Schindler

Abstract:

Industry-standard methods of mitigating erosion of seabed sediments rely on 'hard engineering' approaches, which have numerous environmental shortcomings: (1) direct loss of habitat through the smothering of benthic species, (2) disruption of sediment transport processes, damaging geomorphic and ecosystem functionality, (3) generation of secondary erosion problems, (4) introduction of material that may propagate non-local species, and (5) provision of pathways for the spread of invasive species. Recent studies have also revealed the importance of biological cohesion, the result of naturally occurring extracellular polymeric substances (EPS), in stabilizing natural sediments. Mimicking these strong bonding kinetics through the deliberate addition of EPS to sediments, henceforth termed 'biostabilisation', offers a means to mitigate erosion induced by structures or by episodic increases in hydrodynamic forcing (e.g. storms and floods) whilst avoiding, or reducing, hard engineering. Here we present unique experiments that systematically examine how biostabilisation reduces scour around a monopile in a current, a first step to realizing the potential of this new method of scour reduction for a wide range of engineering purposes in aquatic substrates. Experiments were performed in Plymouth University's recirculating sediment flume, which includes a recessed scour pit. The model monopile was 0.048 m in diameter, D. Assuming a prototype monopile diameter of 2.0 m yields a geometric ratio of 41.67. When applied to a 10 m prototype water depth, this yields a model depth, d, of 0.24 m. The sediment pit containing the monopile was filled with different biostabilised substrata prepared using a mixture of fine sand (D50 = 230 μm) and EPS (xanthan gum). Nine sand-EPS mixtures were examined, spanning EPS contents of 0.0% < b0 < 0.50%. Scour development was measured using a laser point gauge along a 530 mm centreline at 10 mm increments at regular intervals over 5 h. 
Maximum scour depth and excavated area were determined at different time steps and plotted against time to yield equilibrium values. After 5 hours the current was stopped and a detailed scan of the final scour morphology was taken. Results show that increasing EPS content causes a progressive reduction in the equilibrium depth and lateral extent of scour, and hence in the excavated material. Very small amounts, equating to natural communities (< 0.1% by mass), reduce the rate, depth and extent of scour around monopiles. Furthermore, the strong linear relationships between EPS content, equilibrium scour depth, excavation area and the timescales of scouring offer a simple index on which to modify existing scour prediction methods. We conclude that the biostabilisation of sediments with EPS may offer a simple, cost-effective and ecologically sensitive means of reducing scour in a range of contexts including offshore wind farms (OWFs), bridge piers, pipeline installation, and void filling in rock armour. Biostabilisation may also reduce economic costs through (1) use of existing site sediments or waste dredged sediments, (2) reduced fabrication of materials, (3) lower transport costs, and (4) less dependence on specialist vessels and precise sub-sea assembly. Further, its potential environmental credentials may allow sensitive use of the seabed in marine protection zones across the globe.
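The geometric scaling used to size the flume model can be reproduced directly from the numbers quoted above (2.0 m prototype pile, 0.048 m model pile, 10 m prototype depth); the sketch below simply restates that arithmetic and is not additional data from the study.

```python
# Geometric (length-scale) similarity between prototype and flume model,
# using the dimensions quoted in the abstract.
D_prototype = 2.0        # prototype monopile diameter, m
D_model = 0.048          # model monopile diameter, m
depth_prototype = 10.0   # prototype water depth, m

scale = D_prototype / D_model          # geometric ratio, approx. 41.67
depth_model = depth_prototype / scale  # model water depth, approx. 0.24 m

print(f"geometric ratio = {scale:.2f}")        # 41.67
print(f"model depth d = {depth_model:.2f} m")  # 0.24 m
```

The same single length-scale ratio sizes every geometric quantity in the model, which is why the 530 mm measurement centreline and the 0.24 m water depth are consistent with the 0.048 m pile.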

Keywords: biostabilisation, EPS, marine, scour

Procedia PDF Downloads 142
78 Biophilic Design Strategies: Four Case-Studies from Northern Europe

Authors: Carmen García Sánchez

Abstract:

The UN's 17 Sustainable Development Goals, specifically numbers 3 and 11, urgently call for new architectural design solutions at different design scales to increase human contact with nature and thereby promote the health and wellbeing of primarily urban communities. The discipline of interior design offers an important alternative to large-scale nature-inclusive actions, which are not always possible due to space limitations. These circumstances provide an immense opportunity to integrate biophilic design, a complex, emerging and under-developed approach that pursues sustainable design strategies for increasing the human-nature connection through the experience of the built environment. Biophilic design explores the diverse ways humans are inherently inclined to affiliate with nature, attach meaning to it and derive benefit from the natural world. It represents a biological understanding of architecture whose categorization is still in progress. The internationally renowned Danish domestic architecture built in the 1950s and early 1960s, a golden age of Danish modern architecture, left a leading legacy that has greatly influenced the domestic sphere and has further led the world in terms of good design and welfare. This study examines how four existing post-war domestic buildings establish a dialogue with nature and her variations over time. The case-studies unveil both memorable and unique biophilic resources through sophisticated and original design expressions, where transformative processes connect the users to the natural setting and reflect fundamental ways in which they attach meaning to the place. In addition, fascinating analogies in terms of this interaction with nature in particular traditional Japanese architecture inform the research. They embody prevailing lessons for our time today. 
The research methodology is based on a thorough literature review combined with a phenomenological analysis of how these case-studies contribute to the connection between humans and nature, after conducting fieldwork throughout varying seasons to document the multi-sensory perception (via sight, touch, sound, smell, time and movement) of nature's transformations as a core research strategy. The cases' most outstanding features have been studied according to the following key parameters: 1. Space: 1.1. relationships (itineraries); 1.2. measures/scale; 2. Context: landscape reading in different weather/seasonal conditions; 3. Tectonics: 3.1. constructive joints and element assembly; 3.2. structural order; 4. Materiality: 4.1. finishes; 4.2. colors; 4.3. tactile qualities; 5. Daylight interplay. Departing from an artistic-scientific exploration, this groundbreaking study provides sustainable practical design strategies, perspectives, and inspiration to boost humans' contact with nature through the experience of the interior built environment. Some strategies are associated with access to outdoor space or require ample space, while others can thrive in a dense urban context without direct access to the natural environment. The objective is not only to produce knowledge, but to phase biophilic design into the built environment, expanding its theory and practice into a new dimension. Its long-term vision is to efficiently enhance the health and well-being of urban communities through daily interaction with nature.

Keywords: sustainability, biophilic design, architectural design, interior design, nature, Danish architecture, Japanese architecture

Procedia PDF Downloads 43
77 A Review on Biological Control of Mosquito Vectors

Authors: Asim Abbasi, Muhammad Sufyan, Iqra, Hafiza Javaria Ashraf

Abstract:

Vector-borne diseases (VBDs) account for almost 17% of the global burden of infectious diseases. The advent of new drugs and the latest research in medical science have helped mankind compete with these lethal diseases, but diseases transmitted by different mosquito species, including filariasis, malaria, viral encephalitis and dengue, remain serious threats for people living in disease-endemic areas. Injudicious and repeated use of pesticides has imposed selection pressure on mosquitoes, leading to the development of resistance. Hence, biological control agents are under serious consideration by the scientific community for use in vector control programmes. Fish have a history of predating the immature stages of different aquatic insects, including mosquitoes. Noteworthy examples in Africa and Asia include Aphanius discolour and a fish in the Panchax group. Moreover, the common mosquito fish, Gambusia affinis, preys mostly on temporary-water mosquitoes such as anophelines, as compared to permanent-water breeders such as culicines. Mosquitoes belonging to the genus Toxorhynchites have a worldwide distribution and are mostly associated with predation of the other mosquito larvae sharing their habitat in natural and artificial water containers. These species are harmless to humans, as their adults do not suck human blood but feed on floral nectar. However, their activity is mostly temperature-dependent: Toxorhynchites brevipalpis consumes 359 Aedes aegypti larvae at 30-32 ºC, in contrast to 154 larvae at 20-26 ºC. Although many bacterial species have been isolated from mosquito cadavers, those belonging to the genus Bacillus have been found highly pathogenic against them. The successful species of this genus include Bacillus thuringiensis and Bacillus sphaericus. The prime targets of B. thuringiensis are mostly the immatures of the genera Aedes, Culex, Anopheles and Psorophora, while B. sphaericus is specifically toxic against species of Culex, Psorophora and Culiseta. 
Entomopathogenic nematodes of the family Mermithidae are also pathogenic to different mosquito species: eighty species of mosquitoes, including Anopheles, Aedes and Culex, have proved highly vulnerable to attack by two mermithid species, Romanomermis culicivorax and R. iyengari. Cytoplasmic polyhedrosis virus, isolated from cadavers of the mosquito species Culex tarsalis, was the first pathogenic virus described; other viruses pathogenic to culicines include iridoviruses, cytopolyhedrosis viruses, entomopoxviruses and parvoviruses. Protozoa of the division Microsporidia are the common pathogenic protozoans in mosquito populations, killing their hosts through the chronic effects of parasitism. Moreover, owing to their wide prevalence in anopheline mosquitoes and their transovarial and horizontal transmission from infected to healthy hosts, microsporidia of the genera Nosema and Amblyospora have received much attention in various mosquito control programmes. Fungus-based mycopesticides are used in the biological control of insect pests, with 47 fungal species reported virulent against different stages of mosquitoes. These include both aquatic fungi, i.e. species of Coelomomyces, Lagenidium giganteum and Culicinomyces clavosporus, and the terrestrial fungi Metarhizium anisopliae and Beauveria bassiana. Hence, it is concluded that the integrated use of all these biological control agents can contribute substantially to mosquito control programmes and is a pressing need of the time, to avoid repeated use of pesticides.

Keywords: entomopathogenic nematodes, protozoa, Toxorhynchites, vector-borne

Procedia PDF Downloads 236
76 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
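The graph construction and probabilistic sampling steps described above can be sketched roughly as follows. This is a minimal pure-Python illustration, not the authors' implementation: the transaction fields (`hash`, `block`, `index`, `from`), the choice of same-sender temporal edges, and the node-level sampling rule are all illustrative assumptions.

```python
import random

def build_tx_graph(transactions):
    """Build a simple graph: each transaction is a node; an edge links
    consecutive transactions of the same sender (a temporal relationship)."""
    nodes = {tx["hash"]: tx for tx in transactions}
    edges = []
    last_seen = {}  # sender address -> hash of that sender's previous transaction
    for tx in sorted(transactions, key=lambda t: (t["block"], t["index"])):
        prev = last_seen.get(tx["from"])
        if prev is not None:
            edges.append((prev, tx["hash"]))
        last_seen[tx["from"]] = tx["hash"]
    return nodes, edges

def sample_graph(nodes, edges, p, seed=0):
    """Probabilistic sampling: keep each node with probability p and
    retain only the edges whose endpoints both survive."""
    rng = random.Random(seed)
    kept = {h for h in nodes if rng.random() < p}
    kept_edges = [(a, b) for a, b in edges if a in kept and b in kept]
    return kept, kept_edges

# Toy data: four transactions from two senders across three blocks
txs = [
    {"hash": "t1", "block": 1, "index": 0, "from": "A"},
    {"hash": "t2", "block": 1, "index": 1, "from": "B"},
    {"hash": "t3", "block": 2, "index": 0, "from": "A"},
    {"hash": "t4", "block": 3, "index": 0, "from": "B"},
]
nodes, edges = build_tx_graph(txs)
print(len(nodes), edges)  # 4 nodes; temporal edges t1->t3 and t2->t4
```

A real pipeline would feed the sampled adjacency into a GCN and distribute `build_tx_graph` over block ranges; the sketch only shows the data representation and the sampling idea.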
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 39
75 Influence of Thermal Annealing on Phase Composition and Structure of a Quartz-Sericite Mineral

Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.

Abstract:

Raw materials with a high content of potassium oxide are widely used in ceramic technology to prevent or reduce the deformation of ceramic goods during drying and thermal annealing. Because of its low melting temperature, such material is also used to lower the thermal annealing temperature in the fabrication of ceramic goods [1,2]. The so-called "porcelain stone" or "China stone", a quartz-sericite (muscovite) mineral (SiO2 + KAl2[AlSi3O10](OH)2), can likewise be used to prevent deformation, as the potassium oxide content of muscovite is rather high [3]. To assess the suitability of this mineral for ceramic manufacture, the present article investigates the influence of thermal processing on its phase and chemical composition. As for other ceramic raw materials (kaolin, white-firing clays), the basic industrial quality requirements for a "porcelain stone" are: small particle size, relatively high uniformity of distribution of components and phases, white colour after firing, and a small content of colorant oxides or chromophores (Fe2O3, FeO, TiO2, etc.) [4,5]. In the present work, a natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for examination by scanning electron microscopy, and powder with particle sizes up to 63 μm was used for X-ray diffractometry and chemical analysis. Annealing of the samples was performed at 900, 1120 and 1350 °C for 1 hour. The chemical composition of the Boynaksay raw material, determined by chemical analysis, is presented in Table 1; for comparison, the compositions of raw materials from Russia and the USA are also given. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using a JEOL JXA-8800R electron probe scanning electron microscope. 
Figure 1 presents scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si and K atoms in the sample. As can be seen, the fine-grained, white and dense mineral comprises quartz, sericite and a small content of impurity minerals. Quartz crystals are mostly 80 to 500 μm in size; between the quartz crystals lie sericite inclusions of tabular form with a radiating structure, with crystal sizes of about 40-250 μm. Using data on interplanar distances [6,7] and the ASTM powder X-ray diffraction data, it is shown that the natural "porcelain stone" quartz-sericite consists of quartz (SiO2), sericite of muscovite type (KAl2[AlSi3O10](OH)2) and kaolinite (Al2O3·2SiO2·2H2O) (see Figure 2 and Table 2). As seen in Figure 3 and Table 3a, after annealing at 900 °C the quartz-sericite contains quartz (SiO2) and muscovite (KAl2[AlSi3O10](OH)2); the peaks related to kaolinite are absent. After annealing at 1120 °C, full disintegration of muscovite and formation of the mullite phase (3Al2O3·2SiO2) are observed (weak mullite peaks appear in Figure 3b and Table 3b). After annealing at 1350 °C, the samples contain crystalline phases of quartz and mullite (Figure 3c and Table 3c). Mullite is well known to give ceramics high density and abrasive and chemical stability. Thus, the experimental data obtained on the formation of the various phases during thermal annealing can be used in developing fabrication technology for advanced materials. Conclusion: The influence of thermal annealing in the range 900-1350 °C on the phase composition and structure of the quartz-sericite mineral is investigated. It is shown that annealing changes the phase content of the raw material: after annealing at 1350 °C, the samples contain crystalline phases of quartz and mullite, which give ceramics high density and abrasive and chemical stability.

Keywords: quartz-sericite, kaolinite, mullite, thermal processing

Procedia PDF Downloads 384
74 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India

Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony

Abstract:

The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of the vital parameters of mothers and neonates offers a potential opportunity to improve both access to care and the quality of care in vulnerable populations. We describe the design, development and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperature in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a 'real-time' communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. The requirements for the device were: a long battery life of up to 28 days (at a sampling frequency of 5 readings/hour); robustness; IP68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I and IIa for evaluation of safety and efficacy, in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies: 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU) and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-design sensor, about the thickness of three coins and weighing about 8 g, was secured onto the abdomen for babies and over the upper arm for adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as the reference standard. The accuracy was within ±0.1 °C of the reference standard in the temperature range of 25-40 °C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. 
The mean skin temperature measured in the adults by the sensor was about 2 °C lower than the axillary temperature reading (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13 °C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings; their mean skin temperature measured by the sensor was 0.6 °C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/armband and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time temperature monitoring is technically feasible and that this innovation appears promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
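The sensor-versus-reference comparisons above are standard paired analyses: for each subject group, the per-pair differences are summarised by their mean and tested with a paired t-test. A minimal sketch of that computation follows; the readings in it are illustrative values, not the trial data.

```python
import math

def paired_t_test(sensor, reference):
    """Paired t-test on matched readings: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-pair differences sensor - reference."""
    assert len(sensor) == len(reference)
    d = [s - r for s, r in zip(sensor, reference)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return mean_d, t  # compare |t| to a t-distribution with n-1 dof for p

# Illustrative paired readings in degrees C (not the study data)
sensor  = [34.0, 34.2, 33.9, 34.3, 34.1]
digital = [36.1, 36.0, 36.2, 36.1, 36.0]
mean_d, t = paired_t_test(sensor, digital)
print(round(mean_d, 2), round(t, 2))
```

In practice one would use a statistics library (e.g. a paired t-test routine) to obtain the p-value directly; the sketch only makes explicit what the reported t and mean-difference figures summarise.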

Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns

Procedia PDF Downloads 177