Search results for: prospective cohort studies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12324

534 Fault Diagnosis and Fault-Tolerant Control of Bilinear Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems can significantly affect both a building's expected energy performance and user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which combines a fault detection and identification scheme with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is, in some sense, "as close as possible" to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid over a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building's thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile.
A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also make it possible to estimate the magnitude of the fault. Once a fault is detected, the fault estimate feeds the second level, which reconfigures the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
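The residual-based detection idea described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual design: a two-zone bilinear thermal model with hypothetical matrices, an observer that inherits the plant's bilinear structure, and a residual that stays near zero until an additive actuator fault is injected at step 120.

```python
import numpy as np

# Hypothetical two-zone bilinear model: x[k+1] = A x + (N x) u + B u, y = C x
A = np.array([[0.95, 0.02], [0.03, 0.94]])   # inter-zone heat exchange (assumed)
N = np.array([[-0.10, 0.0], [0.0, -0.08]])   # bilinear actuation term (assumed)
B = np.array([[0.5], [0.4]])
C = np.eye(2)
L = np.array([[0.4, 0.0], [0.0, 0.4]])       # observer gain, chosen ad hoc

x = np.array([20.0, 21.0])                   # true zone temperatures
xh = np.array([15.0, 15.0])                  # observer estimate, deliberately off
residuals = []
for k in range(200):
    u = 1.0 + 0.5 * np.sin(0.05 * k)         # supply-air command
    fault = 0.8 if k >= 120 else 0.0         # additive actuator fault
    r = C @ x - C @ xh                       # residual: measured minus estimated
    residuals.append(np.linalg.norm(r))
    x = A @ x + (N @ x) * u + (B * (u + fault)).ravel()
    xh = A @ xh + (N @ xh) * u + (B * u).ravel() + L @ r  # bilinear observer
residuals = np.array(residuals)
print(residuals[100:120].max(), residuals[130:].mean())
```

Before the fault the residual converges to zero despite the initial estimation error; after the fault it settles at a persistent offset whose size reflects the fault magnitude, which is what allows the second level to reconfigure the control law.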

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings

Procedia PDF Downloads 172
533 Effect of Endurance Training on Serum Chemerin Levels and Lipid Profile of Plasma in Obese Women

Authors: A. Moghadasein, M. Ghasemi, S. Fazelifar

Abstract:

Aim: Chemerin is a novel adipokine that plays an important role in regulating lipid metabolism and adipogenesis. Chemerin depends on autocrine and paracrine signals for the differentiation and maturation of fat cells; it also regulates glucose uptake in fat cells and stimulates lipolysis. It has been reported that, in adipocytes, chemerin enhances insulin-stimulated glucose uptake and causes tyrosine phosphorylation of insulin receptor substrate. According to the literature, chemerin may increase insulin sensitivity in adipose tissue and is strongly associated with body mass index, triglycerides, and blood pressure in those with normal glucose tolerance. Limited information is available regarding the effect of exercise training on serum chemerin concentrations. The purpose of this study was to investigate the effect of endurance training on serum chemerin levels and plasma lipids in overweight women. Methodology: This study was quasi-experimental research with a pre-test/post-test design. After the required examination and verification of health status by the physician, 22 obese subjects (age: 35.64±5.55 yr, weight: 75.62±9.30 kg, body mass index: 32.4±1.6 kg/m2) were randomly assigned to aerobic training (n=12) and control (n=12) groups. Participants completed a questionnaire confirming no sports history during the past six months, no use of anti-hypertension drugs or hormone therapy, no cardiovascular problems, and no complete stoppage of the menstrual cycle. Aerobic training was performed 3 times weekly for 8 weeks. Resting levels of plasma chemerin and metabolic parameters were measured before and after the intervention. The control group did not participate in any training program. Ethical considerations included a complete description of the objectives to the study participants and ensuring the confidentiality of their information.
The Kolmogorov-Smirnov and Levene's tests were used to assess the normal distribution of the data and the homogeneity of variances, respectively. Repeated-measures analysis of variance was used to investigate intra-group changes and inter-group differences in the variables. Statistical operations were performed using SPSS 16, and the significance level of the tests was set at P < 0.05. Results: After 8 weeks of aerobic training, plasma chemerin levels were significantly decreased in the aerobic training group compared with the control group (p < 0.05). Concurrently, HDL-c levels were significantly decreased (p < 0.05), whereas cholesterol, TG, and LDL-c levels showed no significant changes (p > 0.05). No significant correlation between chemerin levels and weight loss was observed in the overweight subjects. Conclusion: The present study demonstrated that 8 weeks of aerobic training reduced serum chemerin concentrations in overweight women, whereas the aerobic training programme affected the lipid profile response of obese subjects differently. Further research is warranted to unravel the molecular mechanisms behind the range of responses and the role of serum chemerin.
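The statistical pipeline described above (normality check, homogeneity of variances, within-group comparison) can be sketched with the analogous SciPy tests. The data below are synthetic, the sample size is illustrative, and a paired t-test stands in for the repeated-measures ANOVA; none of the numbers come from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic pre/post chemerin values (ng/mL) for a training group of 12
pre = rng.normal(200, 25, 12)
post = pre - rng.normal(30, 10, 12)   # simulated post-training decrease

# Normality check: Kolmogorov-Smirnov on the standardized sample
ks_stat, ks_p = stats.kstest((pre - pre.mean()) / pre.std(ddof=1), "norm")
# Homogeneity of variances: Levene's test between pre and post
lev_stat, lev_p = stats.levene(pre, post)
# Simplified within-group comparison (paired t-test in place of RM-ANOVA)
t_stat, t_p = stats.ttest_rel(pre, post)

print(f"KS p={ks_p:.3f}, Levene p={lev_p:.3f}, paired t p={t_p:.4f}")
```

With the simulated decrease the paired test rejects at P < 0.05, mirroring the decision rule the abstract describes.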

Keywords: chemerin, aerobic training, lipid profile, obese women

Procedia PDF Downloads 489
532 Preparation of Biodegradable Methacrylic Nanoparticles by Semicontinuous Heterophase Polymerization for Drugs Loading: The Case of Acetylsalicylic Acid

Authors: J. Roberto Lopez, Hened Saade, Graciela Morales, Javier Enriquez, Raul G. Lopez

Abstract:

The implementation of systems based on nanostructures for drug delivery has gained relevance in recent studies focused on biomedical applications. Although several nanostructures can serve as drug carriers, the use of polymeric nanoparticles (PNP) has been widely studied for this purpose. The main issue for these nanostructures, however, is controlling the size below 50 nm with a narrow size distribution, because the particles must pass through different physiological barriers and avoid being filtered by the kidneys (< 10 nm) or the spleen (> 100 nm). Considering these and other factors, drug-loaded nanostructures with sizes between 10 and 50 nm are preferred in the development and study of PNP/drug systems. In this sense, Semicontinuous Heterophase Polymerization (SHP) offers the possibility of obtaining PNP in the desired size range. Accordingly, methacrylic copolymer nanoparticles were obtained by SHP. The reactions were carried out in a jacketed glass reactor with the required quantities of water, ammonium persulfate as initiator, sodium dodecyl sulfate/sodium dioctyl sulfosuccinate as surfactants, and methyl methacrylate and methacrylic acid as monomers at a molar ratio of 2/1, respectively. The monomer solution was dosed dropwise during the reaction at 70 °C under mechanical stirring at 650 rpm. Nanoparticles of poly(methyl methacrylate-co-methacrylic acid) were loaded with acetylsalicylic acid (ASA, aspirin) by a chemical adsorption technique. The purified latex was put in contact with a solution of ASA in dichloromethane (DCM) at 0.1, 0.2, 0.4 or 0.6 wt-% at 35 °C for 12 hours. Given the boiling point of DCM, as well as the densities of DCM and water, the loading process is complete once all the DCM has evaporated. The hydrodynamic diameter was measured after polymerization by quasi-elastic light scattering and transmission electron microscopy, before and after the loading procedures with ASA.
The quantitative and qualitative analyses of PNP loaded with ASA were carried out by infrared spectroscopy, differential scanning calorimetry, and thermogravimetric analysis. The molar mass distributions of the polymers were determined with a gel permeation chromatograph. The loading capacity and efficiency were determined by gravimetric analysis. The hydrodynamic diameter results for the methacrylic PNP without ASA showed a narrow distribution with an average particle size around 10 nm and a methyl methacrylate/methacrylic acid molar ratio of 2/1, the same composition as Eudragit S100, a commercial compound widely used as an excipient. Moreover, the latex was stabilized at a relatively high solids content (around 11%), with a monomer conversion of almost 95% and a number-average molecular weight around 400 kg/mol. The average particle size in the PNP/aspirin systems fluctuated between 18 and 24 nm depending on the initial percentage of aspirin in the loading process, with a drug content as high as 24% and a loading efficiency of 36%. Such average sizes have not been reported in the literature; thus, the methacrylic nanoparticles reported here can be loaded with a considerable amount of ASA and used as a drug carrier.
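The reported loading figures can be illustrated with a short back-of-the-envelope calculation. The basis (100 mg of ASA offered) and the definition of drug content as drug mass over total loaded-particle mass are assumptions for illustration only, not values or definitions taken from the study.

```python
# Hypothetical basis: 100 mg of ASA offered to a fixed mass of nanoparticles
drug_offered_mg = 100.0
# Loading efficiency 36%: fraction of the offered drug actually captured
drug_loaded_mg = drug_offered_mg * 0.36
# Drug content 24%: assumed here to mean drug / (drug + carrier) by mass,
# so the carrier mass consistent with these figures is:
carrier_mg = drug_loaded_mg / 0.24 - drug_loaded_mg
print(drug_loaded_mg, round(carrier_mg, 1))
```

Under these assumptions, 36 mg of drug bound to 114 mg of carrier reproduces both percentages, which is the kind of consistency check a gravimetric analysis provides.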

Keywords: aspirin, biocompatibility, biodegradable, Eudragit S100, methacrylic nanoparticles

Procedia PDF Downloads 140
531 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting

Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan

Abstract:

El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and the ocean in the tropical Pacific, which causes inconsistent warm/cold weather in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and it is therefore very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can help in taking adequate precautions and steps. Forecasting seasonal floods with a lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the strength of the teleconnection between ENSO and the flow of the Surma River and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed by correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma River from 1975 to 2015 are used in this analysis, and a remarkable increase in the correlation coefficient between flow and ENSO has been observed since 1985. From the cumulative frequency distribution of the standardized JAS flow, it was found that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow.
During an El Nino year (the warm episode of ENSO) this probability of exceedance drops to 23%, while in a La Nina year (the cold episode of ENSO) it increases to 78%. Discriminant analysis, also known as 'categoric prediction', has been performed to identify the possibilities for long-lead flood forecasting. It categorizes the flow data (high, average, and low) based on the classification of the predicted SST (warm, normal, and cold). From the discriminant analysis, it was found that for the Surma River the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders take long-term water resources decisions and formulate river basin management policies that will reduce possible damage to life, agriculture, and property.
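The mechanics of the teleconnection analysis described above (correlating JAS flow with Nino-4 SST, then comparing exceedance probabilities in warm and cold episodes) can be sketched as follows. The series are synthetic and the ±0.5 thresholds for warm/cold classification are arbitrary assumptions; nothing here reproduces the Surma results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 41                                   # one value per year, 1975-2015
sst_anom = rng.normal(0, 1, n)           # synthetic Nino-4 SST anomaly
# Synthetic JAS flow negatively tied to SST (warm ENSO -> lower flow)
flow = 1000 - 150 * sst_anom + rng.normal(0, 80, n)

# Teleconnection strength: Pearson correlation between SST and flow
r = np.corrcoef(sst_anom, flow)[0, 1]

# Conditional exceedance of the long-term mean in warm vs cold years
mean_flow = flow.mean()
warm = sst_anom > 0.5                    # crude El Nino proxy (assumed)
cold = sst_anom < -0.5                   # crude La Nina proxy (assumed)
p_exceed_warm = (flow[warm] > mean_flow).mean()
p_exceed_cold = (flow[cold] > mean_flow).mean()
print(f"r={r:.2f}, P(exceed|warm)={p_exceed_warm:.2f}, "
      f"P(exceed|cold)={p_exceed_cold:.2f}")
```

With a genuinely negative coupling, the exceedance probability conditioned on cold years comes out well above the one conditioned on warm years, the same asymmetry (78% vs 23%) the abstract reports for the Surma.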

Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma River, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index

Procedia PDF Downloads 153
530 C-Coordinated Chitosan Metal Complexes: Design, Synthesis and Antifungal Properties

Authors: Weixiang Liu, Yukun Qin, Song Liu, Pengcheng Li

Abstract:

Plant diseases can kill crops, causing great economic losses, and such diseases are usually caused by pathogenic fungi. Metal fungicides are a type of pesticide with the advantages of low cost, a broad antimicrobial spectrum, and a strong sterilization effect. However, the frequent and widespread application of traditional metal fungicides has caused serious problems such as environmental pollution, outbreaks of mites, and phytotoxicity. It is therefore critically necessary to discover new organic metal fungicide alternatives with a low metal content, low toxicity, and little influence on mites. Chitosan, the second most abundant natural polysaccharide after cellulose, has been shown to have broad-spectrum antifungal activity against a variety of fungi. However, the use of chitosan has been limited by its poor solubility and its weaker antifungal activity compared with commercial fungicides. Therefore, to improve water solubility and antifungal activity, many researchers have grafted active groups onto chitosan. The present work combined free metal ions with chitosan to prepare more potent antifungal chitosan derivatives: based on a condensation reaction, a chitosan derivative bearing an aminopyridine group was prepared and subsequently coordinated with cupric, zinc, and nickel ions to synthesize chitosan metal complexes. Calculations by density functional theory (DFT) show that the copper and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and all of them are coordinated by the carbon atom in the p-π conjugate group and the oxygen atoms of the acetate ion. The antifungal properties of the chitosan metal complexes against Phytophthora capsici (P. capsici), Gibberella zeae (G. zeae), Fusarium oxysporum (F. oxysporum) and Botrytis cinerea (B. cinerea) were also assayed. In addition, a plant toxicity experiment was carried out.
The experiments indicated that the derivatives have significantly enhanced antifungal activity after metal-ion complexation compared with the original chitosan. At 0.20 mg/mL, O-CSPX-Cu inhibited the growth of P. capsici by 100%, and O-CSPX-Ni inhibited the growth of B. cinerea by 87.5%. In general, their activities are better than those of the positive control oligosaccharides. The incorporation of the pyridine formyl groups appears to favor biological activity. A precise analysis of the ligand fashion confirmed the DFT picture: the copper and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and the carbon atoms of the p-π conjugate group and the oxygen atoms of the acetate ion are involved in the coordination of the metal ions. A phytotoxicity assay of O-CSPX-M was also conducted; unlike traditional metal fungicides, the metal complexes were not significantly toxic to wheat leaves. O-CSPX-Zn even increased the chlorophyll content of wheat leaves at 0.40 mg/mL, mainly because chitosan itself promotes plant growth and counteracts the phytotoxicity of the metal ions. The chitosan derivatives described here may lend themselves to future applicative studies in crop protection.

Keywords: coordination, chitosan, metal complex, antifungal properties

Procedia PDF Downloads 316
529 Evaluation of Natural Frequency of Single and Grouped Helical Piles

Authors: Maryam Shahbazi, Amy B. Cerato

Abstract:

The importance of a system's natural frequency (fn) emerges when the frequency of the vibration force matches the foundation's fn, causing resonance, an amplified response that may irreversibly damage the structure. Several factors such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and the physical structure affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of soil-pile systems, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on the dynamic characteristics of helical pile-soil systems from full-scale shake table tests that will allow engineers to predict a more realistic dynamic response under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m x 3.0 m in plan and 4.6 m high). Helical piles of two diameters (8.8 cm and 14 cm) were embedded in the soil box with corresponding lengths of 3.66 m (except one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, four piles of similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate a wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of the time-history responses measured by installed strain gauges and accelerometers was used to evaluate fn. Time-history records from either accelerometers or strain gauges were found to be acceptable for calculating fn. In this study, the presence of a pile slightly reduced the fn of the soil. Greater fn occurred for single piles with larger l/d ratios (higher slenderness ratio).
Also, regardless of the connection type, the more slender pile group, which is surrounded by more soil, yielded higher natural frequencies under white noise, possibly because it mobilizes more passive soil resistance around it. Within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group, the fn values are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile's fn; even greater reduction occurs in soil with lower density. Moreover, the fn of the dense sand under the white-noise signal was found to be 5.03 Hz, which was reduced by 44% when an earthquake with an acceleration of 0.5 g was applied. By knowing the factors affecting fn, the designer can match the properties of the soil to the type of pile and structure so as to avoid resonance. The quantitative results of this study assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquakes and applied machine loading.
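The FFT-based identification of fn described above can be sketched as follows. The sampling rate, damping, and noise level of the synthetic accelerometer record are assumed values; the reported 5.03 Hz is used only as the target frequency to recover from a peak pick of the spectrum.

```python
import numpy as np

fs = 200.0                               # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)             # 20 s record
fn_true = 5.03                           # dense-sand fn reported in the study
# Synthetic accelerometer record: lightly damped oscillation plus sensor noise
rng = np.random.default_rng(2)
response = (np.sin(2 * np.pi * fn_true * t) * np.exp(-0.05 * t)
            + 0.2 * rng.normal(size=t.size))

# Peak of the one-sided amplitude spectrum gives the natural frequency
spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
fn_est = freqs[np.argmax(spectrum)]
print(f"estimated fn = {fn_est:.2f} Hz")
```

The frequency resolution here is 1/20 s = 0.05 Hz, so a 20 s window is what makes a value like 5.03 Hz distinguishable from, say, 4.65 Hz in the first place.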

Keywords: helical pile, natural frequency, pile group, shake table, stiffness

Procedia PDF Downloads 133
528 The Lacuna in Understanding of Forensic Science amongst Law Practitioners in India

Authors: Poulomi Bhadra, Manjushree Palit, Sanjeev P. Sahni

Abstract:

Forensic science uses all branches of science for criminal investigation and trial and has increasingly emerged as an important tool in the administration of justice. However, the growth and development of this field in India has not been as rapid or widespread as in the more developed Western countries. For the successful administration of justice, it is important that all agencies involved in law enforcement adopt an inter-professional approach towards forensic science, which is presently lacking. In light of the alarmingly high average acquittal rate in India, this study aims to examine the lack of understanding and appreciation of the importance and scope of forensic evidence and expert opinions amongst law professionals such as lawyers and judges. Based on a study of trial court cases from Delhi and surrounding areas, the study underlines the areas of forensics in which the criminal justice system has noticeably erred. Using this information, the authors examine the extent of forensic understanding amongst legal professionals and attempt to identify conclusively the areas in which they need further appraisal. A cross-sectional study using a structured questionnaire was conducted amongst law professionals across age, gender, type, and years of experience in court, to determine their understanding of DNA, fingerprints, and other interdisciplinary scientific materials used as forensic evidence. The study assesses the level of understanding amongst lawyers with regard to DNA and fingerprint evidence and how it affects trial outcomes, and also aims to identify the factors that prevent credible and advanced awareness amongst legal personnel, amongst others. The survey identified areas of modern and advanced forensics, such as forensic entomology, anthropology, and cybercrime, in which Indian legal professionals are yet to attain a functional understanding.
It also brings to light what is commonly termed the 'CSI effect' in Western courtrooms and provides scope to study the existence of this phenomenon and its effects on Indian courts and their judgements. The study highlighted the prevalence of unchallenged expert testimony presented by the prosecution in criminal trials and impressed upon the judicial system the need for independent analysis and evaluation of the scientist's data and/or testimony by the defense. Overall, this study aims to establish a clearer and more rigorous understanding of why legal professionals should have a basic understanding of the interdisciplinary nature of the forensic sciences. Based on the aforementioned findings, the authors suggest various measures by which judges and lawyers might obtain extensive knowledge of the advances and promising potentialities of forensic science. These include promoting a forensic curriculum in legal studies at the Bachelor's and Master's levels as well as in mid-career professional courses. The formation of forensic-legal consultancies, in consultation with the Department of Justice, will not only assist in training police, military, and law personnel but will also encourage legal research in this field. These suggestions also aim to bridge the communication gap that presently exists between law practitioners, forensic scientists, and the general community's awareness of the criminal justice system.

Keywords: forensic science, Indian legal professionals, interdisciplinary awareness, legal education

Procedia PDF Downloads 341
527 Mixed Monolayer and PEG Linker Approaches to Creating Multifunctional Gold Nanoparticles

Authors: D. Dixon, J. Nicol, J. A. Coulter, E. Harrison

Abstract:

The ease with which they can be functionalized, combined with their excellent biocompatibility, makes gold nanoparticles (AuNPs) ideal candidates for various applications in nanomedicine. Indeed, several promising treatments are currently undergoing human clinical trials (CYT-6091 and AuroShell). A successful nanoparticle treatment must first evade the immune system, then accumulate within the target tissue, before entering the diseased cells and delivering the payload. In order to create a clinically relevant drug delivery system, contrast agent, or radiosensitizer, it is generally necessary to functionalize the AuNP surface with multiple groups, e.g., polyethylene glycol (PEG) for enhanced stability, targeting groups such as antibodies, peptides for enhanced internalization, and therapeutic agents. Creating such complex systems and characterizing their biological response remains a challenge. The two commonly used methods of attaching multiple groups to the surface of AuNPs are the creation of a mixed monolayer and the binding of groups to the AuNP surface via a bi-functional PEG linker. While excellent in-vitro and animal results have been reported for both approaches, further work is necessary to compare the two methods directly. In this study, AuNPs capped with both PEG and a receptor-mediated endocytosis (RME) peptide were prepared using both the mixed monolayer and PEG linker approaches. The PEG linker used was SH-PEG-SGA, which has a thiol at one end for AuNP attachment and an NHS ester at the other to bind to the peptide. The work builds upon previous studies carried out at the University of Ulster which investigated AuNP synthesis, the influence of PEG on stability in a range of media, and intracellular payload release. 18-19 nm citrate-capped AuNPs were prepared using the Turkevich method via the sodium citrate reduction of boiling 0.01 wt% chloroauric acid.
To produce PEG-capped AuNPs, the required amount of PEG-SH (5000 Mw) or SH-PEG-SGA (3000 Mw, Jenkem Technologies) was added, and the solution was stirred overnight at room temperature. The RME (sequence: CKKKKKKSEDEYPYVPN, Biomatik) co-functionalized samples were prepared by adding the required amount of peptide to the PEG-capped samples and stirring overnight. The appropriate amounts of PEG-SH and RME peptide were added to the AuNPs to produce a mixed monolayer consisting of approximately 50% PEG and 50% RME. The PEG linker samples were first fully capped with bi-functional PEG before being capped with the RME peptide. An increase in diameter from 18-19 nm for the 'as synthesized' AuNPs to 40-42 nm after PEG capping was observed via DLS. The presence of PEG and RME peptide on both the mixed monolayer and PEG linker co-functionalized samples was confirmed by both FTIR and TGA. Bi-functional PEG linkers allow the entire AuNP surface to be capped with PEG, enabling in-vitro stability to be achieved with a lower molecular weight PEG. The approach also allows the entire outer surface to be coated with peptide or other biologically active groups, while offering the promise of enhanced biological availability. The effect of mixed monolayer versus PEG linker attachment on both stability and non-specific protein corona interactions was also studied.

Keywords: nanomedicine, gold nanoparticles, PEG, biocompatibility

Procedia PDF Downloads 339
526 Addressing Housing Issue at Regional Level Planning: A Case Study of Mumbai Metropolitan Region

Authors: Bhakti Chitale

Abstract:

Mumbai, the business capital of India and one of the most crowded cities in the world, holds the biggest slum in Asia. The Mumbai Metropolitan Region (MMR) occupies an area of 4,035 sq. km with a population of 22.8 million people. This population is mostly urban, with 91% living in areas governed by Municipal Corporations and Councils and another 3% living in Census Towns. The region has 9 Municipal Corporations, 8 Municipal Councils, and around 1,000 villages. On the one hand, MMR makes the highest contribution to the nation's overall economy; on the other, it presents the intolerable picture of about 2 million people living in slums, or without even slum shelter, in totally unhygienic conditions and with little hope. Future generations will be adversely affected if a solution is not worked out; this study is an attempt to work out that solution. The Mumbai Metropolitan Region Development Authority (MMRDA) is a state government authority specially formed to govern the development of MMR. MMRDA is engaged in long-term planning, the promotion of new growth centres, the implementation of strategic projects, and the financing of infrastructure development. While preparing the master plan for MMR for the next 20 years, MMRDA conducted a detailed study of the housing scenario in MMR and possible options for improvement; the author was the officer in charge of that assignment. This paper sheds light on the outcomes of the research study, which range from the adverse effects of government policies, the automatic responses of the housing market, and effects on planning processes, to the changing needs of housing patterns worldwide due to changes in the social mechanism. It alerts urban planners, who usually focus on smart infrastructure development, to allied future dangers. This housing study explains the complexities, realities, and need for innovation in housing policies all over the world.
The paper further explains a few success stories and failure stories of government initiatives, with reasons. It gives a clear idea of the differences in housing needs among people from different economic groups and of the direct and indirect market pressures on low-cost housing. A striking phenomenon emerged: a large percentage of houses remain vacant in spite of the huge need. The housing market is affected by developments or other physical and financial changes taking place in nearby areas or cities, by changes in cities located far from the region, and by international investments or policy changes. Rather than depending solely on government action to generate affordable housing, the aim of the whole movement is to make housing markets generate such stock automatically while remaining sustainable. In summary, the paper sequentially elaborates the complete dynamics of housing in one of the most crowded urban areas in the world, the Mumbai Metropolitan Region, with extensive data, analysis, case studies, and recommendations.

Keywords: Mumbai India, slum housing, region planning, market recommendations

Procedia PDF Downloads 280
525 Inequality and Poverty Assessment on Affordable Housing in Austria: A Comprehensive Perspective on SDG 1 and SDG 10 (UniNEtZ Project)

Authors: M. Bukowski, K. Kreissl

Abstract:

Social and environmental pressures in our times bear threats that are often cross-border in scale, such as climate change, poverty-driven migration, demographic change, and socio-economic developments. One hot topic prevailing in many societies across Europe and worldwide concerns 'affordable housing' and poverty-driven international and domestic migration (including displacement through gentrification processes), considered here in an urban and regional context. The right to adequate housing and shelter is recognized in the Universal Declaration of Human Rights and the International Covenant on Economic, Social and Cultural Rights, and as such is considered a human right of the second generation. The decreasing supply of affordable housing, especially in urban areas, has reached dimensions that have led to a growing 'housing crisis'. This crisis, which has reached even middle-income homes, has an even more devastating impact on low-income and poor households, raising poverty levels. Therefore, understanding the connection between housing and poverty is vital to integrating and supporting the different stakeholders in order to tackle poverty. When it comes to issues of inequality and poverty within the SDG framework, multi-faceted stakeholders with different claims, the distribution of resources, and interactions with other development goals (spill-overs and trade-offs) make for a highly complex context. To contribute to a sustainable and fair society, and hence to support the UN Sustainable Development Goals, the University of Salzburg participates in the Austria-wide universities' network 'UniNEtZ'. The joint target is to develop an options report for the Austrian Government regarding the seventeen SDGs, an effort so far hosted by 18 Austrian universities.
In this vein, the University of Salzburg, i.e., the Centre for Ethics and Poverty Research, the Departments of Geography and Geology, and the Department of Sociology and Political Science, is focusing on SDG 1 (No Poverty) and SDG 10 (Reduced Inequalities). Our target and research focus is to assess and evaluate the status of SDGs 1 and 10 in Austria, to find possible solutions, and to support stakeholder integration. We aim to generate and deduce appropriate options as scientific support, from interdisciplinary research studies to 'Sustainable Development Goals and their Targets' in action. For this reason, and to deal with the complexity of the Agenda 2030, we have developed a special Model for Inequalities and Poverty Assessment (IPAM). Through the example of 'affordable housing', we provide insight into the situation, focusing on sustainable outcomes, including ethical and justice perceptions. The IPAM has proven to be a helpful tool for detecting the different imponderables of the Agenda 2030, assessing the situation, and showing gaps and options for ethical SDG actions that combine different SDG targets. Supported by expert and expert-group interviews, this assessment allows different stakeholders to get an overview of a complex and dynamic SDG challenge (here, housing), which is a necessary step in the action-finding process.

Keywords: affordable housing, inequality, poverty, sustainable development goals

Procedia PDF Downloads 105
524 Effects of AI-driven Applications on Bank Performance in West Africa

Authors: Ani Wilson Uchenna, Ogbonna Chikodi

Abstract:

This study examined the impact of artificial intelligence-driven applications on banks' performance in West Africa, using Nigeria and Ghana as case studies. Specifically, the study examined the extent to which the deployment of smart automated teller machines impacts banks' net worth within the reference period in Nigeria and Ghana. It ascertained the impact of point of sale services on banks' net worth within the reference period in Nigeria and Ghana. Thirdly, it verified the extent to which web pay services can influence banks' performance in Nigeria and Ghana, and finally, it determined the impact of mobile pay services on banks' performance in Nigeria and Ghana. The study used automated teller machines (ATM), point of sale services (POS), mobile pay services (MOP), and web pay services (WBP) as proxies for the explanatory variables, while bank net worth was used as the explained variable. The data for this study were sourced from the Central Bank of Nigeria (CBN) Statistical Bulletin, the Bank of Ghana (BoG) Statistical Bulletin, the Ghana payment systems oversight annual report, and the World Development Indicators (WDI). Furthermore, the mixed order of integration observed in the panel unit root test results justified the autoregressive distributed lag (ARDL) approach to data analysis, which the study adopted. While the cointegration test showed the existence of cointegration among the studied variables, the bounds test result confirmed the presence of a long-run relationship among the series. Again, the ARDL error correction estimate established a satisfactory (13.92%) speed of adjustment from long-run disequilibrium back to the short-run dynamic relationship. 
The study found that while automated teller machines (ATM) had a statistically significant impact on the bank net worth (BNW) of Nigeria and Ghana, point of sale services (POS) also had a statistically significant impact on bank net worth within the study period; mobile pay services were statistically significant in driving changes in the bank net worth of the countries of study, while web pay services (WBP) had no statistically significant impact on the bank net worth of the countries of reference. The study concluded that artificial intelligence-driven applications have a significant and positive impact on bank performance, with the exception of web pay, which had a negative impact on bank net worth. The study recommended that the management of banks in both Nigeria and Ghana encourage more investment in AI-powered smart ATMs aimed at delivering more secure banking services in order to increase revenue, discourage excessive queuing in the banking hall, reduce fraud, and minimize errors in processing transactions. Banks within the scope of this study should leverage modern technologies to checkmate the excesses of private POS operators in order to build more confidence among potential customers. Government should turn mobile pay services into a counter-terrorism tool by ensuring that the restriction of over-the-counter withdrawals to a minimum amount is maintained and by placing sanctions on withdrawals above that limit.
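The error-correction logic behind the reported 13.92% speed of adjustment can be illustrated with a minimal two-step estimate on synthetic data. This is a sketch only: the study used a panel ARDL bounds approach, and the series and the 0.15 adjustment rate below are illustrative, not the study's data.

```python
import numpy as np

def error_correction_speed(y, x):
    """Two-step, Engle-Granger-style estimate of the speed of adjustment.

    Step 1: long-run regression y_t = a + b*x_t + u_t.
    Step 2: short-run regression dy_t = c + d*dx_t + g*u_{t-1} + e_t.
    Returns g, the error-correction coefficient (expected to be negative)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                              # long-run residuals
    dy, dx, ulag = np.diff(y), np.diff(x), u[:-1]
    Z = np.column_stack([np.ones_like(dx), dx, ulag])
    gamma, *_ = np.linalg.lstsq(Z, dy, rcond=None)
    return gamma[2]

# Synthetic cointegrated pair: y closes ~15% of its gap to 2*x each period
# (hypothetical stand-in for, e.g., net worth adjusting to ATM deployment).
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))               # an I(1) driver series
y = np.empty_like(x)
y[0] = 2 * x[0]
for t in range(1, len(x)):
    y[t] = y[t - 1] - 0.15 * (y[t - 1] - 2 * x[t - 1]) + rng.normal(scale=0.1)

g = error_correction_speed(y, x)                  # estimate near the true -0.15
```

A negative `g` between -1 and 0 is what "satisfactory speed of adjustment" means in this setting: each period, roughly `|g|` of the previous period's deviation from the long-run relationship is corrected.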

Keywords: artificial intelligence (ai), bank performance, automated teller machines (atm), point of sale (pos)

Procedia PDF Downloads 7
523 Transformative Economic Policies in India: A Political Economy Analysis of IMF Influence, Sectoral Shifts, and Political Transitions

Authors: Vrajesh Rawal

Abstract:

India's economic landscape has witnessed significant transformations over the past decades, characterized by shifts from agrarian to service-oriented economies. Recently, there has been a growing emphasis on transitioning towards a manufacturing-led growth model driven by factors such as demographic changes, technological advancements, and evolving global trade dynamics. These changes reflect broader efforts to enhance industrialization, boost employment opportunities, and diversify the economic base beyond traditional sectors. Within this context, this research focuses on understanding the specific drivers and dynamics behind India's shift from a predominantly service-based economy to one centered on manufacturing. It seeks to explore how political ideologies influence economic policies and shape sectoral priorities, with a particular focus on contrasting approaches between the Indian National Congress (INC) and the Bharatiya Janata Party (BJP). Additionally, the study evaluates the alignment of IMF policy recommendations with India's economic goals and priorities within the theoretical frameworks of neoliberalism and political economy theory. Despite the extensive literature on India's economic reforms and political economy, there remains a gap in understanding how political ideology influences sectoral shifts and economic policy outcomes, particularly in the context of IMF recommendations. Existing studies often focus narrowly on either political ideologies or economic reforms without fully integrating both perspectives. This research aims to bridge this gap by providing a comprehensive analysis that integrates political economy theories with empirical evidence from political speeches, government documents, and IMF reports. 
Through qualitative content analysis of speeches by political leaders, document analysis of key governmental documents, and scrutiny of party manifestos, this research demonstrates how political ideologies translate into distinct economic strategies and developmental agendas. It highlights the extent to which IMF policy prescriptions align with India's economic objectives and how these interactions shape broader socio-economic outcomes. The theoretical framework of neoliberalism and political economy theory provides a lens to interpret these findings, offering insights into the complex interplay between economic policies, political ideologies, and institutional frameworks in India. The findings of this study are expected to provide valuable insights for policymakers, researchers, and practitioners involved in economic governance and development planning in India. By understanding the factors driving sectoral shifts and the influence of political ideologies on economic policies, policymakers can make informed decisions to foster sustainable economic growth and development. Implementation of these insights could contribute to refining policy frameworks, enhancing alignment with national development priorities, and optimizing engagement with international financial institutions like the IMF to better meet India's socio-economic challenges and opportunities in the evolving global context.

Keywords: political economy, international politics, social science, policy analysis

Procedia PDF Downloads 32
522 Synergistic Studies of Liposomes of Clove and Cinnamon Oil in Oral Health Care

Authors: Sandhya Parameswaran, Prajakta Dhuri

Abstract:

Despite great improvements in health care, the World Oral Health Report states that dental problems still persist, particularly among underprivileged groups in both developing and developed countries. Dental caries and periodontal diseases are identified as the most important oral health problems globally. Acidic foods and beverages can affect natural teeth, and chronic exposure often leads to the development of dental erosion, abrasion, and decay. In recent years, there has been increased interest in essential oils. These are secondary metabolites and possess antibacterial, antifungal, and antioxidant properties. Essential oils are volatile and chemically unstable in the presence of air, light, moisture, and high temperature. Hence, many novel methods, such as liposomal encapsulation of oils, have been introduced to enhance their stability and bioavailability. This research paper focuses on two essential oils, clove and cinnamon oil. Clove oil was obtained from Syzygium aromaticum Linn using a Clevenger apparatus. It contains eugenol and β-caryophyllene. Cinnamon oil, from the bark of Cinnamomum cassia, contains cinnamaldehyde. The objective of the current research was to develop a liposomal carrier system containing clove and cinnamon oil and to study their synergistic activity against dental pathogens when formulated as a gel. Methodology: The essential oils were first tested for their antimicrobial activity against the dental pathogens Lactobacillus acidophilus (MTCC No. 10307, MRS broth) and Streptococcus mutans (MTCC No. 890, Brain Heart Infusion agar). The oils were analysed by UV spectroscopy for eugenol and cinnamaldehyde content. Standard eugenol was linear between 5 ppm and 25 ppm at 282 nm, and standard cinnamaldehyde from 1 ppm to 5 ppm at 284 nm. The concentration of eugenol in clove oil was found to be 62.65% w/w, and that of cinnamaldehyde 5.15% w/w. The oils were then formulated into liposomes. 
Liposomes were prepared by the thin film hydration method using phospholipid, cholesterol, and the oils dissolved in a chloroform-methanol (3:1) mixture. The organic solvent was evaporated in a rotary evaporator above the lipid transition temperature. The film was hydrated with phosphate buffer (pH 5.5). The various batches of liposomes were characterized and compared for their size, loading rate, encapsulation efficiency, and morphology. When evaluated for entrapment efficiency, the prepared liposomes showed 65% entrapment for clove oil and 85% for cinnamon oil. They were also tested for their antimicrobial activity against dental pathogens, and their synergistic activity was studied. Based on the activity and the entrapment efficiency, the amount of liposomes required to prepare 1 g of the gel was calculated. The gel was prepared using a simple ointment base and contained 0.56% of cinnamon and clove liposomes. A simultaneous method of analysis for eugenol and cinnamaldehyde was then developed using HPLC. The prepared gels were then studied for their stability as per ICH guidelines. Conclusion: It was found that the liposomes exhibited spherical vesicles and protected the essential oils from degradation. Liposomes, therefore, constitute a suitable system for the encapsulation of volatile, unstable essential oil constituents.
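The UV assay described above rests on a linear calibration curve. A minimal sketch of how such a curve converts a sample absorbance into a concentration follows; only the 5-25 ppm eugenol range at 282 nm comes from the abstract, while the absorbance readings are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Calibration standards for eugenol at 282 nm: the 5-25 ppm range is from the
# abstract; these absorbance values (AU) are hypothetical placeholders.
conc_ppm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
absorbance = np.array([0.11, 0.21, 0.32, 0.42, 0.53])

slope, intercept = np.polyfit(conc_ppm, absorbance, 1)      # least-squares line
r_squared = np.corrcoef(conc_ppm, absorbance)[0, 1] ** 2    # linearity check

def ppm_from_absorbance(a):
    """Invert the calibration line to get the diluted sample's concentration."""
    return (a - intercept) / slope

# A diluted oil sample reading 0.36 AU is back-calculated to ppm; multiplying
# by the dilution factor would then give the % w/w content in the neat oil.
sample_ppm = ppm_from_absorbance(0.36)
```

In practice the curve is accepted only if `r_squared` is close to 1 over the stated range, which is what the reported linearity claims amount to.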

Keywords: cinnamon oil, clove oil, dental caries, liposomes

Procedia PDF Downloads 194
521 Sustainable Pavements with Reflective and Photoluminescent Properties

Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar

Abstract:

An alternative to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel and following the latter approach, an appropriate type of treatment has also been developed on bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety due to improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate was varied (porphyry, granite and limestone) and also the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day, at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands. 
Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night in the summer season with the highest radiation. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the applications designed has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, varying according to the proportions used. Not only the effect of the strontium aluminate and microsphere content has been observed, but also the influence of the colour of the base on which the paint is applied; the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.

Keywords: heat island, luminescent paints, reflective pavement, temperature reduction

Procedia PDF Downloads 30
520 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. This places new demands on model accuracy and requires the simulation of physical processes that could previously be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, and their application in practice. 
To address these questions, a survey was conducted among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is drawn to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision making for a hazard assessment. A discrepancy could be found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 326
519 Developing and Standardizing Individual Care Plan for Children in Conflict with Law in the State of Kerala

Authors: Kavitha Puthanveedu, Kasi Sekar, Preeti Jacob, Kavita Jangam

Abstract:

In India, the Juvenile Justice (Care and Protection of Children) Act, 2015, the law relating to children alleged and found to be in conflict with law, proposes to address the rehabilitation of children in conflict with law by catering to their basic rights: providing care and protection, development, treatment, and social re-integration. The major concerns identified in addressing the issues of children in conflict with law in Kerala, the southernmost state in India, were: 1. lack of psychological assessment for children in conflict with law; 2. poor psychosocial intervention for children in conflict with law on bail; 3. lack of psychosocial intervention or proper care and protection for CCL residing at observation and special homes; 4. lack of convergence with mental health care systems. Aim: To develop an individual care plan for children in conflict with law. Methodology: NIMHANS, a premier institute of mental health and neurosciences, collaborated with the Social Justice Department, Govt. of Kerala, to address this issue by developing a participatory methodology to implement psychosocial care in the existing services, integrating the activities through a multidisciplinary and multisectoral approach as per Sec. 18 of the JJ Act 2015. Developing the individual care plan: Key informant interviews and focus group discussions were conducted with multiple stakeholders consisting of legal officers, police, child protection officials, counselors, and home staff. Case studies were conducted among children in conflict with law. A checklist of 80 psychosocial problems among children in conflict with law was prepared, covering eight major issues identified through this process: family and parental characteristics, family interactions and relationships, stressful life events, social and environmental factors, the child's individual characteristics, education, child labour, and high-risk behavior. 
Standardised scales were used to identify anxiety, caseness, suicidality, and substance use among the children. This provided background data to understand the psychosocial problems experienced by children in conflict with law. In the second stage, a detailed plan of action was developed involving multiple stakeholders, including the Special Juvenile Police Unit, DCPO, JJB, and NGOs. The individual care plan was reviewed by a panel of 4 experts working in the area of children, followed by review by multiple stakeholders in the juvenile justice system such as magistrates, JJB members, legal cum probation officers, district child protection officers, social workers, and counselors. Necessary changes were made to the individual care plan at each stage; it was then pilot tested with 45 children for a period of one month and standardized for administration among children in conflict with law. Result: The individual care plan developed through this scientific process was standardized and is currently administered among children in conflict with law in 3 districts of the state of Kerala; it will be further implemented in the other 14 districts. The program was successful in developing a systematic approach to the psychosocial intervention of children in conflict with law that can be a forerunner for other states in India.

Keywords: psychosocial care, individual care plan, multidisciplinary, multisectoral

Procedia PDF Downloads 282
518 Relationship of Entrepreneurial Ecosystem Factors and Entrepreneurial Cognition: An Exploratory Study Applied to Regional and Metropolitan Ecosystems in New South Wales, Australia

Authors: Sumedha Weerasekara, Morgan Miles, Mark Morrison, Branka Krivokapic-Skoko

Abstract:

This paper is aimed at exploring the interrelationships between entrepreneurial ecosystem factors and entrepreneurial cognition in regional and metropolitan ecosystems. The entrepreneurial ecosystem factors examined include: culture, infrastructure, access to finance, informal networks, support services, access to universities, and the depth and breadth of the talent pool. Using a multivariate approach, we explore the impact of these ecosystem factors or elements on entrepreneurial cognition. In doing so, the existing bodies of knowledge from the literature on entrepreneurial ecosystems and cognition have been blended to explore the relationship between entrepreneurial ecosystem factors and cognition in a way not hitherto investigated. The concept of the entrepreneurial ecosystem has received increased attention as governments, universities, and communities have started to recognize the potential of integrated policies, structures, programs, and processes that foster entrepreneurship by supporting innovation, productivity, and employment growth. The notion of entrepreneurial ecosystems has evolved and grown with the advancement of theoretical research and empirical studies. Incorporating external factors such as culture, the political environment, and the economic environment within a single framework enhances the capacity to examine the functionality of the whole system and to better understand the interaction of the entrepreneurial actors and factors within it. The literature on clusters underplays the role of entrepreneurs and entrepreneurial management in creating and co-creating organizations, markets, and supporting ecosystems. Entrepreneurs are only one actor, following a limited set of roles and dependent upon many other factors to thrive. 
As a consequence, entrepreneurs and relevant authorities should be aware of the other actors and factors with which they engage and on which they rely, and make strategic choices to achieve both individual and collective objectives. The study used a stratified random sampling method to collect survey data from 12 different regions in regional and metropolitan New South Wales (NSW), Australia. A questionnaire was administered online among 512 small and medium enterprise owners operating their businesses in the selected 12 regions. Data were analyzed using descriptive techniques and partial least squares structural equation modeling (PLS-SEM). The findings show that even though there are significant relationships among the entrepreneurial ecosystem factors themselves, the relationship between most entrepreneurial ecosystem factors and entrepreneurial cognition is weak. In the metropolitan context, the availability of finance and informal networks have the largest impact on entrepreneurial cognition; culture, infrastructure, and support services have the smallest impact; and the talent pool and universities have a moderate impact. Interestingly, in the regional context, culture, availability of finance, and the talent pool have the highest impact on entrepreneurial cognition, informal networks have the smallest impact, and the remaining factors (infrastructure, universities, and support services) have a moderate impact. These findings suggest the need for location-specific strategies for supporting the development of entrepreneurial cognition.

Keywords: academic achievement, colour response card, feedback

Procedia PDF Downloads 143
517 Mapping the State of the Art of European Companies Doing Social Business at the Base of the Economic Pyramid as an Advanced Form of Strategic Corporate Social Responsibility

Authors: Claudio Di Benedetto, Irene Bengo

Abstract:

The objective of the paper is to study how large European companies develop social business (SB) at the base of the economic pyramid (BoP). BoP markets are defined as the four billion people living on an annual income below $3,260 in local purchasing power. Although heterogeneous in geographic range, they present some common characteristics: significant unmet (social) needs, a high level of informal economy, and the so-called 'poverty penalty'. As a result, most people living at the BoP are excluded from the value created by the global market economy. It is worth noting, however, that the BoP population, with an aggregate purchasing power of around $5 trillion a year, represents a huge opportunity for companies that want to enhance their long-term profitability perspective. We suggest that in this context, the development of SB is, for companies, an innovative and promising way to satisfy unmet social needs and to experience new forms of value creation. Indeed, SB can be considered a strategic model for developing CSR programs that fully integrate the social dimension into the business to create economic and social value simultaneously. Although many studies in the literature have been conducted on social business, only a few have explicitly analyzed the phenomenon from a company perspective, and the role of companies in the development of such initiatives remains understudied, with fragmented results. To fill this gap, the paper analyzes the key characteristics of the social business initiatives developed by European companies at the BoP. The study was performed by analyzing 1,475 European companies participating in the United Nations Global Compact, the world's leading corporate social responsibility program. Through the analysis of the corporate websites, the study identifies companies that actually do SB at the BoP. 
For the SB initiatives identified, information was collected according to a framework adapted from the SB model in the literature. Preliminary results show that more than one hundred European companies have already implemented social businesses at the BoP, accounting for 6.5% of the total. This percentage increases to 15% if the focus is on companies with more than 10,440 employees. In terms of geographic distribution, 80% of companies doing SB at the BoP are located in western and southern Europe. The companies most active in promoting SB belong to the financial sector (20%), the energy sector (17%), and the food and beverage sector (12%). In terms of social needs addressed, almost 30% of the companies develop SB to provide access to energy and WASH, 25% develop SB to reduce local unemployment or to promote local entrepreneurship, and 21% develop SB to promote the financial inclusion of the poor. In developing SB, companies implement different configurations, ranging from forms of outsourcing to internal development models. The study identifies seven main configurations through which companies develop social business, and each configuration presents distinguishing characteristics with respect to the involvement of the company in the management, the resources provided, and the benefits achieved. By performing different analyses on the data collected, the paper provides detailed insights into how European companies develop SB at the BoP.

Keywords: base of the economic pyramid, corporate social responsibility, social business, social enterprise

Procedia PDF Downloads 226
516 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction

Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz

Abstract:

Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, floor framing, stair framing, and the making of landings. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of their most common occupational exposures. Wood dust is a complex substance composed of cellulose, polyoses, and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of the particles released as wood dust differ according to the operations performed on the wood. For example, shattering of wood during sanding operations produces finer particles than does chipping in sawing and milling. To our knowledge, how the shattering, cutting, and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the submicron ultrafine and nanoparticles generated and to workers' exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor and conventional particle counters. The study was conducted at a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings. 
Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0, and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:

Keywords: wood dust, industrial hygiene, aerosol, occupational exposure
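The size-resolved ranges reported above can be tallied programmatically. The sketch below transcribes the complete bins from the abstract (the truncated 365.2 nm bin is omitted), sums the lower and upper bounds into total number-concentration bounds, and locates the dominant size bin:

```python
# Number-concentration ranges (particles/cm³) per mobility diameter (nm),
# transcribed from the measurements reported in the abstract; the truncated
# 365.2 nm bin is omitted.
ranges = {
    11.5: (221, 816),   15.4: (696, 1735),  20.5: (879, 1957),
    27.4: (1164, 2903), 36.5: (1138, 2640), 48.7: (938, 1650),
    64.9: (759, 1284),  86.6: (705, 1019),  115.5: (494, 1031),
    154.0: (417, 806),  205.4: (240, 471),  273.8: (45, 92),
}

total_low = sum(lo for lo, hi in ranges.values())   # summed lower bounds
total_high = sum(hi for lo, hi in ranges.values())  # summed upper bounds
peak_bin = max(ranges, key=lambda d: ranges[d][1])  # bin with the highest max
```

Summing the bins this way shows the exposure is concentrated in the 20-50 nm range, with the 27.4 nm bin reaching the highest observed concentration.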

Procedia PDF Downloads 189
515 Geodynamic Evolution of the Tunisian Dorsal Backland (Central Mediterranean) from the Cenozoic to Present

Authors: Aymen Arfaoui, Abdelkader Soumaya, Noureddine Ben Ayed

Abstract:

The study region is located in the Tunisian Dorsal backland (Central Mediterranean), the easternmost part of the southwest-northeast trending Saharan Atlas mountain range. Based on our fieldwork, seismic tomography images, seismicity, and previous studies, we propose an interpretation of the relationship between surface deformation and fault kinematics in the study area and the internal dynamic processes acting in the Central Mediterranean from the Cenozoic to the present. The subduction and the dynamics of internal forces beneath the complicated Maghrebides mobile belt have an impact on the Tertiary and Quaternary tectonic regimes in the Pelagian and Atlassic foreland that forms part of our study region. The left-lateral reactivation of the major "Tunisian N-S Axis fault" and the development of a compressional relay between the Hammamet-Korbous and Messella-Ressas faults are possibly a result of tectonic stresses due to slab roll-back following the Africa/Eurasia convergence. After the slab segmentation and its eastward migration (5–4 Ma) and the formation of the Strait of Sicily "rift zone" further east, a transtensional tectonic regime became established in this area. According to seismic tomography images, the STEP fault of the "North-South Axis" at Hammamet-Korbous coincides with the western edge of the "slab windows" of the Sicily Channel and the eastern boundary of the positive anomalies attributed to the residual slab of Tunisia. On the other hand, significant E-W Plio-Quaternary tectonic activity can be observed along the eastern portion of this STEP fault system in the Grombalia zone, as a result of recent vertical lithospheric motion in response to the lateral slab migration eastward toward the Sicily Channel.
According to SKS fast splitting directions, the upper mantle flow pattern beneath the Tunisian Dorsal is parallel to the NE-SW to E-W orientation of the Shmin identified in the study area, similar to the Plio-Quaternary extensional orientation in the Central Mediterranean. Additionally, the removal of the lithosphere and the subsequent uplift of the sub-lithospheric mantle beneath the topographic highs of the Dorsal and its surroundings may be the cause of the dominant extensional to transtensional Quaternary regime. The occurrence of strike-slip and extensional seismic events in the Pelagian block indicates that the regional transtensional tectonic regime persists today. Finally, we believe that the geodynamic history of the study area since the Cenozoic is primarily influenced by the preexisting weak zones, the African slab detachment, and the upper mantle flow pattern in the Central Mediterranean.

Keywords: Tunisia, lithospheric discontinuity (STEP fault), geodynamic evolution, Tunisian dorsal backland, strike-slip fault, seismic tomography, seismicity, central Mediterranean

Procedia PDF Downloads 78
514 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To extend the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) maintains an optimal atmosphere composition around the product and thus delays its decay. This technology relies on the modification of the internal packaging atmosphere through the equilibrium between production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life extension has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for better design and dimensioning of the food packaging system. But up to now, existing models did not permit estimation of food quality or of the shelf life gain achieved by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food loss reduction. The objective of this work is to propose an innovative approach to predict the shelf life of a MAP food product and then to link it to a reduction of food losses and wastes. For this purpose, a ‘Virtual MAP modeling tool’ was developed by coupling a new predictive deterioration model (based on visual surface prediction of deterioration encompassing colour, texture and spoilage development) with literature models for respiration and permeation. A major input of this modelling tool is the maximal acceptable percentage of deterioration (MAD), which was assessed through dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 mL CO₂/h/kg at 20°C), making them a good representative of challenging post-harvest storage.
A value of 13% was determined as the consumers’ limit of acceptability, permitting definition of the product’s shelf life. The ‘Virtual MAP modeling tool’ was validated under isothermal conditions (5, 10 and 20°C) and under dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, respectively, confirming the goodness of the model fit. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as investigated numerically. Such a shelf life gain permitted anticipation of a significant reduction of food losses at the distribution and consumer stages. This reduction in food losses as a function of shelf life gain was quantified using a dedicated mathematical equation developed for this purpose.
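To illustrate the kind of gas-balance modelling the abstract refers to, the sketch below couples Michaelis–Menten respiration with Fickian film permeation for in-pack O₂. It is a minimal sketch under hypothetical parameter values (film permeability, respiration constants, pack geometry are all assumptions), not the study's calibrated model.

```python
# Minimal MAP gas-balance sketch: in-pack O2 partial pressure evolves
# under produce respiration (Michaelis-Menten kinetics) and film
# permeation (Fick's law). All parameter values are illustrative.

def simulate_map_o2(hours=300.0, dt_s=36.0):
    m = 0.25            # produce mass, kg (assumed)
    v = 1.0e-3          # pack headspace volume, m^3 (assumed)
    area = 0.05         # film area, m^2 (assumed)
    thickness = 50e-6   # film thickness, m (assumed)
    perm = 1.0e-16      # O2 permeability, mol*m/(m^2*s*Pa) (assumed)
    r_max = 1.0e-7      # max respiration rate, mol O2/(kg*s) (assumed)
    k_m = 2000.0        # Michaelis constant, Pa (assumed)
    p_out = 21000.0     # ambient O2 partial pressure, Pa
    p = 21000.0         # initial in-pack O2, Pa (air)
    R, T = 8.314, 293.0
    t, t_end = 0.0, hours * 3600.0
    while t < t_end:
        respiration = r_max * m * p / (k_m + p)             # mol O2/s consumed
        permeation = perm * area / thickness * (p_out - p)  # mol O2/s entering
        p += (permeation - respiration) * R * T / v * dt_s  # ideal-gas update
        t += dt_s
    return p  # in-pack O2 partial pressure (Pa) after `hours`
```

With these assumed values, the in-pack O₂ decays monotonically towards the level at which permeation balances respiration; a shelf life model then maps the resulting atmosphere onto the deterioration curve.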

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 182
513 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System

Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge

Abstract:

The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on water quality. The technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters with the iSolo alpha/beta counting system is an adequate nuclear technique for assessing radioactivity levels in natural and waste water samples, due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each 2 L water sample was evaporated (at low heat) to a small volume, transferred evenly (for homogenization) into a 50 mm stainless steel counting planchet, and heated by an IR lamp until a constant-weight residue was obtained. The samples were then counted for gross alpha and beta activity. Sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste sludges and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, implementing water treatment plants and measuring water quality parameters in industrial waste water discharge is very important before release into the environment. This waste may contain different types of pollutants, including radioactive substances.
All measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for discharge of industrial waste into inland surface waters, namely 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and gross beta, respectively (National Environmental Act, No. 47 of 1980, per the extraordinary gazette of the Democratic Socialist Republic of Sri Lanka of February 2008). The measured water samples were thus below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity proposed by the World Health Organization in 2011; the water is therefore acceptable for human consumption without further clarification of its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) of 0.1 mSv y⁻¹ would usually not be exceeded. The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water, and the recommended level of 0.1 mSv/y represents a very low level of health risk. This monitoring work will be continued for environmental protection purposes.
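Since the gazette limits are given in µCi/mL while the measured activities and WHO screening levels are in Bq/L, a unit conversion makes them directly comparable. The conversion below uses only the exact definition 1 µCi = 3.7 × 10⁴ Bq.

```python
# Convert discharge limits from uCi/mL to Bq/L for comparison with
# activities reported in Bq/L or mBq/L.
UCI_TO_BQ = 3.7e4   # 1 uCi = 3.7e4 Bq (exact definition)
ML_PER_L = 1.0e3

def uci_per_ml_to_bq_per_l(x):
    """Convert an activity concentration from uCi/mL to Bq/L."""
    return x * UCI_TO_BQ * ML_PER_L

alpha_limit = uci_per_ml_to_bq_per_l(1e-9)  # gross alpha limit in Bq/L
beta_limit = uci_per_ml_to_bq_per_l(1e-8)   # gross beta limit in Bq/L
```

The gazette's 10⁻⁹ µCi/mL gross alpha limit thus corresponds to 0.037 Bq/L (37 mBq/L), and the 10⁻⁸ µCi/mL gross beta limit to 0.37 Bq/L.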

Keywords: drinking water, gross alpha, gross beta, waste water

Procedia PDF Downloads 198
512 Technological Challenges for First Responders in Civil Protection; the RESPOND-A Solution

Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos

Abstract:

Summer 2021 was marked by a number of prolific fires inside the EU (Greece, Cyprus, France) as well as outside it (USA, Turkey, Israel). This series of dramatic events stretched national civil protection systems, and first responders in particular. Despite the introduction of national, regional and international frameworks (e.g. rescEU), a number of challenges have arisen, not only related to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architecture for combining modern telecommunications technology with novel practices that help First Responders save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so that the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder partners expressed the need for an information management system that could boost existing emergency response tools, while others envisioned a complete end-to-end network management system offering high Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these responders, as well as their smoldering worry that, with the evolving threat of climate change, disasters and the consequences of industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned to illustrate the capabilities of the RESPOND-A system.
The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, based on a big-fire scenario in a Wildland Urban Interface (WUI) zone. An important issue will be examined during the second pilot study: unobstructed communication, that is, the flow of information between the wider public and the first responders in both directions, is severely affected during a crisis. Call centers are flooded with requests and communication is compromised, or breaks down altogether on many occasions, which in turn hampers the effort to build a common operational picture for all first responders. At the same time, the information reaching the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, if traffic is disrupted, aerial means remain the only way to observe and to perform rapid area surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed.

Keywords: first responders, safety, civil protection, new technologies

Procedia PDF Downloads 142
511 Implementation of Project-Based Learning with Peer Assessment in Large Classes under Consideration of Faculty’s Scarce Resources

Authors: Margit Kastner

Abstract:

To overcome the negative consequences associated with large class sizes and to support students in developing the necessary competences (e.g., critical thinking, problem-solving, or teamwork skills), a marketing course has been redesigned by implementing project-based learning with peer assessment (PBL&PA). Students can voluntarily take advantage of this supplementary offer and explore, in addition to attending the lecture where clicker questions are asked, a real-world problem, find a solution, and assess the results of peers while working in small collaborative groups. To handle this with little further effort, the process is technically supported by the university’s e-learning system: students upload their solution as an assignment, which is then automatically distributed to peer groups, each of which has to assess the work of three other groups. Finally, students’ work is graded automatically, considering both their contribution to the project and the conformity of their peer assessments. The purpose of this study is to evaluate students’ perception of PBL&PA using an online questionnaire. More specifically, it aims to discover students’ motivations for (not) working on a project and the benefits and problems students encounter. In addition to the survey, students’ performance was analyzed by comparing the final grades of those who participated in PBL&PA with those who did not. Among the 260 students who filled out the questionnaire, 47% participated in PBL&PA. Besides extrinsic motivation (bonus credits), students’ participation was often motivated by learning and social benefits. Reasons for not working on a project were connected to students’ organization and management of their studies (e.g., time constraints, missing or wrong information) and to teamwork concerns (e.g., lacking engagement of peers, prior negative experiences).
In addition, high workload and insufficient extrinsic motivation (bonus credits) were mentioned. With regard to the benefits and problems students encountered during the project, students provided more positive than negative comments. The positive aspects most often stated were learning and social benefits, while negative ones mainly concerned the technical implementation. Interestingly, bonus credits were hardly ever named as a positive aspect, meaning that intrinsic motivation became more important while working on the project. Team aspects generated mixed feelings. Students who voluntarily participated in PBL&PA were, in general, more active and made use of further course offers such as clicker questions. Examining students’ performance in the final exam revealed that students who participated in none of the offered active learning tasks performed poorest, while students who used all activities performed best. In conclusion, the goals of the implementation were met in terms of students’ perceived benefits and the positive impact on exam performance. Since the comparison of automatic grading with faculty grading showed valid results, it is possible to rely on automatic grading alone in the future. That way, the additional workload for faculty will remain within limits. Thus, the implementation of project-based learning with peer assessment can be recommended for large classes.
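One way the automatic grading described above could combine "contribution to the project" with "conformity of the peer assessment" is sketched below. The weights, the mean-absolute-deviation conformity measure, and all function names are illustrative assumptions, not the course's actual grading rule.

```python
# Hypothetical sketch of automated PBL&PA grading: a group's grade
# combines the peer-assessed quality of its own project with how
# closely its ratings of other groups match the class consensus.
from statistics import mean

def final_grade(received_scores, given_scores, consensus_scores,
                w_project=0.8, w_conformity=0.2, max_score=100.0):
    project = mean(received_scores)  # peer-assessed project quality
    # Conformity: 1 minus the mean absolute deviation of this group's
    # ratings from the consensus rating of each assessed project,
    # rescaled to the grading range.
    dev = mean(abs(g - c) for g, c in zip(given_scores, consensus_scores))
    conformity = max(0.0, 1.0 - dev / max_score) * max_score
    return w_project * project + w_conformity * conformity
```

For example, a group rated [80, 90, 85] by its peers, whose own ratings [70, 75, 95] sit close to the consensus [72, 80, 90], would receive a grade dominated by its project score, mildly adjusted for rating conformity.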

Keywords: automated grading, large classes, peer assessment, project-based learning

Procedia PDF Downloads 165
510 The Impact of Professional Development on Teachers’ Instructional Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers’ learning and in their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD, and why we often do not see greater changes in learning and practice. This paper is based on a mixed-methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry is considered through a group-randomized experimental design. In this study, we examine a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 in two states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered, beginning with a one-week institute (Summer 2016) followed by four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data were collected from the case study teachers, including additional videotaped classroom observations and interviews.
Repeated-measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found between the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, both the LTG PD intervention and instruction that engaged students in mathematical practices positively and significantly predicted greater student knowledge gains; teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes in which they felt they were using something from the PD they had learned and experienced. Project staff analyzed the videos, compared them to previous videos, and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge, and resources used.

Keywords: teacher learning, professional development, pedagogical content knowledge, geometry

Procedia PDF Downloads 169
509 Numerical Study of Homogeneous Nanodroplet Growth

Authors: S. B. Q. Tran

Abstract:

Drop condensation is the phenomenon in which tiny drops form when oversaturated vapour present in the environment condenses on a substrate, driving droplet growth. Recently, this subject has received much attention due to its applications in many fields such as thin film growth, heat transfer, recovery of atmospheric water and polymer templating. In the literature, many papers have investigated, theoretically and experimentally, macroscale droplet growth with radii on the millimeter scale. However, few papers on nanodroplet condensation, especially theoretical work, are found in the literature. To understand droplet growth at the nanoscale, we perform numerical simulations of nanodroplet growth. We investigate and discuss the role of droplet shape and monomer diffusion in drop growth and their effect on the growth law. The effect of droplet shape is studied through parametric studies of contact angle and disjoining pressure magnitude. The effect of pinning and de-pinning behaviour is also studied. We investigate the axisymmetric homogeneous growth of a single 10–100 nm water nanodroplet on a substrate surface. The main mechanism of droplet growth is the accumulation of laterally diffusing water monomers, formed by the absorption of water vapour from the environment onto the substrate. Under the assumption of quasi-steady thermodynamic equilibrium, the nanodroplet evolves according to the augmented Young–Laplace equation. Using continuum theory, we model the dynamics of nanodroplet growth, including the coupled effects of disjoining pressure, contact angle and monomer diffusion, with the assumption of a constant flux of water monomers at the far field. The simulation results are validated by comparison with published experimental results. For the case of nanodroplet growth at constant contact angle, our numerical results show that the initial droplet growth is a transient governed by monomer diffusion.
When the flux at the far field is small, the droplet grows at first by diffusion of the initially available water monomers on the substrate and afterwards by the flux at the far field. In the steady late stage, the growth of both droplet radius and droplet height follows a power law with exponent 1/3, which is unaffected by the substrate disjoining pressure and the contact angle. However, the droplet is found to grow faster in the radial direction than in height when disjoining pressure and contact angle increase. The simulations also quantify the effect of the computational domain size during the transient growth period: when the domain is larger, more mass enters the free substrate domain, and hence more mass enters the droplet, so the droplet grows and reaches the steady state faster. For the case of pinning and de-pinning droplet growth, the simulations show that the disjoining pressure does not affect the 1/3 growth law of the droplet radius in the steady state. However, the disjoining pressure modifies the growth rate of the droplet height, which then follows a power law with exponent 1/4. We demonstrate how spatial depletion of monomers can lead to a growth arrest of the nanodroplet, as observed experimentally.
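The governing equation and growth laws above can be written compactly. The notation below is a generic small-slope form, illustrative rather than necessarily the paper's own:

```latex
% Augmented Young--Laplace equation (small-slope, axisymmetric form):
% \Delta p is the pressure jump across the interface, \gamma the surface
% tension, and \Pi(h) the disjoining pressure of the film of local
% thickness h(r,t).
\Delta p \;=\; -\,\gamma\,\frac{1}{r}\,\partial_r\!\big( r\,\partial_r h \big)\;-\;\Pi(h)

% Late-time growth laws reported above (C_R, C_H are constants):
R(t) \sim C_R\, t^{1/3}, \qquad
H(t) \sim
\begin{cases}
C_H\, t^{1/3}, & \text{constant contact angle,}\\
C_H\, t^{1/4}, & \text{pinned contact line (disjoining-pressure modified).}
\end{cases}
```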

Keywords: augmented young-laplace equation, contact angle, disjoining pressure, nanodroplet growth

Procedia PDF Downloads 272
508 Ethno-Philosophy: A Caring Approach to Research and Therapy in Humanities

Authors: Tammy Shel (Aboody)

Abstract:

The integration of philosophy with ethnography, i.e., ethno-philosophy, or with any qualitative method, is multi-dimensional. It is thus vital to the discourse on caring in the philosophy of education, and in therapy. These two significant dimensions are the focus of this proposal’s discussion. The integration of grounded data with philosophy can shed light on cultural, gender, socio-economic and political diversities in the relationships and interactions between and among individuals and societies. This approach can explain miscommunication and, eventually, violent conflicts. The ethno-philosophy study in this proposal focuses on the term caring, through case studies of five non-white male and female elementary school teachers in Los Angeles County. The study examined the teachers’ views on caring and, consequently, the implications for their pedagogy. Subsequently, this method turned out to also be a caring approach in therapy. Ethnographic data were juxtaposed with western philosophy. The research discussion unraveled transformable gaps between western patriarchal and feminist philosophy on caring and that of the teachers. Multiple interpretations and practices of caring were found, due to cultural, gender, and socio-economic-political differences. Likewise, two dominant categories emerged. The first is inclusive caring, which is perceived as an ideal, as the compass of humanity that aims towards emancipation from the shackles of inner and external violence. The second is tribal caring, which illuminates the inherently dialectical, substantial diversity in the interpretations and praxes of caring. Such angles are absent from, or minor in, traditional western literature. Both categories teach of the incessantly dynamic definition of caring, and of its subliminal and repressed mechanisms.
The multi-cultural aspects can teach us, however, that despite the inclusive common ground we share on caring, and despite personal and social awareness of cultural and gender differences, the hegemonic ruling class governs the standardized, conventional interpretation of caring. The second dimension is therapy in ethno-philosophy. Each patient is like a case study per se and is a self-ethnographer. Thus, the patient is the self-observer and data collector, and the therapist is the philosopher who helps deconstruct into fragments the consciousness that comprises our well-being, self-esteem and self-acceptance. Together, they identify and confront hurdles that hinder the pursuit of a more composed attitude towards ourselves and others. Together, they study and re-organize these fragments into a more comprehensible and composed self-acceptance. Therefore, the ethno-philosophy method, which stems from a caring approach, confronts the internal and external conflicts that govern our relationships with others. It sheds light on the dark and subliminal spots in our minds and hearts that drive us. Unveiling the hidden spots helps identify a shared ground that can supersede miscommunication and conflicts among and between people. The juxtaposition of ethnography with philosophy, as a caring approach in education and therapy, emphasizes that planet earth is like a web. Hence, despite the common mechanism that stimulates a caring approach towards the other, ethno-philosophy can help undermine the ruling patriarchal, oppressive forces that define and standardize caring relationships, and subsequently bridge gaps between people.

Keywords: caring, philosophy of education, ethnography, therapy, research

Procedia PDF Downloads 124
507 Exploring Fluoroquinolone-Resistance Dynamics Using a Distinct in Vitro Fermentation Chicken Caeca Model

Authors: Bello Gonzalez T. D. J., Setten Van M., Essen Van A., Brouwer M., Veldman K. T.

Abstract:

Resistance to fluoroquinolones (FQ) has increased over the years, posing a significant challenge for the treatment of human infections, particularly gastrointestinal tract infections caused by zoonotic bacteria transmitted through the food chain and the environment. In broiler chickens, a relatively high proportion of FQ resistance has been observed in indicator Escherichia coli, Salmonella and Campylobacter isolates. We hypothesize that flumequine (Flu), used as a secondary choice for the treatment of poultry infections, could be associated with this high proportion of FQ resistance. To evaluate this hypothesis, we used an in vitro fermentation chicken caeca model. Two continuous single-stage fermenters were used to simulate in real time the physiological conditions of the chicken caecal content (temperature, pH, caecal content mixing, and anoxic environment). A pool of chicken caecal content containing FQ-resistant E. coli, obtained from chickens at slaughter age, was used as inoculum, along with a spiked FQ-susceptible Campylobacter jejuni strain isolated from broilers. Flu was added to one of the fermenters (Flu-fermenter) every 24 hours for two days to evaluate the selection and maintenance of FQ resistance over time, while the other served as a control (C-fermenter). The experiment lasted 5 days. Samples were collected at three time points: before, during and after Flu administration. Serial dilutions were plated on Butzler culture media with and without Flu (8 mg/L) and enrofloxacin (4 mg/L), and on MacConkey culture media with and without Flu (4 mg/L) and enrofloxacin (1 mg/L), to determine the proportion of resistant strains over time. Positive cultures were identified by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. A subset of the obtained isolates was used for whole genome sequencing analysis. Over time, E. coli exhibited positive growth in both fermenters, while C.
jejuni growth was detected up to day 3. The proportion of Flu-resistant E. coli strains recovered remained consistent over time under antibiotic selective pressure, while in the C-fermenter a decrease was observed at day 5; a similar pattern was observed for the enrofloxacin-resistant E. coli strains. This suggests that Flu might play a role in the selection and persistence of enrofloxacin resistance, compared to the C-fermenter, where enrofloxacin-resistant E. coli strains appeared at a later time. Furthermore, positive growth was detected from both fermenters only on Butzler plates without antibiotics. A subset of C. jejuni strains from the Flu-fermenter proved susceptible to ciprofloxacin (MIC < 0.12 μg/mL). A selection of E. coli strains from both fermenters revealed the presence of plasmid-mediated quinolone resistance (PMQR; qnr-B19) in only one strain from the C-fermenter, belonging to sequence type (ST) 48, and in all strains from the Flu-fermenter, which belonged to ST189. Our results showed a selective impact of Flu on PMQR-positive E. coli strains, while no effect was observed in C. jejuni. Maintenance of Flu resistance correlated with antibiotic selective pressure. Further studies of antibiotic resistance gene transfer among commensal and zoonotic bacteria in the chicken caecal content may help to elucidate the mechanisms of resistance spread.

Keywords: fluoroquinolone-resistance, escherichia coli, campylobacter jejuni, in vitro model

Procedia PDF Downloads 62
506 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a given parametric form; on the other hand, robust estimates of the covariate-adjusted treatment effects depend on the parametric assumption. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm: the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, the covariates of previous patients, and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Simulation studies show that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, being a discrete procedure, the ERADE tends to converge more slowly towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation towards the best treatment arm and is thus the most ethical of the designs. Other comparative merits of the proposed designs are highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs are suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable the adaptations. Moreover, the proposed designs allow more patients to receive the better treatment during the trial, making them more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
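The two biased coin procedures named in the abstract can be sketched as allocation rules for a two-arm trial. The Python sketch below follows the standard forms of the Hu–Zhang DBCD and the ERADE; the function names and the default tuning constants (gamma, alpha) are illustrative assumptions, and in practice the target proportion rho would be the sequentially re-estimated function of the Cox regression coefficients described above.

```python
def dbcd_prob(x, rho, gamma=2.0):
    """Doubly-adaptive biased coin design (DBCD) allocation rule.

    x     : observed proportion of patients so far on arm A (0 < x < 1)
    rho   : current estimated target allocation proportion for arm A
    gamma : tuning constant; larger values push harder towards rho
    Returns the probability of assigning the next patient to arm A.
    """
    num = rho * (rho / x) ** gamma
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return num / den


def erade_prob(x, rho, alpha=0.5):
    """Efficient randomized adaptive design (ERADE) allocation rule.

    The rule is discrete: it switches between three fixed probabilities
    depending on whether arm A is currently over- or under-allocated
    relative to the target rho; alpha in [0, 1) controls the bias.
    """
    if x > rho:              # arm A over-allocated: damp its probability
        return alpha * rho
    if x < rho:              # arm A under-allocated: boost its probability
        return 1.0 - alpha * (1.0 - rho)
    return rho               # exactly on target: randomize at the target
```

Both rules keep the allocation proportion near the target rho; the DBCD varies the probability continuously with the imbalance, while the ERADE's three-valued rule is what makes it "discrete" and, as the abstract notes, slower to converge.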

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 165
505 A Qualitative Study of Newspaper Discourse and Online Discussions of Climate Change in China

Authors: Juan Du

Abstract:

Climate change is one of the most crucial issues of this era, and the subject of contentious scholarly debate, yet studies of climate change discourse in China are sparse. Including China in the study of climate change is essential for a sociological understanding of the issue. China, as a developing country and an essential player in tackling climate change, offers an ideal case for scholars to move beyond developed countries and enrich their understanding of climate change with a more diverse social setting. This project contrasts macro- and micro-level understandings of climate change in China, which helps scholars move beyond a focus on climate skepticism and denialism and enriches the sociology of climate change. The macro-level understanding is obtained by analyzing over 4,000 newspaper articles from various official outlets in China. State-controlled newspapers play an essential role in transmitting high-quality information and promoting broader public understanding of climate change and its anthropogenic nature. Newspaper articles can thus be seen as tools employed by the government to mobilize public support for a strategic shift from economic growth to an ecological civilization. However, the media are just one of the significant factors shaping an individual's climate change concern: extreme weather events, access to accurate scientific information, elite cues, and movement/countermovement advocacy also influence perceptions of climate change. Hence, newspaper articles and the public frame the issue in different ways. Online forums are an informative channel for understanding public opinion. The micro-level data come from Zhihu, China's equivalent of Quora, where users can propose, answer, and comment on questions.
This project analyzes the questions related to climate change that have over 20 answers. By open-coding both the macro- and micro-level data, the project depicts the differences between the ideology presented in government-controlled newspapers and how people talk and act with respect to climate change online, which may reveal an existing disconnect between public behavior and the willingness to change daily activities to facilitate a greener society. The contemporary Yellow Vest protests in France illustrate that a large gap between governmental climate change mitigation policies and the public's understanding can lead to social movement activity and social instability. Effective environmental policy is impossible without public support. Identifying existing gaps in understanding may help policy-makers frame climate change effectively and win more supporters for climate-related policies. Overall, this qualitative project addresses the following research questions: 1) How do different state-controlled newspapers transmit their ideology on climate change to the public, and in what ways? 2) How do individuals frame climate change online? 3) What are the differences between the newspapers' framing and individuals' framing?

Keywords: climate change, China, framing theory, media, public’s climate change concern

Procedia PDF Downloads 131