Search results for: imaging sensitivity measurement
555 The Interactive Effects among Supervisor Support, Academic Emotion, and Positive Mental Health: An Evidence Based on Longitudinal Cross-Lagged Panel Data Analysis on Postgraduates in China
Authors: Jianzhou Ni, Hua Fan
Abstract:
It has been determined that supervisor support has a major influence on postgraduate students' academic emotions and is considered a means of successfully predicting postgraduates' positive psychological well-being. Accordingly, by assessing the mediating influence of academic emotions among contemporary postgraduates in China, this study investigated the reciprocal relationship between supervisor support and positive mental well-being among postgraduates. To that end, a theoretical model linking supervisor support, academic emotion, and positive psychological health was developed, and its validity and reliability were demonstrated for the first time using the normalized postgraduate-supervisor relationship scale, academic emotion scale, and positive mental health scale, together with questionnaire data from Chinese postgraduate students. The study used an autoregressive cross-lagged (ARCL) panel model to longitudinally analyse 798 valid responses from two questionnaire surveys conducted in 2019 (T1) and 2021 (T2), investigating the bidirectional relationship between supervisor support and postgraduates' positive mental well-being. The study found that supervisor support can have a considerable beneficial impact on graduate students' academic emotions and can thereby indirectly help learners attain positive mental health. This verifies the theoretical premise that academic emotions partially mediate the effect of supervisor support on positive mental health development and argues for the coexistence of the two. The outcomes of this study can help researchers gain a better understanding of the dynamic interplay among the three research variables: supervisor support, academic emotions, and positive mental health, and fill gaps in previous research.
In this regard, the study indicated that supervisor support directly stimulates students' academic drive and assists graduate students in developing positive academic emotions, which contributes to the development of positive mental health. However, given the restricted measurement window of this study's cross-lagged panel data and the potential influence of moderating factors other than academic emotion on graduate students' positive mental health, the results of this study need to be further examined and validated.
Keywords: supervisor support, academic emotions, positive mental health, interaction effects, longitudinal cross-lagged measurements
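The cross-lagged logic described above can be illustrated with a minimal sketch (not the authors' actual analysis): each T2 variable is regressed on both T1 variables, and the off-diagonal coefficients are the cross-lagged paths. The data below are simulated and all effect sizes are hypothetical.

```python
import random

random.seed(42)

def ols(X, y):
    """Least-squares coefficients: solve (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Simulate two-wave data: support (X) at T1 partly drives well-being (Y) at T2.
n = 798
x1 = [random.gauss(0, 1) for _ in range(n)]
y1 = [0.3 * x1[i] + random.gauss(0, 1) for i in range(n)]
x2 = [0.6 * x1[i] + 0.1 * y1[i] + random.gauss(0, 0.5) for i in range(n)]
y2 = [0.4 * x1[i] + 0.5 * y1[i] + random.gauss(0, 0.5) for i in range(n)]

# Cross-lagged regressions: each T2 variable on both T1 variables.
X = [[1.0, x1[i], y1[i]] for i in range(n)]
b_y2 = ols(X, y2)  # [intercept, X1 -> Y2 (cross-lagged), Y1 -> Y2 (autoregressive)]
b_x2 = ols(X, x2)  # [intercept, X1 -> X2 (autoregressive), Y1 -> X2 (cross-lagged)]
print("X1 -> Y2 cross-lagged path: %.2f" % b_y2[1])
print("Y1 -> X2 cross-lagged path: %.2f" % b_x2[2])
```

A full ARCL analysis would also report standard errors and fit indices; the sketch only shows where the cross-lagged coefficients come from.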
Procedia PDF Downloads 85
554 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules
Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury
Abstract:
Tuning the photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts has been an attractive research area for the past few decades, as these changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes. The process is quite amenable to tuning by the presence of different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT via its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye is bound to SCXn hosts. In spite of its lower binding affinity, however, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, the prototautomerization process is hindered in the presence of βCD but remains unaffected in the presence of γCD. The reduction in the prototautomerization of CZ by the SCXn and βCD hosts is unusual, because the T* form is less dipolar than N*; hence binding of CZ within the relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, considering the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would also have been expected to be similar.
The atypical effects of the studied hosts on the prototautomerization of CZ are suggested to arise from partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding interactions between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. Formation of these intermolecular H-bonds effectively weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that rather than the binding affinity between dye and host, it is the orientation of CZ in the case of the SCXn-CZ complexes and the binding stoichiometry in the case of the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of CZ. In the case of the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization studies of the complexes between CZ and the CD hosts correlate nicely with the experimental results. Geometry optimization reveals formation of βCD-CZ complexes with 1:1 stoichiometry and of γCD-CZ complexes with 1:1, 1:2 and 2:2 stoichiometries, in good accordance with the observed effects of the βCD and γCD hosts on the ESIPT process of the CZ dye.
Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies
Procedia PDF Downloads 119
553 Investigation of Several New Ionic Liquids' Behaviour during ²¹⁰Pb/²¹⁰Bi Cherenkov Counting in Waters
Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić
Abstract:
The detection of ²¹⁰Pb levels in aquatic environments evokes interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distributions of ²¹⁰Pb and ²¹⁰Po in the marine environment are also significant for assessing the removal rates of particles from the ocean, particle fluxes during transport along the coast, and particulate organic carbon export in the upper ocean. The measurement techniques for ²¹⁰Pb determination, gamma spectrometry, alpha spectrometry, or liquid scintillation counting (LSC), are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. However, one other possibility is to measure ²¹⁰Pb on an LS counter, if it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. This method is unaffected by chemical quenching and allows easy sample preparation, but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increase in the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on an LS counter Quantulus 1220. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, the addition of ionic liquids to the counting vials with the analysed samples has the benefit of lowering the detection limit in ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate is more effective at increasing the Cherenkov counting efficiency than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on the Cherenkov counting efficiency.
It was confirmed that, among the explored ionic liquids, only those in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to >70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.
Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water
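The reported efficiency gain maps directly onto the detection limit, since a Currie-type minimum detectable activity (MDA) scales inversely with counting efficiency. The sketch below uses hypothetical background counts, counting time and sample volume; only the 16% and 70% efficiency figures come from the abstract.

```python
import math

def minimum_detectable_activity(bkg_counts, efficiency, t_s, volume_l):
    """Currie-type MDA in Bq/L: detection limit L_D = 2.71 + 4.65*sqrt(B) counts,
    converted to activity by efficiency, counting time and sample volume."""
    l_d = 2.71 + 4.65 * math.sqrt(bkg_counts)
    return l_d / (efficiency * t_s * volume_l)

# Hypothetical measurement: 500 background counts, 300 min count, 20 mL sample
B, t, V = 500.0, 18000.0, 0.02
mda_plain = minimum_detectable_activity(B, 0.16, t, V)  # no ionic liquid
mda_il = minimum_detectable_activity(B, 0.70, t, V)     # with salicylate IL
print(f"MDA at 16% efficiency: {mda_plain:.2f} Bq/L")
print(f"MDA at 70% efficiency: {mda_il:.2f} Bq/L")
print(f"improvement factor: {mda_plain / mda_il:.1f}x")
```

The ratio 0.70/0.16 ≈ 4.4 is exactly the "more than four times" reduction of the detection threshold quoted above; it is independent of the assumed background and geometry.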
Procedia PDF Downloads 100
552 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach
Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra
Abstract:
Contamination of water resources has been a major concern, which has drawn attention to the need to develop new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective, and uneconomical, in remediating contaminants like heavy metal ions (mercury, arsenic, lead, cadmium and chromium), organic matter (dyes, chlorinated solvents), and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, where we use nanoparticles as a tool to remediate a class of pollutants, would prove effective due to their high surface-area-to-volume ratio, higher selectivity, sensitivity and affinity. In recent years, scientific advances have been made in applying photocatalytic (ZnO, TiO2, etc.) and magnetic nanomaterials to remediate contaminants (like heavy metals and organic dyes) from water/wastewater. Our study focuses on the synthesis, and monitoring of the remediation efficiency, of ZnO, Fe3O4 and Fe3O4-coated ZnO nanoparticulate systems for the simultaneous removal of heavy metals and dyes. The multitude of ZnO nanostructures (spheres, rods and flowers) obtained via multiple routes (microwave and hydrothermal approaches) offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET and VSM measurements. Furthermore, the introduction of crystalline defects into ZnO nanostructures can also assist in light activation for improved dye degradation. The band gap of a material and its absorbance are concrete indicators of its photocatalytic activity.
Due to their high surface area, high porosity, affinity towards metal ions and availability of active surface sites, iron oxide nanoparticles show promising application in the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. Therefore, we believe that a ZnO-linked Fe3O4 nanosystem would be efficient and reusable. Improving photocatalytic efficiency in addition to adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best of the features that the two individual metal oxides provide for nanoremediation.
Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis
Procedia PDF Downloads 236
551 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind Systems
Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar
Abstract:
This paper presents a fenestration analysis studying the balance between utilizing daylight and eliminating disturbing parameters in a private office room with interior venetian blinds, taking into account different slat angles. The mean luminance of the scene and window, the luminance ratio of the workplane and window, the workplane illuminance, and the daylight glare probability (DGP) were calculated as functions of the venetian blind design properties. Recently developed software for analyzing High Dynamic Range Images (HDRI captured by a CCD camera), such as the Radiance-based evalglare and hdrscope, helps to investigate luminance-based metrics. An eight-day measurement experiment was conducted to investigate the impact of different venetian blind angles in an office environment under daylight conditions in Serdang, Malaysia. Detailed results for the selected case study showed that artificial lighting is necessary during the morning session for Malaysian buildings with southwest windows regardless of the venetian blind's slat angle. However, in some afternoon conditions, such as at 10° and 40° slat angles, the workplane illuminance exceeds the maximum illuminance of 2000 lx. Generally, a rising trend is observed in the mean window luminance level during the day. All conditions have less than 10% of the pixels exceeding 2000 cd/m² before 1:00 P.M., whereas 40% of the selected hours have more than 10% of the scene pixels above 2000 cd/m² after 1:00 P.M. Surprisingly, in the no-blind condition there is no extreme case of window/task ratio; the extreme cases occur at the 20°, 30°, 40° and 50° slat angles. As expected, the mean window luminance level is higher than 2000 cd/m² after 2:00 P.M. for most cases, except the 60° slat angle condition. Regarding daylight glare probability, no DGP value higher than 0.35 occurred in this experiment, due to the window's direction, the location of the building and the studied workplane.
Specifically, this paper reviews the different blind angles' responses to the metrics suggested by previous standards; finally, conclusions and knowledge gaps are summarized and next steps for research are suggested. Addressing these gaps is critical for the continued progress of the energy efficiency movement.
Keywords: daylighting, office environment, energy simulation, venetian blind
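Two of the metrics used above are easy to reproduce: the fraction of image pixels above a luminance threshold, and the simplified daylight glare probability (DGPs = 6.22e-5·Ev + 0.184, Wienold's simplified form, which depends only on vertical eye illuminance). The pixel luminances and illuminance value below are invented for illustration; only the 2000 cd/m² threshold and the 0.35 DGP limit come from the abstract.

```python
def pct_pixels_over(luminance_cd_m2, threshold=2000.0):
    """Percentage of pixels whose luminance (cd/m^2) exceeds the threshold."""
    over = sum(1 for v in luminance_cd_m2 if v > threshold)
    return 100.0 * over / len(luminance_cd_m2)

def dgp_simplified(e_v_lux):
    """Simplified daylight glare probability from vertical eye illuminance (lx)."""
    return 6.22e-5 * e_v_lux + 0.184

# Hypothetical HDR pixel luminances (cd/m^2) for one captured scene
scene = [150, 420, 980, 1500, 2100, 2600, 800, 300, 1200, 3100]
print(f"{pct_pixels_over(scene):.0f}% of pixels above 2000 cd/m2")
print(f"DGPs at Ev = 2500 lx: {dgp_simplified(2500):.3f}")
```

At a vertical eye illuminance of 2500 lx the simplified DGP stays below 0.35, consistent with the abstract's finding that no condition crossed that glare limit.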
Procedia PDF Downloads 226
550 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
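The PSDM-to-fragility step can be sketched compactly: with a lognormal demand model ln(D) = ln a + b·ln(IM) and dispersion β, the probability of exceeding a limit state is a normal CDF in log space. All coefficients and the drift limit below are invented for illustration and are not taken from the study.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fragility(im, ln_a, b, beta, limit):
    """P(demand > limit | IM) under a lognormal PSDM ln(D) = ln_a + b*ln(IM)
    with lognormal dispersion beta."""
    median_ln_demand = ln_a + b * math.log(im)
    return norm_cdf((median_ln_demand - math.log(limit)) / beta)

# Hypothetical PSDM for a column drift ratio vs PGA (illustrative numbers only)
ln_a, b, beta = -3.0, 1.2, 0.4
limit = 0.02  # 2% drift limit state
for pga in (0.1, 0.3, 0.6):
    print(f"PGA = {pga:g} g -> P(exceed) = {fragility(pga, ln_a, b, beta, limit):.3f}")
```

Whether the pair (ln a, b) comes from a linear fit or from a trained random forest, the same exceedance-probability construction yields the fragility curve.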
Procedia PDF Downloads 103
549 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process
Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava
Abstract:
Over the last decade, graphene has attracted great attention in the development of multifunctional metal matrix composites, which are highly demanded by industries to develop energy-efficient systems. This study covers two advanced aspects of current scientific endeavor, i.e., graphene as reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low energy ball milling at 0.1 wt. % and 0.2 wt. % graphene. The mixed powder was directly subjected to the powder bed fusion process, i.e., an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined in this study. The volumetric distribution of pores was observed under X-ray computed tomography (CT). On the basis of relative density measurement by X-ray CT, it was observed that porosity increases after graphene addition, and the pore morphology also transforms from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the case of the bare sample, unlike the composite samples. Smaller columnar grains formed in the composites due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimized tensile properties were achieved at 0.1 wt. % graphene. The increments in yield strength and ultimate tensile strength were 22% and 10%, respectively, for the 0.1 wt. % graphene-reinforced sample in comparison to the bare counterpart, while elongation decreased by 20% for the same sample. The hardness indentations were taken mostly on the solid region in order to avoid the collapse of the pores.
The hardness of the composite increased progressively with graphene content; an increment of around 30% in hardness was achieved after the addition of 0.2 wt. % graphene. Therefore, it can be concluded that powder bed fusion can be adopted as a suitable technique to develop graphene-reinforced AlSi10Mg composites. However, further process modification is required to avoid the porosity induced by the addition of graphene, which can be addressed in future work.
Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties
Procedia PDF Downloads 126
548 The Impact of Physical Exercise on Gestational Diabetes and Maternal Weight Management: A Meta-Analysis
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Physiological changes during pregnancy, such as alterations in the circulatory, respiratory, and musculoskeletal systems, can negatively impact daily physical activity. This reduced activity is often associated with an increased risk of adverse maternal health outcomes, particularly gestational diabetes mellitus (GDM) and excessive weight gain. This meta-analysis aims to evaluate the effectiveness of structured physical exercise interventions during pregnancy in reducing the risk of GDM and managing maternal weight gain. A comprehensive search was conducted across six major databases: PubMed, Cochrane Library, EMBASE, Web of Science, ScienceDirect, and ClinicalTrials.gov, covering the period from database inception until 2023. Randomized controlled trials (RCTs) that explored the effects of physical exercise programs on pregnant women with low physical activity levels were included. The search was performed using EndNote and results were managed using RevMan (Review Manager) for meta-analysis. RCTs involving healthy pregnant women with low levels of physical activity or sedentary lifestyles were selected. These RCTs must have incorporated structured exercise programs during pregnancy and reported on outcomes related to GDM and maternal weight gain. From an initial pool of 5,112 articles, 65 RCTs (involving 11,400 pregnant women) met the inclusion criteria. Data extraction was performed, followed by a quality assessment of the selected studies using the Cochrane Risk of Bias tool. The meta-analysis was conducted using RevMan software, where pooled relative risks (RR) and weighted mean differences (WMD) were calculated using a random-effects model to address heterogeneity across studies. Sensitivity analyses, subgroup analyses (based on factors such as exercise intensity, duration, and pregnancy stage), and publication bias assessments were also conducted. 
Structured physical exercise during pregnancy led to a significant reduction in the risk of developing GDM (RR = 0.68; P < 0.001), particularly when the exercise program was performed throughout the pregnancy (RR = 0.62; P = 0.035). In addition, maternal weight gain was significantly reduced (WMD = −1.18 kg; 95% CI −1.54 to −0.85; P < 0.001). There were no significant adverse effects reported for either the mother or the neonate, confirming that exercise interventions are safe for both. This meta-analysis highlights the positive impact of regular moderate physical activity during pregnancy in reducing the risk of GDM and managing maternal weight gain. These findings suggest that physical exercise should be encouraged as a routine part of prenatal care. However, more research is required to refine exercise recommendations and determine the most effective interventions based on individual risk factors and pregnancy stages.
Keywords: gestational diabetes, maternal weight management, meta-analysis, randomized controlled trials
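The pooling step behind figures such as RR = 0.68 is the random-effects model named above; a compact DerSimonian-Laird sketch on the log-RR scale is shown below. The per-trial log relative risks and variances are invented for illustration and do not reproduce the 65 trials analysed here.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect and standard error with the
    DerSimonian-Laird between-study variance (tau^2) estimator."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical per-trial relative risks for GDM (illustrative only)
log_rr = [math.log(r) for r in (0.55, 0.70, 0.64, 0.80, 0.72)]
var_log_rr = [0.02, 0.015, 0.03, 0.025, 0.01]
pooled, se = dersimonian_laird(log_rr, var_log_rr)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

Exponentiating the pooled log-RR and its confidence bounds gives the RR and 95% CI in the form reported in the abstract; RevMan performs the same computation internally.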
Procedia PDF Downloads 4
547 Two-Wavelength High-Energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography
Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko
Abstract:
The development of medical optoacoustic tomography using human blood as an endogenic contrast agent is constrained by the lack of reliable, easy-to-use and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. The currently used titanium-sapphire lasers, alexandrite lasers and optical parametric oscillators do not provide the required stable output characteristics, are structurally complex, and their cost is up to half the price of diagnostic optoacoustic systems. Here we develop lasers based on Cr:LiCaAlF6 crystals, which are free of the abovementioned disadvantages and provide intense tens-of-ns tunable laser radiation at the specific absorption bands of oxyhemoglobin (~840 nm) and deoxyhemoglobin (~757 nm) in blood. Cr:LiCAF (c=3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P=1500 hPa) [3]. The laser elements have a cylindrical shape, 8 mm in diameter and 90 mm in length. The direction of the optical axis of the crystal was normal to the cylinder generatrix, which provides the π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other with an error of less than 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with a dual-Xenon-flashlamp pumping scheme in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer and a Q-switch cell, makes it possible to operate sequentially in a cycle (one laser pulse following another by 50 ns) at wavelengths of 757 and 840 nm.
The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J at 180 μs) to equalize the laser radiation intensity at these wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration of the laser pulses and their energy allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design of the laser will most likely improve the output properties and yield better spatial resolution for medical multispectral optoacoustic tomography systems.
Keywords: medical optoacoustic, endogenic contrast agent, multiwavelength tunable pulse lasers, MOPA laser system
Procedia PDF Downloads 99
546 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves
Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis
Abstract:
Rheological characterization of complex liquids like polymer solutions is of great scientific interest to researchers in many fields such as biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs frequency), which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of the sample, the temperature is varied over a very large range, leading to equivalent frequency modifications by applying the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is the reason why the development of broadband spectroscopic methods around room temperature has become a major concern. In the literature, many solutions have been proposed but, to our knowledge, there is no experimental bench giving the whole rheological characterization from a few Hz to many MHz. Consequently, our goal is to investigate, in a nondestructive way and over a very broad frequency band (a few Hz to hundreds of MHz), rheological properties using longitudinal ultrasonic waves (L waves), a unique experimental bench and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear and bulk) and the complex elastic moduli (M*, G* and K*), respectively the longitudinal, shear and bulk moduli. We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we first present results for very low frequencies using ultrasonic tracking of a falling ball in the test tube. This leads to the estimation of shear viscosity from a few mPa.s to a few Pa.s. Corrections due to the small dimensions of the tube are applied and discussed with regard to the size of the falling ball.
Then, the use of bulk L wave propagation in the liquid and the development of specific signal processing to assess longitudinal velocity and attenuation lead to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, the first results concerning the propagation, generation and processing of guided compressional waves in the test tube walls are discussed. All these approaches and results are compared to standard methods available and already validated in our lab.
Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities
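The falling-ball measurement rests on Stokes' law, with a wall correction needed because the tube is narrow relative to the ball. A minimal sketch follows; the Ladenburg correction is one common choice for a ball falling along a tube axis, not necessarily the correction used by the authors, and the ball/liquid properties are illustrative.

```python
def stokes_viscosity(ball_radius_m, rho_ball, rho_liquid, v_terminal,
                     tube_radius_m=None):
    """Shear viscosity (Pa.s) from the terminal velocity of a falling ball:
    eta = 2 r^2 (rho_s - rho_l) g / (9 v).
    If tube_radius_m is given, apply the Ladenburg wall correction,
    which accounts for the measured velocity being slowed by the tube wall."""
    g = 9.81
    eta = 2.0 * ball_radius_m ** 2 * (rho_ball - rho_liquid) * g / (9.0 * v_terminal)
    if tube_radius_m is not None:
        eta /= (1.0 + 2.4 * ball_radius_m / tube_radius_m)
    return eta

# Hypothetical run: 1 mm steel ball in a glycerol-like liquid,
# terminal velocity tracked ultrasonically as 14.3 mm/s
eta_unbounded = stokes_viscosity(1e-3, 7800.0, 1260.0, 0.0143)
eta_corrected = stokes_viscosity(1e-3, 7800.0, 1260.0, 0.0143, tube_radius_m=8e-3)
print(f"uncorrected shear viscosity: {eta_unbounded:.2f} Pa.s")
print(f"wall-corrected (16 mm tube): {eta_corrected:.2f} Pa.s")
```

The size of the correction grows with the ball-to-tube radius ratio, which is why the abstract notes that the tube-dimension corrections must be discussed against the ball size.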
Procedia PDF Downloads 264
545 The AU Culture Platform Approach to Measure the Impact of Cultural Participation on Individuals
Authors: Sendy Ghirardi, Pau Rausell Köster
Abstract:
The European Commission increasingly pushes cultural policies towards social outcomes, and local and regional authorities also call for culture-driven strategies for local development and prosperity; the measurement of cultural participation therefore becomes increasingly significant for evidence-based policy-making processes. Cultural participation involves various kinds of social and economic spillovers that combine social and economic objectives of value creation, including social sustainability and respect for human values. Traditionally, from the economic perspective, cultural consumption is measured by the value of financial transactions in purchasing, subscribing to, or renting cultural equipment and content, addressing the market value of cultural products and services. The main sources of data are household spending surveys and merchandise trade surveys, among others. However, what characterizes cultural consumption is that it is linked with the hedonistic and affective dimension rather than the utilitarian one. In fact, nowadays, more and more attention is being paid to the social and psychological dimensions of culture. The aim of this work is to present a comprehensive approach to measuring the impacts of cultural participation and cultural users' behaviour, combining socio-psychological and economic approaches. The model combines contingent valuation techniques with the analysis of individual characteristics and perceptions of cultural experiences to evaluate the cognitive, aesthetic, emotive and social impacts of cultural participation. To investigate this comprehensive approach to measuring the impact of cultural events on individuals, the research was designed on the basis of prior theoretical development. An in-depth literature review was carried out to develop the theoretical model applied to the web platform that measures the impacts of cultural experiences on individuals.
The developed framework aims to become a democratic tool for evaluating the services of cultural or policy institutions through the use of an interactive platform that produces big data benefiting academia, cultural management and policy. AU Culture is a prototype based on an application that can be used on mobile phones or any other digital platform. The development of the AU Culture Platform has been funded by the Valencian Innovation Agency (Government of the Region of Valencia), and it is part of the Horizon 2020 project MESOC.
Keywords: comprehensive approach, cultural participation, economic dimension, socio-psychological dimension
Procedia PDF Downloads 113
544 Spectrophotometric Detection of Histidine Using Enzyme Reaction and Examination of Reaction Conditions
Authors: Akimitsu Kugimiya, Kouhei Iwato, Toru Saito, Jiro Kohda, Yasuhisa Nakano, Yu Takano
Abstract:
The measurement of amino acid content is reported to be useful for the diagnosis of several types of diseases, including lung cancer, gastric cancer, colorectal cancer, breast cancer, prostate cancer, and diabetes. The conventional detection methods for amino acids are high-performance liquid chromatography (HPLC) and liquid chromatography-mass spectrometry (LC-MS), but both have drawbacks: the equipment is cumbersome, and the techniques are costly in time and money. In contrast, biosensors and biosensing methods provide more rapid and facile detection strategies that use simple equipment. The authors have reported a novel approach for the detection of individual amino acids that uses aminoacyl-tRNA synthetase (aaRS) as a molecular recognition element, because each aaRS is expected to bind selectively to its corresponding amino acid. The consecutive enzymatic reactions used in this study are as follows: the aaRS binds its cognate amino acid and releases inorganic pyrophosphate; hydrogen peroxide (H₂O₂) is then produced by the enzyme reactions of inorganic pyrophosphatase and pyruvate oxidase. Trinder's reagent was added to the reaction mixture, and the absorbance change at 556 nm was measured using a microplate reader. In this study, an amino acid-sensing method using histidyl-tRNA synthetase (HisRS; histidine-specific aaRS) as the molecular recognition element, combined with the Trinder's reagent spectrophotometric method, was developed. The quantitative performance and selectivity of the method were evaluated, and the optimal enzyme reaction and detection conditions were determined. The authors developed a simple and rapid method for detecting histidine through the combination of an enzymatic reaction and spectrophotometric detection. HisRS was used to detect histidine, and the reaction and detection conditions were optimized for quantitation over the range of 1–100 µM histidine.
The detection limits are sufficient to analyze these amino acids in biological fluids. This work was partly supported by a Hiroshima City University Grant for Special Academic Research (General Studies).
Keywords: amino acid, aminoacyl-tRNA synthetase, biosensing, enzyme reaction
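The 1–100 µM linear quantitation range described above implies a simple calibration workflow: fit a straight line to the absorbance of known standards at 556 nm, then invert it for unknown samples. A minimal Python sketch; all absorbance values and the fit are invented for illustration and are not data from the study:

```python
import numpy as np

# Hypothetical calibration standards: concentrations in µM vs. absorbance at 556 nm.
conc_uM = np.array([1, 5, 10, 25, 50, 100], dtype=float)
absorbance = np.array([0.012, 0.055, 0.108, 0.265, 0.520, 1.040])

# Linear (Beer-Lambert-like) calibration: A = slope * C + intercept
slope, intercept = np.polyfit(conc_uM, absorbance, 1)

def histidine_concentration(a556: float) -> float:
    """Invert the calibration line to estimate concentration from absorbance."""
    return (a556 - intercept) / slope

print(histidine_concentration(0.520))  # close to 50 µM for these synthetic data
```
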
Procedia PDF Downloads 283
543 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots
Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha
Abstract:
Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or automation of measurements. Chemical sensors have displaced conventional analytical methods: sensors combine precision, sensitivity, fast response and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor mainly consists of two elements: a sensitive element, where receptor-analyte recognition takes place, and a transducer element, which receives the signal and converts it into a measurable one. Based on these two elements, biosensors can be divided into two categories: by the recognition element (e.g., immunosensors) and by the transducer (e.g., optical sensors). An optical sensor works by measuring quantitative changes in the parameters characterizing light radiation; the most often analyzed parameters include amplitude (intensity), frequency and polarization. In a direct method, changes in the optical properties of a compound reacting with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used that change their optical properties upon transformation of the tested species. The most commonly used dyes in this method are small molecules with an aromatic ring, such as rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the 'nano' size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter, occurring mainly in the brain and central nervous system of mammals. Dopamine is responsible for the transmission of information through the nervous system and plays an important role in processes of learning and memory. Its detection is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface; as a result, fluorescence is quenched by Förster Resonance Energy Transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.
Keywords: biosensor, dopamine, fluorescence, quantum dots
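One common way to turn fluorescence quenching like that described above into a concentration readout is a Stern-Volmer analysis, F0/F = 1 + Ksv[Q]. The abstract does not state which model the authors used, so this is only a generic sketch with an assumed quenching constant:

```python
# Stern-Volmer analysis of fluorescence quenching.
# K_SV is a purely illustrative quenching constant, not a value from the study.
K_SV = 0.05  # 1/µM (assumed)

def dopamine_conc(f0: float, f: float, k_sv: float = K_SV) -> float:
    """Estimate quencher (dopamine) concentration from F0/F = 1 + Ksv*[Q]."""
    return (f0 / f - 1.0) / k_sv

print(dopamine_conc(1000.0, 800.0))  # 5.0 µM under these assumptions
```
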
Procedia PDF Downloads 362
542 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions
Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani
Abstract:
Conventional methods of determining concrete strength under controlled laboratory conditions do not accurately represent the actual strength of concrete cured on site. This difference in strength is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate, with summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Test methods are therefore required to measure the in-place properties of concrete, both for quality assurance and for the development of durable concrete structures. The maturity method, which expresses the strength of a given concrete mix as a function of its age and temperature history, is one approach to quality control for the production of sustainable and durable concrete structures. The unique harsh environmental conditions in Kuwait make it impractical to adopt experiences and empirical equations developed from maturity methods in other countries. Concrete curing, especially at early ages, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of a 60 MPa high-strength concrete mix produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves for compressive and flexural strength were developed for a commonly used concrete mix in Kuwait, cured under three conditions: water curing, an external spray coating, and an internal curing compound added during mixing. It was observed that the maturity curve developed for the same mix depends on the curing conditions.
It can be used to predict the concrete strength under different exposure and curing conditions. This study showed that the external spray curing method cannot be recommended, as it failed to bring the concrete to accepted strength values, especially in flexure. Using the internal curing compound led to accepted levels of strength compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions. This will help in deciding the appropriate time to remove the formwork; the resulting reduction in construction time and cost has positive impacts on sustainable construction.
Keywords: curing, durability, maturity, strength
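The maturity concept referred to above rests on an index computed from the temperature history. The abstract does not say which maturity function the authors used; a common choice is the Nurse-Saul index (as in ASTM C1074), M = Σ (T − T0)·Δt. A minimal sketch with an illustrative temperature history:

```python
def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
    """Nurse-Saul maturity index M = sum((T - T0) * dt), in degree-Celsius hours.
    Intervals where T falls below the datum temperature contribute nothing."""
    return sum(max(t - datum_c, 0.0) * dt for t, dt in zip(temps_c, dt_hours))

# Illustrative history: 24 h at 30 °C, then 48 h at 45 °C, datum temperature 0 °C.
m = nurse_saul_maturity([30.0, 45.0], [24.0, 48.0])
print(m)  # 30*24 + 45*48 = 2880 °C·h
```

Once such an index is computed, the laboratory maturity curve maps M to an estimated in-place strength, which is how the formwork-removal decision described above would be supported.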
Procedia PDF Downloads 300
541 DFT Theoretical Investigation for Evaluating Global Scalar Properties and Validating with Quantum Chemical Based COSMO-RS Theory for Dissolution of Bituminous and Anthracite Coal in Ionic Liquid
Authors: Debanjan Dey, Tamal Banerjee, Kaustubha Mohanty
Abstract:
Global scalar properties were calculated from the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies to study the interaction of ionic liquids with bituminous and anthracite coal using the density functional theory (DFT) method. B3LYP/6-31G* calculations predict the HOMO-LUMO energy gap, electronegativity, global hardness, global softness and chemical potential for the individual compounds and their clusters. HOMO-LUMO interaction, electron delocalization, and electron donating and accepting are the main sources of attraction between the individual compounds and their complexes. The cations used in this study were 1-butyl-1-methylpyrrolidinium [BMPYR], 1-methyl-3-propylimidazolium [MPIM], tributylmethylammonium [TMA] and tributylmethylphosphonium [MTBP], combined with the anions bis(trifluoromethylsulfonyl)imide [Tf2N], methyl carbonate [CH3CO3], dicyanamide [N(CN)2] and methylsulfate [MESO4]. A three-tier approach comprising HOMO/LUMO energies, scalar quantities and infinite dilution activity coefficients (IDAC) from sigma profiles generated with the COSMO-RS (Conductor-like Screening Model for Real Solvents) model was chosen to study the interactions. On the basis of the HOMO-LUMO band gap, [BMPYR][CH3CO3] (1-butyl-1-methylpyrrolidinium methyl carbonate) and [MPIM][CH3CO3] (1-methyl-3-propylimidazolium methyl carbonate) are the most effective ILs for anthracite and bituminous coal, respectively; the corresponding band gaps are 0.10137 hartree for anthracite coal and 0.12485 hartree for bituminous coal. The ionic liquids were then screened quantitatively with all the scalar parameters, which gave the same result, based on the CH-π interaction found for the HOMO-LUMO gap. To check these findings, IDAC were predicted using the quantum-chemical COSMO-RS methodology, which showed the same trend as the scalar quantity calculations.
Thereafter, a qualitative assessment was made by sigma profile analysis, which shows complementary behavior between the ILs and coal, meaning they are highly miscible with each other.
Keywords: coal-ionic liquids cluster, COSMO-RS, DFT method, HOMO-LUMO interaction
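The global scalar descriptors named above follow from the frontier orbital energies via standard conceptual-DFT (Koopmans-type) relations: gap = E_LUMO − E_HOMO, chemical potential μ ≈ (E_HOMO + E_LUMO)/2, electronegativity χ = −μ, hardness η = gap/2, softness S = 1/(2η). A sketch; the orbital energies are invented so that the gap matches the 0.10137 hartree reported for the anthracite complex, but are not values from the study:

```python
def global_descriptors(e_homo: float, e_lumo: float) -> dict:
    """Koopmans-type conceptual-DFT descriptors from frontier orbital energies (hartree)."""
    gap = e_lumo - e_homo
    chem_potential = (e_homo + e_lumo) / 2.0   # mu ~ -(I + A)/2
    hardness = gap / 2.0                        # eta ~ (I - A)/2
    return {
        "gap": gap,
        "chemical_potential": chem_potential,
        "electronegativity": -chem_potential,
        "global_hardness": hardness,
        "global_softness": 1.0 / (2.0 * hardness),
    }

# Hypothetical orbital energies whose gap reproduces the reported 0.10137 hartree:
d = global_descriptors(-0.20000, -0.09863)
print(round(d["gap"], 5))
```
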
Procedia PDF Downloads 303
540 Shoulder Range of Motion Measurements using Computer Vision Compared to Hand-Held Goniometric Measurements
Authors: Lakshmi Sujeesh, Aaron Ramzeen, Ricky Ziming Guo, Abhishek Agrawal
Abstract:
Introduction: Range of motion (ROM) is often measured by physiotherapists using a hand-held goniometer as part of mobility assessment for diagnosis. Because of the nature of the hand-held goniometer measurement procedure, readings tend to vary depending on the physical therapist taking the measurements (Riddle et al.). This study aims to validate computer vision software readings against goniometric measurements, so that quick and consistent ROM measurements can be taken by clinicians. The use of this computer vision software aims to improve the future of the musculoskeletal space with more efficient diagnosis from recordings of patients' ROM, with minimal human error across different physical therapists. Methods: Using hand-held long-arm goniometer measurements as the 'gold standard', healthy study participants (n = 20) performed four exercises: front elevation, abduction, internal rotation, and external rotation, using both arms. Active ROM was assessed with the computer vision software at different angles set by the goniometer for each exercise. The intraclass correlation coefficient (ICC) using a two-way random-effects model, box-whisker plots, and root-mean-square error (RMSE) were used to quantify the correlation and absolute error between the set and recorded angles across repeated trials by the same rater. Results: ICC (2,1) values for all four exercises were above 0.9, indicating excellent reliability. The lowest overall RMSE was for external rotation (5.67°) and the highest for front elevation (8.00°). Box-whisker plots showed a potential zero error in the measurements made by the computer vision software for abduction, where the absolute errors for measurements taken at 0 degrees were shifted away from the ideal zero line, with the lowest recorded error being 8°.
Conclusion: Our results indicate that the computer vision software is valid and reliable for use in clinical settings by physiotherapists measuring shoulder ROM. Overall, computer vision helps improve access to quality care for individual patients, who can assess the ROM for their condition at home throughout a full cycle of musculoskeletal care (American Academy of Orthopaedic Surgeons) without the need for a trained therapist.
Keywords: physiotherapy, frozen shoulder, joint range of motion, computer vision
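The RMSE figures quoted above (5.67° to 8.00°) come from comparing goniometer-set angles with software-recorded angles. A minimal sketch of that error computation; the angle readings here are invented for illustration, not the study's raw data:

```python
import math

def rmse(set_angles, measured_angles):
    """Root-mean-square error between goniometer-set and software-measured angles."""
    n = len(set_angles)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(set_angles, measured_angles)) / n)

# Hypothetical readings for one exercise, in degrees:
set_deg = [0, 30, 60, 90, 120, 150]
measured = [8, 35, 64, 95, 126, 157]
print(round(rmse(set_deg, measured), 2))
```

Note how the illustrative 8° reading at the 0° setting mimics the zero-error shift the abstract describes for abduction.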
Procedia PDF Downloads 105
539 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans
Authors: O. Ekrami, P. Claes, S. Van Dongen
Abstract:
Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or 'genetic quality', based on the assumption that perfect symmetry is the ideal expected outcome for a bilateral organism. Further studies have also investigated a possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, with the structure of interest represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we used high-resolution 3D scans of human faces and developed an algorithm to measure and localize FA using a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an iterative closest point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of the mapping step. The protocol's accuracy in measuring and localizing FA was assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional information captured on asymmetry can improve studies of FA as an indicator of fitness or attractiveness. The algorithm can be of particular benefit in studies with large numbers of subjects because of its automated and time-efficient nature. Additionally, the spatially dense approach provides information about the locality of FA, which is impossible to obtain using conventional methods.
It also enables us to analyze the asymmetry of a morphological structure in a multivariate manner; this can be achieved using methods such as principal component analysis (PCA) or factor analysis, which can be a step towards understanding the underlying processes of asymmetry. The method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate and multivariate framework.
Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing
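The core of a spatially dense asymmetry measurement like the one described above is a per-vertex comparison between each point and the mirror image of its bilateral counterpart. The following is a greatly simplified stand-in for the paper's pipeline (no ICP registration, superimposition on the midsagittal plane assumed), with a toy "face" of two vertex pairs:

```python
import numpy as np

def vertex_asymmetry(shape: np.ndarray, pairs) -> np.ndarray:
    """Per-vertex-pair asymmetry: distance between a vertex and the mirror
    image (x -> -x) of its paired counterpart across the plane x = 0."""
    reflected = shape.copy()
    reflected[:, 0] *= -1.0  # mirror across the assumed midsagittal plane
    return np.array([np.linalg.norm(shape[l] - reflected[r]) for l, r in pairs])

verts = np.array([[ 1.0, 0.0, 0.0],   # left vertex of pair 0
                  [-1.0, 0.0, 0.0],   # right vertex of pair 0 (perfectly symmetric)
                  [ 1.0, 1.0, 0.0],   # left vertex of pair 1
                  [-1.0, 1.5, 0.0]])  # right vertex of pair 1 (asymmetric in y)
asym = vertex_asymmetry(verts, [(0, 1), (2, 3)])
print(asym)  # pair 0 -> 0.0, pair 1 -> 0.5
```

A dense mask would simply have thousands of such pairs, and the resulting per-vertex map is what allows FA to be localized rather than summarized in a single number.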
Procedia PDF Downloads 139
538 Evaluation of the Effectiveness of Crisis Management Support Bases in Tehran
Authors: Sima Hajiazizi
Abstract:
Tehran, the capital of Iran, is known as one of the world's capitals most vulnerable to natural disasters such as earthquakes and floods. The city sits on three faults (Ray, Mosha, and North); according to a 2000 JICA report, the greatest casualties and destruction would result from activity on the Ray fault. In 2003, a crisis prevention and management organization for Tehran became active under the responsible ministry to conduct prevention and rehabilitation in the city. Given the breadth of the city and the lack of appropriate access across it, a decentralized model of crisis management support was adopted: each region hosts a base to serve as a crisis management headquarters in times of crisis and to implement prevention and citizen-education programs; some bases in areas bordering neighboring provinces are positioned to provide help at the time of an accident; and a number of bases store the food and equipment needed in a disaster. In this study, the bases of regions one, six, nine and eleven of Tehran are evaluated with respect to management and training. The selected areas had experienced local accidents and disaster management practice, and their local training had faced challenges. The research used a qualitative approach based on grounded theory. Information was first obtained through document study, semi-structured interviews with administrators and training officials, and participant observation in classrooms; it was then coded line by line in two stages, comparing and questioning concepts to extract categories against indicators obtained from the literature, from which the central themes were identified. The main themes were selected by frequency and importance, a paradigm diagram was drawn, and finally, by intersecting the phenomena and their causes with the indicators extracted from the literature, each phenomenon was examined and the effectiveness of the bases was measured.
Two phenomena emerged in management: (1) an inability to manage vast and complex crisis events and to resolve minor incidents, due to mismatches between managers; and (2) weaknesses in the implementation of preventive measures and crisis preparedness, arising from causal conditions, context and intervening factors. Five phenomena emerged in the field of education: (1) in region six, participation and interest are high; (2) in region eleven, training partnerships for crisis management were initially low and were later increased by maneuvers in schools and local initiatives such as advertising and the use of aid groups; (3) in region nine, contributions to crisis management education were low at the beginning, and initiatives such as school maneuvers and community engagement increased sensitivity and participation; (4) managers disagreed about providing the same training in all areas. Finally, for the main underlying issues, recommendations are provided with the help of concepts extracted from the literature.
Keywords: crisis management, crisis management support bases, vulnerability, crisis management headquarters, prevention
Procedia PDF Downloads 173
537 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C
Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner
Abstract:
Dynamic mechanical analysis (DMA) is a powerful method for characterizing viscoelastic properties and phase transitions in a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions such as the glass transition temperature Tg, via determination of storage and loss moduli in tension (Young's modulus, E), in shear or torsion (shear modulus, G), or in other testing modes. While production and application temperatures for polymers are often limited to several hundred degrees, the material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension and torsion measurements on different glasses, with the E and G moduli as functions of frequency and temperature, are presented. Additional tests include superimposing several frequencies in a single temperature sweep ('multiwave'). This type of test considerably reduces the experiment time and allows the structural changes of the material and their frequency dependence to be evaluated. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson's ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in frequency did not reveal significant changes in the complex Poisson's ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension and torsion over an extended temperature range.
It allows the precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, such as differential scanning calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g., relaxation times) to real applications.
Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity
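Combining torsion and tension data, as described above, yields the complex Poisson's ratio via the standard isotropic elasticity relation ν* = E*/(2G*) − 1. A sketch; the moduli values are assumed, order-of-magnitude figures for a silicate glass at room temperature, not measurements from the study:

```python
def complex_poisson_ratio(e_star: complex, g_star: complex) -> complex:
    """Complex Poisson's ratio from isotropic elasticity: nu* = E* / (2 G*) - 1."""
    return e_star / (2.0 * g_star) - 1.0

# Illustrative complex moduli in GPa (storage + j*loss), values assumed:
E = 72.0 + 0.1j   # complex Young's modulus from DMA in tension/bending
G = 29.5 + 0.05j  # complex shear modulus from DMA in torsion
nu = complex_poisson_ratio(E, G)
print(round(nu.real, 3))  # around 0.22, a typical order for silicate glass
```

Because both moduli come from the same frequency/temperature sweep, ν* can be tracked point by point, which is how the monotonic increase with temperature reported above would be observed.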
Procedia PDF Downloads 80
536 Aquatic Therapy Improving Balance Function of Individuals with Stroke: A Systematic Review with Meta-Analysis
Authors: Wei-Po Wu, Wen-Yu Liu, Wei-Ting Lin, Hen-Yu Lien
Abstract:
Introduction: Improving balance function for individuals after stroke is a crucial target in physiotherapy. Aquatic therapy, which challenges an individual's postural control in an unstable fluid environment, may be beneficial in enhancing balance functions. The purpose of this systematic review with meta-analyses was to validate the effects of aquatic therapy on improving balance functions for individuals with stroke, in contrast to conventional physiotherapy. Method: Available studies were sought in three electronic databases: PubMed, Scopus, and Web of Science. The publication date of studies was not limited during the literature search. Included studies had to be randomized controlled trials (RCTs) and to contain at least one outcome measurement of balance function. The PEDro scale was adopted to assess the quality of the included studies, while the 'Oxford Centre for Evidence-Based Medicine 2011 Levels of Evidence' was used to evaluate the level of evidence. After data extraction, studies with the same outcome measures were pooled for meta-analysis. Result: Ten studies with 282 participants were included in the analyses. The research quality of the studies ranged from fair to good (4 to 8 points). The levels of evidence of the included studies were graded as levels 2 and 3. Finally, scores on the Berg Balance Scale (BBS), eyes-closed force-plate center-of-pressure velocity (anterior-posterior and medial-lateral axes) and the Timed Up and Go test were pooled and analyzed separately.
The pooled results showed improvement in balance function (BBS mean difference (MD): 1.39 points; 95% confidence interval (CI): 0.05-2.29; p=0.002) (eyes-closed center-of-pressure velocity, anterior-posterior axis, MD: 1.39 mm/s; 95% CI: 0.93-1.86; p<0.001) (eyes-closed center-of-pressure velocity, medial-lateral axis, MD: 1.48 mm/s; 95% CI: 0.15-2.82; p=0.03) and mobility (MD: 0.9 seconds; 95% CI: 0.07-1.73; p=0.03) in individuals with stroke after aquatic therapy compared to conventional therapy. Although there were significant differences between the two treatment groups, the differences in improvement were relatively small. Conclusion: Aquatic therapy improved general balance function and mobility in individuals with stroke better than conventional physiotherapy.
Keywords: aquatic therapy, balance function, meta-analysis, stroke, systematic review
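Pooled mean differences with confidence intervals, like those reported above, are typically obtained by inverse-variance weighting of the per-study estimates. A minimal fixed-effect sketch; the per-study mean differences and standard errors below are invented for illustration, not the review's extracted data:

```python
import math

def pooled_md(mds, ses):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    Returns the pooled MD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical BBS mean differences (points) and standard errors from three trials:
md, ci = pooled_md([1.2, 1.8, 1.1], [0.6, 0.9, 0.7])
print(round(md, 2), tuple(round(x, 2) for x in ci))
```

A random-effects model would additionally incorporate between-study heterogeneity into the weights; the fixed-effect version above shows only the basic pooling arithmetic.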
Procedia PDF Downloads 200
535 Serological IgG Testing to Diagnose Alimentary Induced Diseases and Monitoring Efficacy of an Individual Defined Diet in Dogs
Authors: Anne-Margré C. Vink
Abstract:
Background: Food-related allergies and intolerances occur frequently in dogs. Diagnosis and gold-standard monitoring of elimination efficiency are time-consuming, expensive, and require an expert clinical setting. To facilitate rapid, robust, quantitative testing of intolerance and identification of the individual offending foods, a serological test is indicated. Method: Having previously developed the Medisynx IgG Human Screening Test ELISA, and because the dog's immune system is very similar to that of humans, we were able to develop the Medisynx IgG Dog Screening Test ELISA as well. In this study, 47 dogs suffering from canine atopic dermatitis (CAD) and several secondary induced reactions were included in the serological Medisynx IgG Dog Screening Test ELISA (within < 0.02% SD). Results were expressed as titers relative to the standard OD readings to diagnose alimentary induced diseases and to monitor the efficacy of an individual elimination diet in dogs. Split-sample analysis was performed by independently sending 2 x 3 ml of serum under two unique codes. Results: The veterinarian monitored these dogs, checking results at 3, 7, 21, 49 and 70 days and after periods of 6 and 12 months on an individual negative diet, with a positive challenge (retrospectively) at 6 months. Data for each dog were recorded in a screening form; complete recovery of all clinical manifestations was observed at or before 70 days (between 50 and 70 days) in the majority of dogs (44 out of 47 dogs = 93.6%). Conclusion: Challenge results showed 100% specificity and a 100% positive predictive value; sensitivity was 95.7% and the negative predictive value was 95.7%.
In conclusion, an individual diet based on IgG ELISA in dogs provides a significant improvement in atopic dermatitis and pruritus, including all other non-specifically defined allergic skin reactions such as erythema, itching, and biting and gnawing at the toes, as well as in several secondary manifestations such as chronic diarrhoea, chronic constipation, otitis media, obesity, laziness or inactive behaviour, pain and muscular stiffness causing movement disorders, excessive lacrimation, hyperactive or nervous behaviour, inability to stay home alone, anxiety, biting and aggressive behaviour, and disobedience. Furthermore, we conclude that a relatively more severe systemic candidiasis, as shown by relatively higher titers (class 3 and 4 IgG reactions to Candida albicans), influences the duration of recovery from clinical manifestations in affected dogs. These findings are consistent with our preliminary human clinical studies.
Keywords: allergy, canine atopic dermatitis, CAD, food allergens, IgG-ELISA, food-incompatibility
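The diagnostic accuracy figures quoted above (100% specificity and PPV, 95.7% sensitivity and NPV) follow from a standard 2x2 contingency-table calculation. The underlying counts below are assumptions chosen only because they reproduce those percentages; the abstract does not report the raw table:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic accuracy metrics from challenge-test counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts consistent with the reported percentages:
# 22 true positives, 0 false positives, 1 false negative, 22 true negatives.
m = diagnostic_metrics(tp=22, fp=0, fn=1, tn=22)
print(round(m["sensitivity"] * 100, 1))  # 95.7
```
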
Procedia PDF Downloads 320
534 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (cf. the Clean Air Act of London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise and air pollution: NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion at high temperatures (e.g., car engines or thermal power stations); the same holds for industrial plants. What has to be investigated, and is the topic of this paper, is whether there really is a correlation between noise pollution and air pollution (considering NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box for outdoor use, with an external battery to allow continuous data collection. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data possible. For air pollution measurements, the AirMonitor device will be used: an Arduino board into which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period covering both week and weekend days, making it possible to see how the situation changes during the week.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw sensor values. To do so, the data will be converted to a scale from 0 to 100% and shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions for the analyzed area, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only health aspects but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
Procedia PDF Downloads 211
533 Effect of Different Phosphorus Levels on Vegetative Growth of Maize Variety
Authors: Tegene Nigussie
Abstract:
Introduction: Maize is the most domesticated of all the field crops. Wild maize has not been found to date, and there has been much speculation about its origin. Regardless of the validity of the different theories, it is generally agreed that the center of origin of maize is Central America, primarily Mexico and the Caribbean. Maize is a recent introduction to Africa, although some data suggest that it was present in Nigeria even before Columbus's voyages. After being taken to Europe in 1493, maize was introduced to Africa and spread through the continent by different routes. Maize is an important cereal crop in Ethiopia: it is the primary staple food, and rural households show a strong preference for it. For human food, the important constituents of the grain are carbohydrates (starch and sugars), protein, fat or oil (in the embryo) and minerals. About 75 percent of the kernel is starch (a range of 60-80 percent), but protein content is low (8-15%). In Ethiopia, the introduction of modern farming techniques appears to be a priority; however, the adoption of modern inputs by peasant farmers has been very slow. For example, the adoption of fertilizer, an input that is relatively widely adopted, remains slow. Differences in socio-economic factors, including input prices and marketing, lie behind the low rate of technological adoption. Objective: The aim of the study is to determine the optimum application rate of phosphorus fertilizer for the vegetative growth of maize and to identify the effect of different phosphorus rates on the growth and development of maize. Methods: The above-ground vegetative parameters were measured on five plants randomly sampled from the middle rows of each plot.
Results: The interaction of nitrogen and maize variety showed a significant (p<0.01) effect on plant height with the combined application of 60 kg/ha of nitrogen and the BH140 maize variety, and on root length with the same combination. The highest mean number of leaves per plant (12.33) and mean number of nodes per plant (7.1) can be used as an alternative for better vegetative growth of maize. Conclusion and Recommendation: Maize is one of the most popular and widely cultivated crops in Ethiopia. This study was conducted to investigate the best dosage of phosphorus for the vegetative growth, yield, and quality of the maize variety and to recommend a phosphorus rate and the variety best adapted to the specific soil conditions of the area.
Keywords: leaf, carbohydrate protein, adoption, sugar
Procedia PDF Downloads 8
532 Understanding the Diversity of Antimicrobial Resistance among Wild Animals, Livestock and Associated Environment in a Rural Ecosystem in Sri Lanka
Authors: B. M. Y. I. Basnayake, G. G. T. Nisansala, P. I. J. B. Wijewickrama, U. S. Weerathunga, K. W. M. Y. D. Gunasekara, N. K. Jayasekera, A. W. Kalupahana, R. S. Kalupahana, A. Silva- Fletcher, K. S. A. Kottawatta
Abstract:
Antimicrobial resistance (AMR) has attracted significant attention worldwide as an emerging threat to public health. Understanding the role of livestock and wildlife, together with the shared environment, in the maintenance and transmission of AMR is of utmost importance for combating the issue in a one-health approach, given their interactions with humans. This study aims to investigate the extent of AMR distribution among wild animals, livestock, and the environment cohabiting in a rural ecosystem in Sri Lanka: Hambegamuwa. A one-square-km area at Hambegamuwa was mapped using GPS as the sampling area. The study was conducted over a period of five months from November 2020. Voided fecal samples were collected from 130 wild animals and 123 livestock (buffalo, cattle, chicken, and turkey), along with 36 soil and 30 water samples associated with livestock and wildlife. From the samples, Escherichia coli (E. coli) was isolated, and the AMR profiles were investigated for 12 antimicrobials using the disk diffusion method following the CLSI standard. Seventy percent (91/130) of wild animal, 93% (115/123) of livestock, 89% (32/36) of soil, and 63% (19/30) of water samples were positive for E. coli. A maximum of two E. coli isolates per sample, 467 in total, were tested for susceptibility, of which 157, 208, 62, and 40 were from wild animals, livestock, soil, and water, respectively. The highest resistance in E. coli from livestock (13.9%) and wild animals (13.3%) was to ampicillin, followed by streptomycin. Apart from that, E. coli from livestock and wild animals showed resistance mainly against tetracycline, cefotaxime, trimethoprim/sulfamethoxazole, and nalidixic acid, at levels below 10%. Ten cefotaxime-resistant E. coli were recovered from wild animals, including four elephants, two land monitors, a pigeon, a spotted dove, and a monkey, which was a significant finding. E. coli from soil samples showed resistance primarily against ampicillin, streptomycin, and tetracycline, at levels lower than in livestock and wildlife. Two water samples had cefotaxime-resistant E. coli as the only resistant isolates out of the 30 water samples tested. Of the total E. coli isolates, 6.4% (30/467) were multi-drug resistant (MDR), comprising 18, 9, and 3 isolates from livestock, wild animals, and soil, respectively. Among the 18 livestock MDRs, the highest number (13/18) was from poultry. The nine wild-animal MDRs were from spotted dove, pigeon, land monitor, and elephant. Based on CLSI standard criteria, 60 E. coli isolates (40 from livestock, 16 from wild animals, and 4 from the environment) were screened as possible Extended Spectrum β-Lactamase (ESBL) producers. Despite being a rural ecosystem, AMR and MDR are prevalent, even if at low levels. E. coli from livestock, wild animals, and the environment showed a similar spectrum of AMR, with ampicillin, streptomycin, tetracycline, and cefotaxime being the predominant antimicrobials of resistance. Wild animals may have acquired AMR via direct contact with livestock or via the environment, as antimicrobials are rarely used in wild animals. A source attribution study including the effects of the natural environment is proposed, as the presence of AMR even in this less contaminated rural ecosystem is alarming.
Keywords: AMR, Escherichia coli, livestock, wildlife
Procedia PDF Downloads 215
531 Prevalence of Positive Serology for Celiac Disease in Children With Autism Spectrum Disorder
Authors: A. Venkatakrishnan, M. Juneja, S. Kapoor
Abstract:
Background: Gastrointestinal dysfunction is an emerging comorbidity seen in autism and may further strengthen the association between autism and celiac disease. This is supported by increased rates (22-70%) of gastrointestinal symptoms such as diarrhea, constipation, abdominal discomfort/pain, and gastrointestinal inflammation in children with autism spectrum disorder (ASD). The etiology of autism is still elusive. In addition to genetic factors, environmental factors such as toxin exposure and intrauterine exposure to certain teratogenic drugs have been proposed as possible contributing factors in the etiology of ASD. In line with reports of increased gut permeability and high rates of gastrointestinal symptoms in children with ASD, celiac disease has also been proposed as a possible etiological factor. Despite insufficient evidence regarding the benefit of restricted diets in autism, a gluten-free diet (GFD) has been promoted as an alternative treatment for ASD. This study attempts to discern any correlation between ASD and celiac disease. Objective: This cross-sectional study aims to determine the proportion of celiac disease in children with ASD. Methods: The study included 155 participants aged 2-12 years, diagnosed with ASD as per DSM-5, attending the child development center at a tertiary care hospital in Northern India. Those on a gluten-free diet or having other autoimmune conditions were excluded. A detailed proforma was filled in, covering sociodemographic details, history of gastrointestinal symptoms, anthropometry, and systemic examination. Psychological testing was done using the Developmental Profile-3 (DP-3) for Developmental Quotient, the Childhood Autism Rating Scale-2 (CARS-2) for severity of ASD, the Vineland Adaptive Behavior Scales (VABS) for adaptive behavior, the Child Behavior Checklist (CBCL) for behavioral problems, and the Brief Autism Mealtime Behavior Inventory (BAMBI) for feeding problems.
Screening for celiac disease was done by TTG-IgA levels, and total serum IgA levels were measured to exclude IgA deficiency. Those with a positive screen were further planned for HLA typing and endoscopic biopsy. Results: Of the 155 cases included, 5 had low IgA levels and were hence excluded from the study. The remaining 150 children had TTG levels below the upper limit of normal and normal total serum IgA levels. A history of gastrointestinal symptoms was present in 51 (34%) cases; abdominal pain was the most frequent complaint (16.6%), followed by constipation (12.6%), while diarrhea was seen in 8%. Gastrointestinal symptoms were significantly more common in children with ASD above 5 years of age (p = 0.006) and in those who were verbal (p < 0.001). There was no significant association between socio-demographic factors, anthropometric data, or severity of autism and gastrointestinal symptoms. Conclusion: None of the 150 patients with ASD had raised TTG levels; hence, no association was found between ASD and celiac disease. There is no justification for routine screening for celiac disease in children with ASD. Further studies are warranted to evaluate the association of non-celiac gluten sensitivity with ASD and any role of a gluten-free diet in such patients.
Keywords: autism, celiac, gastrointestinal, gluten
Procedia PDF Downloads 119
530 Promoting Self-Esteem and Social Integration in Secondary German Schools: An Evaluation Study
Authors: Susanne Manes, Anni Glaeser, Katharina Wick, Bernhard Strauss, Uwe Berger
Abstract:
Introduction: Over the last decades, growing rates of mental health concerns among children and adolescents have been observed. At the same time, the physical well-being of children and adolescents is increasingly impaired as well. Schools play an important role in preventing mental and physical disorders and in promoting well-being. Self-esteem, as well as social integration, are vital influencing factors for mental and physical well-being. The purpose of this study was to develop and evaluate the program 'VorteilJena' for secondary schools in Germany, focusing on self-esteem and social integration to improve mental and physical well-being. Method: The school-based health promotion program was designed for students in 5th grade and higher. It consists of several short pedagogical exercises instructed by a teacher and integrated into regular classes over the course of ten weeks. Some exercises focused on fostering social integration, using either tasks that improve team spirit or exercises that increase tolerance and the sense of belonging; other exercises focused on strengthening the self-esteem of the students. Additionally, the program included a poster exhibition titled 'Belonging', which was put up in the school buildings. The exhibition comprised ten posters addressing relevant risk factors and resources related to social integration and self-esteem. The study was a randomized controlled sequential study with pre and post measurements conducted in ten German schools. A total of 1642 students (44% male) were recruited. Their age ranged from 9 to 21 years (M = 12.93 years; SD = 2.11). The program was conducted in classes ranging from 5th to 12th grade. Results: The program improved the well-being, self-esteem, and social integration of the participating students compared to the control group. Differential effects depending on implementation rates or the age of the students will be analyzed.
Moreover, implications for future school-based health promotion programs targeting self-esteem and social integration will be discussed. Conclusion: Social integration considerably influences the self-esteem and well-being of students and can be targeted by school-based programs consisting of short and modest exercises. Since sufficient implementation of health promotion programs is essential, the present program, owing to its practicability, represents a good opportunity to establish health promotion focusing on social integration in schools.
Keywords: social integration, well-being, health promotion in schools, self-esteem
Procedia PDF Downloads 196
529 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion
Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong
Abstract:
The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that appear as linear features. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the delineation of the lobes becomes more distinct. We propose a recursive diffusion process that favors coherent features based on an anisotropic analysis of the structure tensor. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structures, degrading the discriminative power of the features. Thus, the local structure of the features in scale and direction must be taken into account when computing the structure tensor. We therefore apply an anisotropic diffusion, aware of the scale and direction of the features, in the computation of the structure tensor; the eigenanalysis of this tensor then provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space in which the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety of the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of false positive and true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MSIT (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor
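The pipeline in this abstract rests on the eigenanalysis of the structure tensor. As a minimal sketch (not the authors' implementation), the following NumPy/SciPy snippet computes the structure tensor with a simple isotropic Gaussian regularization — exactly the step the paper replaces with its geometry-driven anisotropic diffusion — and derives a coherence map that is high along curvilinear structures:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_coherence(image, sigma=2.0):
    # Image gradients
    Ix = sobel(image, axis=1)
    Iy = sobel(image, axis=0)
    # Structure tensor components, regularized here by isotropic Gaussian
    # smoothing (the paper substitutes an anisotropic diffusion at this step)
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    # Closed-form eigenvalues of the 2x2 symmetric tensor
    trace = Jxx + Jyy
    disc = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1 = 0.5 * (trace + disc)   # larger eigenvalue
    lam2 = 0.5 * (trace - disc)   # smaller eigenvalue
    # Coherence: near 1 along line-like structures, 0 in flat regions
    coherence = np.where(trace > 1e-12,
                         (lam1 - lam2) / (lam1 + lam2 + 1e-12), 0.0)
    return lam1, lam2, coherence

# Synthetic check: a bright horizontal line should yield high coherence nearby
img = np.zeros((64, 64))
img[32, :] = 1.0
_, _, coh = structure_tensor_coherence(img)
```

In the recursive scheme described above, the eigenvectors and eigenvalues of this tensor would shape an anisotropic kernel for the next diffusion step, instead of the fixed Gaussian used here.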
Procedia PDF Downloads 231
528 Assessing Information Dissemination of Group B Streptococcus in Antenatal Clinics, and Obstetricians' and Midwives' Opinions on the Importance of Doing So
Authors: Aakriti Chetan Shah, Elle Sein
Abstract:
Background/purpose: Group B Streptococcus (GBS) is the leading cause of severe early-onset infection in newborns, with the incidence of early-onset GBS disease (EOGBS) in the UK and Ireland rising from 0.48 to 0.57 per 1000 births between 2000 and 2015. A WHO study conducted in 2017 showed that 38.5% of cases can result in stillbirth or infant death. This is an important problem, as 20% of women worldwide have GBS colonisation and can suffer these detrimental effects. Current Royal College of Obstetricians and Gynaecologists (RCOG) guidelines do not recommend bacteriological screening of pregnant women, because antenatal screening correlates poorly with the neonate actually having GBS, but they do advise that a patient information leaflet be given to pregnant women. However, a Healthcare Safety Investigation Branch (HSIB) 2019 learning report found that only 50% of trusts and health boards reported giving GBS information leaflets to all pregnant mothers. Therefore, this audit aimed to assess current practices of information dissemination about GBS at Chelsea & Westminster (C&W) Hospital. Methodology: A quantitative cross-sectional study was carried out using a questionnaire based on the RCOG GBS guidelines and the HSIB learning report. The study was conducted in antenatal clinics at Chelsea & Westminster Hospital from 29th January 2021 to 14th February 2021, with twenty-two practicing obstetricians and midwives participating in the survey. The main outcome measure was the proportion of obstetricians and midwives who disseminate information about GBS to pregnant women, and the reasons why they do or do not. Results: 22 obstetricians and midwives responded, with 18 complete responses, of which 12 were from obstetricians and 6 from midwives. Only 17% of clinical staff routinely inform all pregnant women about GBS, and they do so at varying stages of pregnancy, with an equal split across the first, second, and third trimesters.
Three key factors explained why staff did not inform women about GBS: it was deemed relevant only for patients at high risk of GBS, there was a lack of time in clinic appointments, and no routine NHS screening is available. Interestingly, 58% of staff in the antenatal clinic believe it is necessary to inform all women about GBS and its importance. Conclusion: It is vital for obstetricians and midwives to inform all pregnant women about GBS due to the high prevalence of incidental carriers in the population and the harmful effects it can cause for neonates. Even though most clinicians believe it is important to inform all pregnant women about GBS, most do not. To ensure that RCOG and HSIB recommendations are followed, we recommend that women be given this information at 28 weeks' gestation in the antenatal clinic. Proposed implementations include an information leaflet incorporated into the Mum and Baby app, an informative video, and end-to-end digital clinic documentation including this information-sharing prompt.
Keywords: group B Streptococcus, early-onset sepsis, antenatal care, neonatal morbidity, GBS
Procedia PDF Downloads 178
527 DeepNIC, a Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the universal tool for data classification? Current solutions all consist in repositioning the variables in a 2D matrix according to their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces one image per variable, each analyzable by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine the 10 NICs with the functions AND, OR, and XOR, for a total number of combinations greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of each pixel is proportional to the probability of the associated NIC, and its color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omics data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison over several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
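The NICs themselves come from the authors' ROP/RFPT machinery, but the final rendering step the abstract describes — turning a 2D grid of probabilities into grey levels and assembling per-criterion tiles into one image per variable — can be sketched as follows. This is a hedged illustration: the grid sizes and the random "probabilities" are hypothetical stand-ins, not the authors' NIC values.

```python
import numpy as np

def probs_to_grey(prob_grid):
    """Map a 2D grid of probabilities in [0, 1] to 8-bit grey levels."""
    p = np.clip(np.asarray(prob_grid, dtype=float), 0.0, 1.0)
    return (p * 255).round().astype(np.uint8)

def tile_criteria(grids):
    """Tile per-criterion grey images side by side into one composite image,
    which a basic CNN could then consume as a single input."""
    return np.hstack([probs_to_grey(g) for g in grids])

# Hypothetical example: 10 criteria, each a 32x32 grid of probabilities
rng = np.random.default_rng(0)
grids = [rng.random((32, 32)) for _ in range(10)]
image = tile_criteria(grids)   # one uint8 image per tabular variable
```

The side-by-side tiling is one simple layout choice; the abstract's 1166x1167-pixel images presumably use a richer arrangement of the NIC combinations.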
Procedia PDF Downloads 124
526 Tip-Apex Distance as a Long-Term Risk Factor for Hospital Readmission Following Intramedullary Fixation of Intertrochanteric Fractures
Authors: Brandon Knopp, Matthew Harris
Abstract:
Purpose: The tip-apex distance (TAD) has long been discussed as a metric for determining the risk of fixation failure in peritrochanteric fractures. TAD measurements over 25 millimeters (mm) have been associated with higher rates of screw cut-out and other complications in the first several months after surgery. However, there is limited evidence for the efficacy of this measurement in predicting the long-term risk of negative outcomes following hip fixation surgery. The purpose of our study was to investigate risk factors, including TAD, for hospital readmission, loss of pre-injury ambulation, and development of complications within 1 year after hip fixation surgery. Methods: A retrospective review of proximal hip fractures treated with single-screw intramedullary devices between 2016 and 2020 was performed at a 327-bed regional medical center. Patients included had a postoperative follow-up of at least 12 months or surgery-related complications developing within that time. Results: 44 of the 67 patients in this study met the inclusion criteria with adequate follow-up post-surgery. The cohort comprised 10 males (22.7%) and 34 females (77.3%), with a mean age of 82.1 (± 12.3) years at the time of surgery. The average TAD in our study population was 19.57 mm, and the overall 1-year readmission rate was 15.9%. 3 out of 6 patients (50%) with a TAD > 25 mm were readmitted within one year due to surgery-related complications. In contrast, 3 out of 38 patients (7.9%) with a TAD < 25 mm were readmitted within one year due to surgery-related complications (p = 0.0254). Individual TAD measurements, averaging 22.05 mm in patients readmitted within 1 year of surgery and 19.18 mm in patients not readmitted, were not significantly different between the two groups (p = 0.2113).
Conclusions: Our data indicate significantly lower hospital readmission rates up to one year after hip fixation surgery in patients with a TAD < 25 mm, a decrease in readmissions of over 40 percentage points (50% vs. 7.9%). This result builds upon past investigations by extending the follow-up time to 1 year after surgery and utilizing hospital readmissions as a metric for surgical success. Given the well-documented physical and financial costs of hospital readmission after hip surgery, our study highlights keeping the TAD below 25 mm as an effective method of improving patient outcomes and reducing financial costs to patients and medical institutions. No relationship was found between TAD measurements and the secondary outcomes, loss of pre-injury ambulation and development of complications.
Keywords: hip fractures, hip reductions, readmission rates, open reduction internal fixation
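The group comparison reported in this abstract (3/6 readmissions with TAD > 25 mm vs. 3/38 with TAD < 25 mm) is consistent with a two-sided Fisher's exact test, which can be checked directly from the reported counts. The choice of Fisher's exact test is our assumption; the abstract does not name the test used.

```python
from scipy.stats import fisher_exact

# Readmitted vs. not readmitted within 1 year, by tip-apex distance group:
#                 readmitted   not readmitted
# TAD > 25 mm          3              3
# TAD < 25 mm          3             35
table = [[3, 3], [3, 35]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# p_value is approximately 0.0254, matching the abstract
```

A small-sample exact test is the natural choice here, since with only 6 patients in the high-TAD group the chi-squared approximation would be unreliable.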
Procedia PDF Downloads 144