Search results for: single protoplast
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4583

353 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids

Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka. B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh

Abstract:

The most well-known structures of lyotropic liquid crystalline systems are the two-dimensional hexagonal phase of cylindrical micelles with positive interfacial curvature and the lamellar phase of flat bilayers with zero interfacial curvature. In aqueous surfactant solutions, concentration-dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentration, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature remain unexplored steps on the route of the phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important because it determines the complex rheological behavior of the system, which is of central interest to the soft matter industry. Sodium dodecyl sulfate (SDS), an anionic surfactant, can be considered a unique system in which to tune the electrostatics with cationic additives. In the present study, imidazolium-based ionic liquids (ILs) with different numbers of carbon atoms in their single hydrocarbon chain were used as additives in the aqueous solution of SDS. At a fixed total concentration of the non-aqueous components (SDS and IL), the molar ratio of these components was varied, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is observed to change, and correspondingly the hexagonal liquid crystalline phase transforms into other phases. Polarizing optical microscopy of the SDS and imidazolium-based IL systems exhibited different liquid crystalline textures as a function of increasing IL concentration.
The small angle synchrotron x-ray diffraction (SAXD) study indicated that the hexagonal phase of direct cylindrical micelles transforms to a rectangular phase in the presence of a short-chain (two-carbon) IL. However, the hexagonal phase transforms to a lamellar phase in the presence of a long-chain (ten-carbon) IL. Interestingly, in the presence of a medium-chain (four-carbon) IL, the hexagonal phase transforms to another hexagonal phase of direct cylindrical micelles through an intermediate lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small angle x-ray diffraction study revealed similar lattice parameters for these phases, their rheological behavior was distinctly different. These rheological studies have shed light on how the phases differ in their viscoelastic behavior. Finally, the packing parameters calculated for these phases from the geometry of the aggregates explain the formation of the self-assembled aggregates.
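The packing-parameter argument in the closing sentence follows the standard geometric criterion p = v/(a₀·l_c), where p < 1/3 favors spheres, 1/3 to 1/2 cylinders (hexagonal phase), and 1/2 to 1 bilayers (lamellar phase). A minimal numerical sketch, using illustrative SDS-like values that are assumptions rather than figures from this work:

```python
def packing_parameter(v_tail_nm3, a0_nm2, lc_nm):
    """Critical packing parameter p = v / (a0 * lc):
    v = hydrocarbon tail volume, a0 = optimal headgroup area,
    lc = critical (fully extended) tail length."""
    return v_tail_nm3 / (a0_nm2 * lc_nm)

# Illustrative SDS-like values (assumed): tail volume ~0.35 nm^3,
# headgroup area ~0.62 nm^2, extended tail length ~1.67 nm
p = packing_parameter(0.35, 0.62, 1.67)
# 1/3 < p < 1/2 is consistent with cylindrical micelles (hexagonal phase);
# screening the headgroup charge with a cationic IL reduces a0, raising p
# towards the bilayer (lamellar) regime.
```

Screening by the cationic IL thus provides a simple geometric rationale for the hexagonal-to-lamellar sequence reported above.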

Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction

Procedia PDF Downloads 136
352 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.

Authors: Narjes Abbasabadi

Abstract:

As the world continues to urbanize and suburbanize, with suburbanization in the form of mass sprawl the dominant mode of expansion, sustainable development challenges become more pressing. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO2 emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in the suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objective of this research is to understand how the new morphology of suburban tall buildings, that is, the physical dimensions of individual buildings and their arrangement at a larger scale, relates to energy efficiency. This study aims to answer three questions: 1) Why and how can vertical, high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on 1) heat energy demand (excluding cooling and lighting) related to design at two levels: the macro, urban scale and the micro scale of individual buildings, covering physical dimension, height, morphology, the spatial pattern of tall buildings, and their relationship with each other and with transport infrastructure; and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership.
The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings, and that among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy efficient. This model is associated with much lower building energy demand at the neighborhood level as well as lower transport needs at the urban scale, whereas detached suburban high-rise or low-rise suburban housing has the lowest energy efficiency. The research methodology is quantitative, applying the available literature and static data as well as mapping and visual documentation of urban regions from sources such as Google Earth, Microsoft Bing bird's-eye view, and Street View. Each suburb within each city is examined through satellite imagery to identify the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.

Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)

Procedia PDF Downloads 259
351 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction

Authors: Bruce Wrightsman

Abstract:

Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This separates two agencies of the building: the envelope (skin) and the structure. From a material performance position, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known. However, its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, and challenges to, what it means to build. There are several downsides to this method that are less widely discussed. The first and perhaps biggest is waste. Second, its reliance on wood assemblies forming walls, floors, and roofs conventionally nailed together through simple plate surfaces is structurally inefficient; it requires additional material in plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, the history of wood construction in the airplane and boat manufacturing industries shows a significant transformation in the relationship of structure to skin. Boat construction evolved from indigenous wood practices such as birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane called the Cigare. The wing and tail assemblies consisted of thin, lightweight, often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material.
The monocoque, which translates to 'single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture, yet this uniting of divergent systems has been demonstrated to be lighter, using less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat and airplane building industries, and their design potential for wood building systems in architecture through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system composed of interlocking small wood members to create thin-shell assemblies for the walls, roof, and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.

Keywords: wood building systems, material histories, monocoque systems, construction waste

Procedia PDF Downloads 77
350 Therapeutic Potential of GSTM2-2 C-Terminal Domain and Its Mutants, F157A and Y160A on the Treatment of Cardiac Arrhythmias: Effect on Ca2+ Transients in Neonatal Ventricular Cardiomyocytes

Authors: R. P. Hewawasam, A. F. Dulhunty

Abstract:

The ryanodine receptor (RyR) is an intracellular ion channel that releases Ca2+ from the sarcoplasmic reticulum and is essential for excitation-contraction coupling and contraction in striated muscle. Human muscle-specific glutathione transferase M2-2 (GSTM2-2) is a highly specific inhibitor of cardiac ryanodine receptor (RyR2) activity. Single channel lipid bilayer studies and Ca2+ release assays performed using the C-terminal half of GSTM2-2 and its mutants F157A and Y160A confirmed the ability of the C-terminal domain of GSTM2-2 to specifically inhibit cardiac ryanodine receptor activity. The objective of the present study was to determine the effect of the C-terminal domain of GSTM2-2 (GSTM2-2C) and the mutants F157A and Y160A on the Ca2+ transients of neonatal ventricular cardiomyocytes. Primary cardiomyocytes were cultured from neonatal rats and treated with GSTM2-2C or the two mutants F157A and Y160A at 15 µM for 2 hours. The cells were then loaded with Fluo-4 AM, a fluorescent Ca2+ indicator, and the field-stimulated (1 Hz, 3 V, 2 ms) cells were excited using the 488 nm argon laser. Contractility of the cells was measured, and the Ca2+ transients in the stained cells were imaged using a Leica SP5 confocal microscope. The peak amplitude of the Ca2+ transient, the rise time, and the decay time from the peak were measured for each transient. In contrast to GSTM2-2C, which significantly reduced the % shortening (42.8%) in the field-stimulated cells, F157A and Y160A failed to reduce the % shortening. Analysis revealed that the average amplitude of the Ca2+ transient was significantly reduced (P < 0.001) in cells treated with wild-type GSTM2-2C compared to untreated cells. Cells treated with the mutants F157A and Y160A showed no significant change in the Ca2+ transient compared to the control.
A significant increase in the rise time (P < 0.001) and a significant reduction in the decay time (P < 0.001) were observed in cardiomyocytes treated with GSTM2-2C compared to the control, but not with F157A and Y160A. These results are consistent with the observation that GSTM2-2C significantly reduced Ca2+ release from the cardiac SR, whereas the mutants F157A and Y160A showed no effect compared to the control. GSTM2-2C has an isoform-specific effect on cardiac ryanodine receptor activity and inhibits RyR2 channel activity only during diastole. Selective inhibition of RyR2 by GSTM2-2C has significant clinical potential in the treatment of cardiac arrhythmias and heart failure. Since the GSTM2-2C-terminal construct has no GST enzyme activity, its introduction into the cardiomyocyte would not exert the unwanted side effects associated with its enzymatic action. The present study further confirms that GSTM2-2C is capable of decreasing Ca2+ release from the cardiac SR during diastole. These results raise the future possibility of using GSTM2-2C as a template for therapeutics that can depress RyR2 function when the channel is hyperactive in cardiac arrhythmias and heart failure.

Keywords: arrhythmia, cardiac muscle, cardiac ryanodine receptor, GSTM2-2

Procedia PDF Downloads 283
349 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcases how successful an organization is in holding on to its customers. It is an experimentally proven fact that the lion's share of profit always comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract document, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step of setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequences based on defined parameters are keys to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will cover the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will cover the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings, which will help other teams implement time travel testing successfully.
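As a minimal sketch of the data-preparation idea, the snippet below shifts every date field in a test record so that a renewal event lands inside the desired test window. The record layout and field names are hypothetical, not taken from the paper:

```python
from datetime import date, timedelta

def time_travel(record, days):
    """Return a copy of a test record with every date field shifted by
    `days` (negative values travel backward, positive forward)."""
    shifted = dict(record)
    for key, value in record.items():
        if isinstance(value, date):
            shifted[key] = value + timedelta(days=days)
    return shifted

# Hypothetical renewal record: travel 330 days backward so the renewal-due
# date falls a few weeks ahead of the simulated "today".
policy = {"id": "P-001", "start": date(2023, 1, 1), "renewal_due": date(2024, 1, 1)}
aged = time_travel(policy, -330)
```

In practice the same shift would be applied consistently across all impacted systems (databases, caches, downstream feeds) so the workflow stays aligned, which is exactly the planning concern raised above.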

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 158
348 Comparative Study of Active Release Technique and Myofascial Release Technique in Patients with Upper Trapezius Spasm

Authors: Harihara Prakash Ramanathan, Daksha Mishra, Ankita Dhaduk

Abstract:

Relevance: This study will educate clinicians in putting advanced methods of movement science into practice to restore function. Purpose: The purpose of this study was to compare the effectiveness of the Active Release Technique and the Myofascial Release Technique on range of motion, neck function, and pain in patients with upper trapezius spasm. Methods/Analysis: The study was approved by the institutional Human Research and Ethics Committee. It included sixty patients aged 20 to 55 years with upper trapezius spasm, randomly divided into two groups receiving the Active Release Technique (Group A) or the Myofascial Release Technique (Group B). Patients were treated for 1 week, and three outcome measures, range of motion (ROM), pain, and functional level, were measured using a goniometer, the visual analog scale (VAS), and the Neck Disability Index (NDI) questionnaire, respectively. A paired-sample t-test was used to compare pre- and post-intervention values of cervical range of motion, NDI, and VAS within Group A and Group B. An independent t-test was used to compare the two groups in terms of improvement in cervical range of motion, decrease in VAS, and decrease in NDI score. Results: Both groups showed statistically significant improvements in cervical ROM, reduction in pain, and NDI scores. However, the mean changes in cervical flexion, cervical extension, right side flexion, left side flexion, right side rotation, left side rotation, pain, and neck disability level showed statistically significantly greater improvement (P < 0.05) in the patients who received the Active Release Technique compared to the Myofascial Release Technique.
Discussion and conclusions: In the present study, the average improvement immediately post-intervention was significantly greater than before treatment, and there was even more improvement after seven sessions than after a single session. This indicates that several sessions of manual techniques are necessary to produce clinically relevant results. The Active Release Technique helps to reduce pain by removing adhesions and promoting normal tissue extensibility. The act of tensioning and compressing the affected tissue, both with digital contact and through the active movement performed by the patient, is a plausible mechanism for tissue healing in this study. This study concluded that both the Active Release Technique (ART) and the Myofascial Release Technique (MFR) are effective in managing upper trapezius muscle spasm, but more improvement can be achieved with ART. Impact and implications: The Active Release Technique can be adopted as a mainstay treatment approach for trapezius spasm, offering faster relief and improved functional status.
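The paired-sample t statistic used in the within-group comparisons can be sketched in a few lines; the pre/post range-of-motion values below are made-up illustrations, not the study's data:

```python
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-sample t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the post-minus-pre differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Hypothetical cervical rotation ROM (degrees), before and after treatment
pre = [30, 32, 28, 35, 31]
post = [40, 41, 37, 44, 42]
t_stat = paired_t(pre, post)  # compare against the t distribution with n-1 df
```

The resulting t value is then compared with the t distribution with n-1 degrees of freedom to obtain the reported P values.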

Keywords: trapezius spasm, myofascial release, active release technique, pain

Procedia PDF Downloads 272
347 Co₂Fe LDH on Aromatic Acid Functionalized N Doped Graphene: Hybrid Electrocatalyst for Oxygen Evolution Reaction

Authors: Biswaranjan D. Mohapatra, Ipsha Hota, Swarna P. Mantry, Nibedita Behera, Kumar S. K. Varadwaj

Abstract:

Designing highly active and low-cost oxygen evolution (2H₂O → 4H⁺ + 4e⁻ + O₂) electrocatalysts is one of the most active areas of advanced energy research. Some precious-metal-based electrocatalysts, such as IrO₂ and RuO₂, have shown excellent performance for the oxygen evolution reaction (OER); however, their high cost and low abundance limit their applications. Recently, layered double hydroxides (LDHs), composed of layers of divalent and trivalent transition metal cations coordinated to hydroxide anions, have garnered attention as alternative OER catalysts. However, LDHs are insulators and are therefore coupled with carbon materials for electrocatalytic applications. Graphene covalently doped with nitrogen has been demonstrated to be an excellent electrocatalyst for energy conversion technologies such as the oxygen reduction reaction (ORR), the oxygen evolution reaction (OER), and the hydrogen evolution reaction (HER). However, these catalysts operate at high overpotentials, significantly above the thermodynamic standard potentials. Recently, we reported remarkably enhanced catalytic activity of benzoate- or 1-pyrenebutyrate-functionalized N-doped graphene towards the ORR in alkaline medium. Molecular and heteroatom co-doping is expected to tune the electronic structure of graphene. Therefore, an innovative catalyst architecture in which LDHs are anchored on aromatic acid functionalized N-doped graphene may boost OER activity to a new benchmark. Herein, we report the fabrication of Co₂Fe-LDH on aromatic acid (AA) functionalized N-doped reduced graphene oxide (NG) and study its OER activity in alkaline medium. In the first step, a novel polyol method was applied to synthesize AA-functionalized NG, which disperses well in aqueous medium. In the second step, Co₂Fe LDH was grown on the AA-functionalized NG by a co-precipitation method.
The hybrid samples are abbreviated Co₂Fe LDH/AA-NG, where AA is benzoic acid, 1,3-benzenedicarboxylic acid (BDA), or 1,3,5-benzenetricarboxylic acid (BTA). The crystal structure and morphology of the samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). These studies confirmed the growth of single-phase layered LDH. The electrocatalytic OER activity of the hybrid materials was investigated by the rotating disc electrode (RDE) technique on a glassy carbon electrode, with linear sweep voltammetry (LSV) on the catalyst samples recorded at 1600 rpm. We observed significant OER performance enhancement, in terms of onset potential and current density, on the Co₂Fe LDH/BTA-NG hybrid, indicating a synergistic effect. This exploration of the effect of molecular functionalization in a doped graphene and LDH system may provide an excellent platform for the innovative design of OER catalysts.
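OER performance figures such as onset potential are conventionally converted to an overpotential η, the excess of the measured electrode potential (vs. RHE) over the 1.23 V thermodynamic potential of the O₂/H₂O couple. A trivial sketch of that bookkeeping, with an assumed illustrative onset value rather than one reported here:

```python
E_O2_H2O = 1.23  # thermodynamic water-oxidation potential, V vs. RHE

def oer_overpotential(e_measured_v_rhe):
    """Overpotential eta = E(measured, vs RHE) - 1.23 V; lower eta at a
    given current density means a more active OER catalyst."""
    return e_measured_v_rhe - E_O2_H2O

# Assumed (illustrative) onset potential of 1.53 V vs RHE -> eta = 0.30 V
eta = oer_overpotential(1.53)
```

Comparing η at a fixed current density (commonly 10 mA cm⁻²) is what makes onset-potential comparisons between hybrids such as those above meaningful.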

Keywords: π-π functionalization, layered double hydroxide, oxygen evolution reaction, reduced graphene oxide

Procedia PDF Downloads 205
346 Inhibition of Influenza Replication through the Restrictive Factors Modulation by CCR5 and CXCR4 Receptor Ligands

Authors: Thauane Silva, Gabrielle do Vale, Andre Ferreira, Marilda Siqueira, Thiago Moreno L. Souza, Milene D. Miranda

Abstract:

The exposure of A(H1N1)pdm09-infected epithelial cells (HeLa) to HIV-1 viral particles, or to its gp120, enhanced the content of interferon-induced transmembrane protein 3 (IFITM3), a viral restriction factor (RF), resulting in a decrease in influenza replication. gp120 binds to the CCR5 (R5) or CXCR4 (X4) cell receptors during HIV-1 infection. It is therefore possible that the endogenous ligands of these receptors also modulate the expression of IFITM3 and other cellular factors that restrict influenza virus replication. Thus, the aim of this study was to analyze the role of the cellular receptors R5 and X4 in modulating RFs to inhibit influenza virus replication. A549 cells were treated with 2x the effective dose (ED50) of the endogenous R5 or X4 receptor agonists CCL3 (20 ng/mL), CCL4 (10 ng/mL), CCL5 (10 ng/mL), and CXCL12 (100 ng/mL), or with the exogenous agonists gp120 Bal-R5, gp120 IIIB-X4, and their mutants (5 µg/mL). Interferon α (10 ng/mL) and oseltamivir (60 nM) were used as controls. At 24 h post-agonist exposure, the cells were infected with influenza A(H3N2) virus at a multiplicity of infection (MOI) of 2 for 1 h. Then, 24 h post-infection, the supernatant was harvested and the viral titre was evaluated by qRT-PCR. To evaluate the protein levels of IFITM3 and of SAM and HD domain containing deoxynucleoside triphosphate triphosphohydrolase 1 (SAMHD1), A549 cells were exposed to the agonists for 24 h, and the monolayer was lysed with Laemmli buffer for western blot (WB) assay or fixed for indirect immunofluorescence (IFI) assay. In addition, we analyzed the modulation of other RFs in A549 cells 24 h after agonist exposure using a customized RT² Profiler Polymerase Chain Reaction Array. We also performed a functional assay in which A549 cells knocked down for SAMHD1 by small interfering RNA (siRNA) were infected with A(H3N2). In addition, the cells were treated with guanosine to assess the regulatory role of dNTPs by SAMHD1.
We found that the R5 and X4 agonists inhibited influenza replication by 54 ± 9%. We observed a four-fold increase in SAMHD1 transcripts in the RF mRNA quantification panel. At 24 h post-agonist exposure, we did not observe an increase in IFITM3 protein levels by WB or IFI assays, but we observed an up to three-fold upregulation of SAMHD1 protein content in A549 cells exposed to the agonists. Moreover, influenza replication was enhanced by 20% in cell cultures in which SAMHD1 was knocked down. Guanosine treatment of cells exposed to R5 ligands further inhibited influenza virus replication, suggesting that the inhibitory mechanism may involve activation of the deoxynucleotide triphosphohydrolase activity of SAMHD1. Thus, our data show for the first time a direct relationship between SAMHD1 and the inhibition of influenza replication, and provide perspectives for new studies on modulating signaling through cellular receptors to induce proteins of great importance in the control of infections relevant to public health.

Keywords: chemokine receptors, gp120, influenza, virus restriction factors

Procedia PDF Downloads 140
345 Birth Weight, Weight Gain and Feeding Pattern as Predictors for the Onset of Obesity in School Children

Authors: Thimira Pasas P, Nirmala Priyadarshani M, Ishani R

Abstract:

Obesity is a global health issue. Early identification is essential for planning interventions that reduce the worsening of obesity and its consequences for the health of the individual. Childhood obesity is multifactorial, with both modifiable and unmodifiable risk factors: a genetically susceptible individual (unmodifiable), when placed in an obesogenic environment (modifiable), is likely to become obese. The present study was conducted to identify the age of onset of childhood obesity and the influence of modifiable risk factors for childhood obesity among school children living in a suburban area of Sri Lanka. The study population comprised 11-12-year-old children attending government schools in the Piliyandala Educational Zone. A stratified random sampling method was used to select schools so that the representative sample included all 3 types of government schools; owing to the prevailing pandemic situation, information from the last school medical inspection (2020 data) was used for this purpose. For each obese child identified, 2 non-obese children were selected as controls, using a systematic random sampling method with a sampling interval of 3. Data were collected using a validated, pre-tested self-administered questionnaire and the Child Health Development Record of each child. An introduction, including explanations and instructions for filling in the questionnaire, was carried out as a group activity prior to distributing the questionnaire to the sample. The results of the present study aligned with the hypothesis that the age of onset of childhood obesity, and hence the window for prediction, lies within the first two years of life.
A total of 130 children (66 males, 64 females) participated in the study. The age of onset of obesity was within the first two years of life. Obesity risk at 11-12 years of age was 3 times higher among females who underwent rapid weight gain during infancy. Consuming a mug of milk before breakfast emerged as a risk factor that increased the obesity risk 3-fold, especially among females. Proper monitoring must be carried out to identify rapid weight gain, especially within the first 2 years of life. Identification of the confounding factors, proper awareness among mothers/guardians, and effective interventions are needed to reduce the obesity risk among school children in the future.
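Risk multipliers like the "3 times higher" figures above are typically computed as odds ratios from a 2x2 case-control table. A minimal sketch with hypothetical counts, not the study's data:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table:
    OR = (cases exposed / cases unexposed) / (controls exposed / controls unexposed)
       = (a * d) / (b * c)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts: 30 obese vs 20 non-obese among rapid infant weight
# gainers, 15 obese vs 30 non-obese among the rest -> OR = (30*30)/(15*20)
or_rapid_gain = odds_ratio(30, 15, 20, 30)
```

An OR of about 3 would correspond to the threefold elevation in risk reported for rapid infancy weight gain.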

Keywords: childhood obesity, school children, age of onset, weight gain, feeding pattern, activity level

Procedia PDF Downloads 139
344 Auto Surgical-Emissive Hand

Authors: Abhit Kumar

Abstract:

The world is full of master-slave telemanipulators in which the doctor masters the console and the surgical arm performs the operations; these are passive robots. Because such passive robots still require doctors to operate the console, the concept of robotics is not yet fully utilized, and the focus should shift to active robots. The Auto Surgical-Emissive Hand (AS-EH) applies the concept of active robotics: this anthropomorphic hand focuses on autonomous surgical, emissive, and scanning operation, enabled by a 3-way emission system, a laser beam, icy steam between -5°C and 5°C, and a thermal imaging camera (TIC), embedded in the palm of the hand and structured in the form of a 3-way disc. The fingers of the AS-EH will have tactile, force, and pressure sensors rooted in them so that force, pressure, and physical contact with the external subject can be maintained. The main focus, however, is on the concept of "emission". The question arises how these 3 unrelated methods will work together, merged in a single programmed hand: each method is used according to the needs of the external subject. The laser, if used, is emitted via a pin-sized outlet; the radiation is channelized through a thin channel that connects internally to the palm of the surgical hand and leads to the outlet. The laser emits radiation sufficient to cut open the skin for removal of metal scrap or other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image; this allows the state of the procedure to be analyzed, and the ATC helps determine elevated body temperature while the operation proceeds. The thermal imaging
camera is rooted internally in the AS-EH and is also connected to real-time software externally to provide live feedback. The icy steam provides a cooling effect before and after the operation. The working principle can be understood from a simple observation: if a finger remains in icy water for a long time, blood flow stops and the area becomes numb and isolated; even a pinch produces no sensation, because no nerve impulse is coordinated with the brain and the sensory receptors are not activated. Using the same principle, icy steam at a temperature below 273 K can be emitted via a pin-sized hole onto the area of concern, frosting it before the operation is performed; the steam can also be used to desensitize pain while the operation is in progress. The mathematical calculations, algorithms, and programming for the working and movement of this hand will be installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; hence this robot will perform surgical procedures of low complexity only.

Keywords: active robots, algorithm, emission, icy steam, TIC, laser

Procedia PDF Downloads 356
343 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, methods such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is among the most effective of these techniques. It has high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method that registers the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the total internal reflection regime allows one to study biological fluids at the level of single molecules, which also increases the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble; this is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a PIN diode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated, both in the regime of Brownian motion and under the action of the field, to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
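The signal-processing chain described above, computing the autocorrelation function and the Fourier spectrum of the digitized photodetector signal, can be sketched as follows. This is an illustrative sketch only; the function names and the synthetic test signal are not from the authors' setup.

```python
import numpy as np

def autocorrelation(signal):
    """Normalized autocorrelation of a signal (one-sided, lag 0 first)."""
    x = signal - np.mean(signal)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def power_spectrum(signal, sample_rate):
    """One-sided power spectrum via the FFT."""
    x = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    return freqs, spectrum
```

For a particle drifting at constant electrophoretic velocity, the scattered-light signal contains a Doppler beat whose frequency appears as a peak in this spectrum, while Brownian motion broadens the autocorrelation decay.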

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 213
342 Comparative Economic Evaluation of Additional Respiratory Resources Utilized after Methylxanthine Initiation for the Treatment of Apnea of Prematurity in a South Asian Country

Authors: Shivakumar M, Leslie Edward S Lewis, Shashikala Devadiga, Sonia Khurana

Abstract:

Introduction: Methylxanthines are used for the treatment of AOP, to facilitate extubation, and as prophylactic agents to prevent apnea. Though the popularity of caffeine has risen, it is expensive in resource-constrained developing countries like India. Objective: To evaluate the cost-effectiveness of caffeine compared with aminophylline treatment for AOP with respect to the additional ventilatory resources utilized in different birth weight categories. Design, Settings and Participants: A single-centered, retrospective economic evaluation was done. Participants included preterm newborns of < 34 completed weeks of gestational age recruited under an Indian Council of Medical Research funded randomized clinical trial. Per-protocol data were included from the Neonatal Intensive Care Unit, Kasturba Hospital, Manipal, India between April 2012 and December 2014. Exposure: Preterm neonates were randomly allocated to either caffeine or aminophylline as per the trial protocol. Outcomes and Measures: We assessed surfactant requirement, duration of invasive and non-invasive ventilation, total methylxanthine cost, and the additional cost for respiratory support borne by the payers per day during the hospital stay. For the purpose of this study, newborns were stratified as Category A (< 1000 g), Category B (1001-1500 g), and Category C (1501-2500 g). Results: In total, 146 babies (caffeine, 72; aminophylline, 74) with a mean ± SD gestational age of 29.63 ± 1.89 weeks were assessed; 32.19% were in Category A, 55.48% in Category B, and 12.33% in Category C. The difference in median duration of additional NIV and IMV support was statistically insignificant. However, 60% of neonates who received caffeine required additional surfactant therapy (p=0.02). The total median (IQR) cost of caffeine was significantly higher, at Rs. 10535 (Q1 6317.50, Q3 15992.50), whereas the aminophylline cost was Rs. 352 (Q1 236, Q3 709) (p < 0.001).
The additional cost spent on respiratory support per day in neonates on either methylxanthine was found to be statistically insignificant in every weight-based category of our study, whereas in Category B the median O2 charges per day were higher in caffeine-treated newborns, with borderline significance (p=0.05). In Category A, providing one day of NIV or IMV support significantly increased the unit log cost of caffeine by 13.6% (95% CI 4 to 24; p=0.005) over the log cost of aminophylline. Conclusion: Caffeine is more expensive than aminophylline, though it was found to be equally efficacious in reducing the duration of NIV or IMV support. However, adjusted for NIV and IMV days of support, neonates in Categories A and B who were on caffeine paid excess respiratory charges per day over aminophylline. From the perspective of resource-poor settings, aminophylline is cost-saving and economically approachable.

Keywords: methylxanthines (caffeine and aminophylline), AOP (apnea of prematurity), IMV (invasive mechanical ventilation), NIV (non-invasive ventilation), birth weight categories (A: < 1000 g; B: 1001-1500 g; C: 1501-2500 g)

Procedia PDF Downloads 432
341 Maternal Risk Factors Associated with Low Birth Weight Neonates in Pokhara, Nepal: A Hospital Based Case Control Study

Authors: Dipendra Kumar Yadav, Nabaraj Paudel, Anjana Yadav

Abstract:

Background: Low birth weight (LBW) is defined as a weight at birth of less than 2500 grams, irrespective of the period of gestation. LBW is an important indicator of the general health status of a population and is considered the single most important predictor of infant mortality, especially of deaths within the first month of life; that is, birth weight determines a newborn's chances of survival. The objective of this study was to identify the maternal risk factors associated with low birth weight neonates. Materials and Methods: A hospital-based case-control study was conducted in the maternity ward of Manipal Teaching Hospital, Pokhara, Nepal from 23 September 2014 to 12 November 2014. During the study period, 59 cases were obtained, and twice that number of controls (118) were selected, frequency-matched on the mother's age within ± 3 years. An interview schedule was used for data collection, along with record review. Data were entered in the Epi-data program and analyzed with the SPSS software program. Results: From bivariate logistic regression analysis, eighteen variables were found to be significantly associated with LBW: place of residence, family monthly income, education, previous stillbirth, previous LBW, history of STD, history of vaginal bleeding, anemia, ANC visits, fewer than four ANC visits, de-worming status, counseling during pregnancy, CVD, physical workload, stress, extra meals during pregnancy, smoking, and alcohol consumption status. However, after adjusting for confounding variables, only six variables remained significantly associated with LBW. Mothers with a family monthly income up to ten thousand rupees were 4.83 times more likely to deliver an LBW baby (CI 1.5-40.645, p=0.014) compared to mothers with a family income of NRs. 20,001-60,000. Mothers with a previous stillbirth were 2.01 times more likely to deliver an LBW baby (CI 0.69-5.87, p=0.02) compared to mothers without a previous stillbirth.
Mothers with a previous LBW baby were 5.472 times more likely to deliver an LBW baby (CI 1.2-24.93, p=0.028) compared to mothers without a previous LBW baby. Mothers with anemia during pregnancy were 3.36 times more likely to deliver an LBW baby (CI 0.77-14.57, p=0.014) compared to mothers without anemia. Mothers who delivered a female newborn were 2.96 times more likely to have an LBW baby (95% CI 1.27-7.28, p=0.01) compared to mothers who delivered a male newborn. Mothers who did not get extra meals during pregnancy were 6.04 times more likely to deliver an LBW baby (CI 1.11-32.7, p=0.037) compared to mothers who got extra meals during pregnancy. Mothers who consumed alcohol during pregnancy were 4.83 times more likely to deliver an LBW baby (CI 1.57-14.83, p=0.006) compared to mothers who did not. Conclusions: Low birth weight can be reduced through the economic empowerment of families and of individual women. Prevention and control of anemia during pregnancy is another strategy for reducing LBW: mothers should take the full dose of iron supplements, with screening of hemoglobin levels. Extra nutritional food should be provided to women during pregnancy. Health promotion programs should focus on the avoidance of alcohol and on strengthening health services so as to increase the use of maternity services.
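The adjusted odds ratios above come from multivariable logistic regression; the underlying measure for a single exposure in a case-control design is the 2x2-table odds ratio with a Wald confidence interval, sketched below. The numbers in the example are illustrative, not the study's data.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 case-control table."""
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the four cell counts
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, (lower, upper)
```

A crude OR like this is then adjusted for confounders by entering all significant variables into a multivariable logistic model, as done in the study.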

Keywords: low birth weight, case-control, risk factors, hospital based study

Procedia PDF Downloads 299
340 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors

Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria

Abstract:

The constant demand for electrical energy, as well as increasing environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which in turn implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double-layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries are able to store high energy densities, contrary to capacitors, whose main strengths are high power density and long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Owing to the abundance and affordability of carbon, dual carbon-based LICs are nowadays the common technology. Normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are potential systems for applications in which both high energy and high power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility is to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables storing twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by a high volume of micropores, in which the charge is stored. Nevertheless, they normally do not show mesopores, which are important mainly at high C-rates, as they act as transport channels for the ions to reach the micropores.
Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, ACs are not characterized by high electrical conductivity, which is important for good performance in energy storage applications. Possible candidates to substitute for ACs are carbon aerogels (CAs). CAs are materials that combine high porosity with great electrical conductivity, characteristics that are usually opposed in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC against a hard carbon anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.

Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels

Procedia PDF Downloads 165
339 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs

Authors: John Farr Rothrock

Abstract:

Although migraine traditionally has been assumed to convey a relative decrease in libido, sexual activity and sexual performance, recent data have suggested that the female migraine population is far from homogenous in this regard. We sought to determine the levels of libido, sexual activity and sexual performance in the female migraine patient population both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients ages 25-55 initially presenting to a university-based headache clinic and having a >1 year history of migraine were asked to complete anonymously a survey assessing their sexual histories generally and as they related to their headache disorder and the 19-item Female Sexual Function Index (FSFI). To serve as 2 separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population but matched for age, marital status, educational background and socioeconomic status completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion utilized for “sexually active” (ie, heterosexual intercourse at a frequency of > once per month in each of the preceding 6 months). In all groups younger age (p<.005), higher educational level attained (p<.05) and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92). 
Patients with low frequency episodic migraine (LFEM) and mid frequency episodic migraine (MFEM) reported a higher FSFI score, higher monthly frequency of intercourse, higher likelihood of intercourse resulting in orgasm, and higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92=9%), a higher proportion of patients in the LFEM (12/49=25%), MFEM (14/67=21%) and high frequency episodic migraine (HFEM: 6/14=43%) subgroups reported utilizing intercourse, and orgasm specifically, as a means of potentially terminating a migraine attack. Between the clinic and non-clinic groups there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse, and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM and HFEM appear to utilize intercourse/orgasm as a means to potentially terminate an acute migraine attack.

Keywords: migraine, female, libido, sexual activity, phenotype

Procedia PDF Downloads 76
338 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. This technique is known as the observable method, after the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which apply at the level of the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flows often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, demonstrating its capability to simultaneously regularize shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a key feature with shocks and turbulence: the nonlinear irregularity caused by the nonlinear terms in the governing equations, namely the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales that sharpen the interface, causing discontinuities.
Many numerical methods for two-phase flows fail in the high Reynolds number case, while others depend on the numerical diffusion introduced by the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as their positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
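The abstract describes the filtering of the convective terms only qualitatively. A common realization of this kind of inviscid regularization, assumed here for illustration only, replaces the convective velocity with a Helmholtz-filtered (low-pass) field, with the filter width set by the observable scale. The 1D periodic pseudo-spectral sketch below is not necessarily the authors' exact formulation.

```python
import numpy as np

def helmholtz_filter_1d(u, dx, alpha):
    """Low-pass filter in Fourier space: u_bar = (1 - alpha^2 d^2/dx^2)^{-1} u.
    alpha plays the role of the observable scale (about a grid length)."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    u_hat = np.fft.fft(u)
    return np.real(np.fft.ifft(u_hat / (1 + (alpha * k) ** 2)))

def observable_convective_term(u, dx, alpha):
    """Regularized convective term u_bar * du/dx, with the derivative
    computed pseudo-spectrally (no numerical diffusion)."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    du_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return helmholtz_filter_1d(u, dx, alpha) * du_dx
```

Only the advecting velocity is filtered, so the sharpening mechanism is damped below the observable scale while the inviscid character of the equations is preserved.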

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 498
337 Is Liking for Sampled Energy-Dense Foods Mediated by Taste Phenotypes?

Authors: Gary J. Pickering, Sarah Lucas, Catherine E. Klodnicki, Nicole J. Gaudette

Abstract:

Two taste phenotypes that are of interest in the study of habitual diet-related risk factors and disease are 6-n-propylthiouracil (PROP) responsiveness and thermal tasting. Individuals differ considerably in how intensely they experience the bitterness of PROP, which is partially explained by three major single nucleotide polymorphisms in the TAS2R38 gene. Importantly, this variable responsiveness is a useful proxy for general taste responsiveness and is linked to diet-related disease risk, including body mass index, in some studies. Thermal tasting, a newly discovered taste phenotype independent of PROP responsiveness, refers to the capacity of many individuals to perceive phantom tastes in response to lingual thermal stimulation, and is linked with TRPM5 channels. Thermal tasters (TTs) also experience oral sensations more intensely than thermal non-tasters (TnTs), and this was shown to be associated with differences in self-reported food preferences in a previous survey from our lab. Here we report on two related studies in which we sought to determine whether PROP responsiveness and thermal tasting are associated with perceptual differences in the oral sensations elicited by sampled energy-dense foods, and whether this in turn influences liking. We hypothesized that hyper-tasters (thermal tasters and individuals who experience PROP intensely) would (a) rate sweet and high-fat foods more intensely than hypo-tasters, and (b) differ from hypo-tasters in liking scores. (Liking has recently been proposed as a more accurate measure of actual food consumption.) In Study 1, a range of energy-dense foods and beverages, including table cream and chocolate, was assessed by 25 TTs and 19 TnTs. Ratings of oral sensation intensity and overall liking were obtained using gVAS and gDOL scales, respectively. TTs and TnTs did not differ significantly in intensity ratings for most stimuli (ANOVA).
In a second study, 44 female participants sampled 22 foods and beverages, assessing them for intensity of oral sensations (gVAS) and overall liking (9-point hedonic scale). TTs (n=23) rated their overall liking of creaminess and milk products lower than did TnTs (n=21), and liked milk chocolate less. PROP responsiveness was negatively correlated with liking of foods and beverages belonging to the sweet sensory grouping. No other differences in intensity or liking scores between hyper- and hypo-tasters were found. Taken overall, our results are somewhat unexpected, lending only modest support to the hypothesis that these taste phenotypes are associated with energy-dense food liking and consumption through differences in the oral sensations such foods elicit. Reasons for this lack of concordance with expectations and some prior literature are discussed, and suggestions for future research are advanced.

Keywords: taste phenotypes, sensory evaluation, PROP, thermal tasting, diet-related health risk

Procedia PDF Downloads 454
336 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach

Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista

Abstract:

Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management in the field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models that enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information and valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for the 3D geovisualisation of satellite images, introducing scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, along with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks.
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
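The abstract says only that feature maps from the two encoder-decoder branches are combined; the exact fusion operator is not specified. A minimal NumPy sketch of one plausible choice, channel concatenation followed by a learned 1x1 projection, is shown below. The function name, shapes, and weights are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def fuse_feature_maps(feat_optical, feat_dem, w):
    """Channel-wise fusion of two encoder feature maps via a 1x1 convolution.
    feat_optical, feat_dem: arrays of shape (C, H, W) from the two branches.
    w: (C_out, 2C) projection weights (a 1x1 conv is a per-pixel matmul)."""
    stacked = np.concatenate([feat_optical, feat_dem], axis=0)  # (2C, H, W)
    c2, h, width = stacked.shape
    flat = stacked.reshape(c2, h * width)
    fused = w @ flat                                            # (C_out, H*W)
    return fused.reshape(w.shape[0], h, width)
```

In a trained network, w would be learned jointly with the decoder that regresses dense depth from the fused features.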

Keywords: depth, deep learning, geovisualisation, satellite images

Procedia PDF Downloads 5
335 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the requirement of relevance in discourse. However, Katsos' experimental results showed that although, quantitatively, adults rejected under-informative utterances with lexical scales (context-independent) and with ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation in utterances with lexical scales as much more severe than with ad hoc scales. Neither default accounts nor Relevance Theory can fully explain this result. Two questions therefore arise: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? That is, do participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture materials will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room in our lab. The reading time of the target parts, i.e., the words containing scalar implicatures, will be recorded.
We presume that in the group with lexical scales, a standardized, pragmatically primed mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature can hardly be generated without the support of a fixed mental context for the scale. Thus, whether the new input is informative or not will not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and context factors in scalar implicature processing. Based on our experiments, we might be able to conclude that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicature, the semantic and pragmatic interpretations may interact dynamically in the mind. For lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let a default or standardized paradigm override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 320
334 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the reduction in that error after fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because their features are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
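The three measures correlated with memorability (per-image reconstruction error, latent-space distinctiveness as nearest-neighbor distance, and the Pearson correlation itself) can be sketched as below. This is a minimal, illustrative NumPy reconstruction; the function names and array layout are assumptions, not the authors' code.

```python
import numpy as np

def reconstruction_error(originals, reconstructions):
    """Per-image mean squared error between originals and reconstructions."""
    diff = (originals - reconstructions).reshape(len(originals), -1)
    return (diff ** 2).mean(axis=1)

def distinctiveness(latents):
    """Euclidean distance from each latent vector to its nearest neighbor."""
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each image's zero self-distance
    return d.min(axis=1)

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D score vectors."""
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())
```

In the study's pipeline, `originals` and `reconstructions` would be image tensors from the fine-tuned autoencoder, `latents` its bottleneck activations, and the scores would be correlated against the MemCat memorability values.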

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 88
333 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there remain many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. An evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered between iterations. But the antenna parameters and geometries are too varied to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries; their variation is learnt by mining the data obtained from an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulation. HFSS is chosen for simulations and results.
MATLAB is used to generate the computations, combinations, and data logging. MATLAB is also used to apply machine learning algorithms and to plot the data used to design the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric properties; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
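The evolutionary loop described above (random candidates, fitness-based selection, recombination, mutation) can be sketched as follows. This is a generic illustration with a toy quality function; in the actual workflow the fitness for each candidate geometry would come from an HFSS simulation, and all parameter names here are assumptions.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=50, mut_rate=0.1, seed=0):
    """Minimal generational GA: truncation selection, one-point crossover,
    per-gene Gaussian mutation clipped to the parameter bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]              # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]                  # one-point crossover
            for i, (lo, hi) in enumerate(bounds):      # mutate each gene
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = parents + children                       # elitism: parents survive
    return max(pop, key=fitness)
```

Here `fitness` stands in for the HFSS-derived quality measure (e.g., gain over a band) and `bounds` for the permissible ranges of the geometric parameters, which in the paper are fixed in advance from the design equations.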

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 108
332 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition

Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi

Abstract:

Heat transfer in power law fluids across cylindrical surfaces has copious engineering applications. These applications comprise areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer, and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantum of research work studying the effects of combined heat transfer and fluid flow across porous media under diversified conditions has increased considerably over the last few decades. The non-Newtonian fluids of most practical interest are highly viscous and therefore are often processed in the laminar flow regime. Several studies have been performed to investigate the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power law fluids along a wedge. In this study, a boundary layer analysis of radiation-mixed convection in power law fluids along a vertical wedge in a porous medium has been carried out using an implicit finite difference method (the Keller box method). Steady, 2-D laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate through a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. The effects of the radiation parameter, variable heat flux parameter, wedge angle parameter 'm', and mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids.
The results are also compared with the available data for heat transfer in the prescribed range of parameters and found to be in good agreement. Detailed results for the dimensionless local Nusselt number and the temperature and velocity fields are also presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle increases, the Nusselt number decreases, whereas it increases with the heat flux parameter at a given value of the mixed convection parameter. It is also observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids conduct heat better than Newtonian fluids, which in turn conduct better than dilatant fluids. All fluids behave identically in the pure forced convection domain.
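Under the stated Darcy, Boussinesq, and Rosseland approximations, the governing boundary layer equations for a power law fluid of index n typically take a form like the following. This is an illustrative reconstruction from standard porous-medium formulations, not necessarily the authors' exact model.

```latex
\begin{align}
  \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} &= 0
  && \text{(continuity)} \\
  u^{n} &= u_{\infty}^{\,n} + \frac{\rho_{\infty} g \beta K^{*}}{\mu^{*}}\,(T - T_{\infty})
  && \text{(modified Darcy law, mixed convection)} \\
  u\,\frac{\partial T}{\partial x} + v\,\frac{\partial T}{\partial y}
  &= \left(\alpha + \frac{16\,\sigma\,T_{\infty}^{3}}{3\,k^{*}\rho c_{p}}\right)
     \frac{\partial^{2} T}{\partial y^{2}}
  && \text{(energy with linearized Rosseland flux)}
\end{align}
```

Here $K^{*}$ and $\mu^{*}$ are the modified permeability and consistency index for the power law fluid, $\sigma$ is the Stefan-Boltzmann constant, and $k^{*}$ the mean absorption coefficient; the radiation parameter grouping $16\sigma T_{\infty}^{3}/3k^{*}$ is what the study varies.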

Keywords: porous medium, power law fluids, surface heat flux, vertical wedge

Procedia PDF Downloads 311
331 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency

Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar

Abstract:

In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric for assessing circularity performance presents a significant challenge in prioritizing and effectively managing sustainable initiatives. This circularity index serves as a quantitative measure to evaluate the adherence of products, processes, and systems to the principles of a circular economy. Unlike traditional standalone metrics such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in a single metric, also incorporating additional factors such as reusability, scarcity of materials, repairability, and recyclability. Through a systematic approach, and by reviewing existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to diverse product portfolios and assist in comparing the circularity of products on a scale of 0%-100%. Project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis, and refinement, whose key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make necessary adjustments. Major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions.
The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, essentially assisting towards better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhance circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
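A composite index of this kind is commonly computed as a weighted mean of normalized sub-scores reported as a percentage. The sketch below is only an illustration of that pattern; the dimension names and the default equal weighting are assumptions, since the abstract does not disclose the PCI formula.

```python
def product_circularity_index(scores, weights=None):
    """Weighted mean of sub-scores in [0, 1], returned as a 0-100% figure.

    scores:  dict mapping a circularity dimension (e.g. 'longevity',
             'repairability', 'material_efficiency') to a value in [0, 1].
    weights: optional dict of relative weights; defaults to equal weighting.
    """
    if weights is None:
        weights = {k: 1.0 for k in scores}
    total_w = sum(weights[k] for k in scores)
    pci = sum(scores[k] * weights[k] for k in scores) / total_w
    return round(100 * pci, 1)
```

For example, a product scoring 0.8 on longevity, 0.6 on repairability, and 0.4 on recyclability would receive an equal-weight index of 60%, making products directly comparable on the paper's 0%-100% scale.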

Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index

Procedia PDF Downloads 27
330 Remote Radiation Mapping Based on UAV Formation

Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov

Abstract:

High-fidelity radiation monitoring is an essential component in enhancing the situational awareness capabilities of the Department of Energy’s Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous swarm of multicopter-based UAVs in a 3D tetrahedron formation surveys the area of interest and performs radiation source localization. The CZT sensor used in this study suits small multicopter UAVs because of its compact size and ease of interfacing with the UAV’s onboard electronics, while enabling high-resolution gamma spectroscopy for the characterization of radiation hazards. The multicopter platform, with its fully autonomous flight capability, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector that gives the swarm its source-seeking behavior. A spinning formation is also studied in both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
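The gradient estimation at the heart of the source-seeking behavior can be sketched as a least-squares fit over the formation's sensor readings: each UAV's position offset from the formation centroid, paired with its count-rate offset from the mean, constrains the local field gradient. This is an illustrative reconstruction, not the authors' algorithm.

```python
import numpy as np

def estimate_gradient(positions, readings):
    """Least-squares estimate of the local radiation field gradient.

    positions: (N, 3) array of UAV positions (N=4 for the tetrahedron).
    readings:  (N,) array of count rates measured at those positions.
    Solves (p_i - p_mean) . g ~= c_i - c_mean for g; the swarm's bulk
    heading vector points along +g, toward increasing intensity.
    """
    P = np.asarray(positions, float)
    c = np.asarray(readings, float)
    A = P - P.mean(axis=0)          # position offsets from the centroid
    b = c - c.mean()                # reading offsets from the mean
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g
```

With four non-coplanar vertices the 3D gradient is fully determined, which is the geometric advantage of the tetrahedron over the planar circular formation; spinning the formation effectively adds more sampling points to the same fit.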

Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation

Procedia PDF Downloads 97
329 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management

Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal

Abstract:

Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse this massive data. BDA can assist organisations to capture, store, and analyse data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, this raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the BDA potential of the whole pharmaceutical supply chain rather than focusing on a single entity.
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can give pharmaceutical entities improved visibility over the whole supply chain and the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can likely address drug demand at hospitals and respond to unanticipated issues such as drug shortages. Earlier studies explore BDA in the context of clinical healthcare; however, this presentation investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers’ insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality where managers may opt for analytics for improved decision-making in supply chain processes.

Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management

Procedia PDF Downloads 106
328 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets

Authors: Basiru Amuneni

Abstract:

Astronomy is one domain experiencing a rapid rise in data volume. Traditional tools for data management have been employed in the quest for knowledge discovery. However, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of the work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualize multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics can be exploited for scientific visualization, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by the telescope several times at different electromagnetic wavelengths to give a more comprehensive picture of the physical characteristics of the galaxy. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity Game Engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails ‘using design as a research method or technique’. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation, and VR display. The FITS data format cannot be read by the Unity Game Engine directly, so a DLL (CSHARPFITS) that provides native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D Game Engine.
The cleaned FITS images are then input to the galaxy modeller phase of the pipeline, where a pre-processing script extracts pixel values, computes each galaxy's world position, and colour-maps the FITS image pixels. The user can visualise galaxy images in different light bands, control the blend of the image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near real-time interactivity, and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change the colour gradients of the galaxy. The findings and design lessons learnt in implementing different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
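The colour-mapping step of such a pipeline amounts to normalizing raw FITS pixel values into the [0, 1] range a game-engine texture expects, usually with percentile clipping to suppress hot pixels. A minimal sketch of that step (in Python rather than the C#/CSHARPFITS setting used by the application; the function name and default percentiles are assumptions):

```python
import numpy as np

def pixels_to_colors(pixels, lo_pct=1.0, hi_pct=99.0):
    """Map raw FITS image pixels to [0, 1] intensities for a texture.

    Percentile clipping suppresses hot pixels before normalization; in
    the actual pipeline the array would come from a FITS reader such as
    astropy.io.fits (Python) or CSHARPFITS (Unity/C#).
    """
    data = np.nan_to_num(np.asarray(pixels, float))  # FITS blanks -> 0
    lo, hi = np.percentile(data, [lo_pct, hi_pct])
    return np.clip((data - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
```

Running the same normalization per waveband, with a different colour gradient per band, is what allows the blended and fused multiwavelength views the abstract describes.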

Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality

Procedia PDF Downloads 88
327 Safety and Efficacy of RM-001, Autologous HBG1/2 Promoter-Modified CD34+Hematopoietic Stem and Progenitor Cells, in Transfusion-Dependent β-Thalassemia

Authors: Rongrong Liu, Li Wang, Hui Xu, Jianpei Fang, Sixi Liu, Xiaolin Yin, Junbin Liang, Gaohui Yan, Yaoyun Li, Yali Zhou, Xinyu Li, Yue Li, Lei Shi, Yongrong Lai, Junjiu Huang, Xinhua Zhang

Abstract:

Background: Beta-thalassemia is caused by reduced (β+) or absent (β0) synthesis of the β-globin chains of hemoglobin. Transfusions and oral iron chelation therapy have improved the quality of life for patients with transfusion-dependent thalassemia (TDT). Recent advances in CRISPR-Cas9 genome editing platforms have paved the way for induction of HbF by reactivating expression of the γ-chain. Aims: We performed CRISPR-Cas9-mediated genome editing of hematopoietic stem cells to mutate the HBG1/HBG2 promoter sequence, thereby recreating a naturally occurring HPFH-like mutation, producing RM-001. Here, we present an initial assessment of the safety and efficacy of RM-001 in patients with TDT. Methods: Patients (6–35 y of age) with TDT receiving packed red blood cell (pRBC) transfusions of ≥100 mL/kg/y or ≥10 units/y in the previous 2 y were eligible. CD34+ cells were edited with CRISPR-Cas9 using a guide RNA specific for the binding site of BCL11A on the HBG1/2 promoter. Prior to RM-001 product infusion (day 0), patients received myeloablative conditioning with busulfan from day -7 to day -4. Patients were monitored for AEs and Hb expression. Results: As of the data cut of 28 Feb 2024, 16 TDT patients had been treated with RM-001 and followed for ≥3 months; 5 of these 16 patients had finished their 24-month follow-up. Eleven patients have the β0/β0 genotype and five the β0/β+ genotype. In addition to β-thalassemia, two patients had an α-deletion with the genotype --/αα. Efficacy: All patients received a single-dose intravenous infusion of RM-001 cells, and 5 of them had been followed for 24 months or longer. All patients achieved transfusion independence (TI, total Hb sustained ≥9 g/dL) (Figure 1). Patients demonstrated sustained and clinically meaningful increases in HbF levels from 4 months post-RM-001 infusion (Figure 2). Total hemoglobin in all patients was stable at 10-12 g/dL during the follow-up period.
Safety: The adverse events observed after RM-001 infusion were consistent with those typical of busulfan-based myeloablation. Allelic editing analysis at the 6-month visit showed that the on-target allelic editing frequency in bone marrow cells was 73.44% (64.65% to 84.6%, n=13). Summary/Conclusion: In this interim analysis, in which all 19 patients, aged 7.9 to 25 years, met the success criteria for the trial with respect to transfusion independence, autologous HBG1/2 promoter-modified CD34+ HSPC gene therapy produced an adequate amount of HbF as early as 2 months after infusion, led to near-normal hemoglobin levels, and allowed patients to remain transfusion-free through the reported period without product-related SAEs. After RM-001 infusion, high levels of HbF and of on-target editing in bone marrow cells were maintained. Submitted on behalf of the RM-001 Investigators.

Keywords: thalassemia, gene therapy, CRISPR/Cas9, HbF

Procedia PDF Downloads 17
326 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery

Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson

Abstract:

Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information to make regulations that resolve how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in regard to its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. Attempts are made in this study to determine flower cover using high resolution drone imagery to help assess the floral resource availability to pollinators in high elevation, tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted. 
An accuracy assessment was performed on the classification, yielding an 89% overall accuracy and a kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for the assessment of floral cover in large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will result in a quantifiable method of measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in the agricultural setting, aiding producers in determining the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
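The overall accuracy and kappa statistic reported above are standard derivations from a classification confusion matrix (observed agreement versus agreement expected by chance). A minimal sketch of the computation, with a toy matrix rather than the study's actual class counts:

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    m = np.asarray(confusion, float)
    n = m.sum()
    po = np.trace(m) / n                                 # observed agreement
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)
```

Applied to the validation samples of the four cover classes (blue, white, yellow flowers, and background), this is how figures such as 89% accuracy and kappa 0.855 are obtained.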

Keywords: honeybee, flower, pollinator, remote sensing

Procedia PDF Downloads 140
325 Association between Obstetric Factors with Affected Areas of Health-Related Quality of Life of Pregnant Women

Authors: Cinthia G. P. Calou, Franz J. Antezana, Ana I. O. Nicolau, Eveliny S. Martins, Paula R. A. L. Soares, Glauberto S. Quirino, Dayanne R. Oliveira, Priscila S. Aquino, Régia C. M. B. Castro, Ana K. B. Pinheiro

Abstract:

Introduction: As an integral part of the health-disease process, gestation is a period in which the social insertion of women can influence, in a positive or negative way, the course of the pregnancy-puerperal cycle. Thus, evaluating the quality of life of this population can redirect the implementation of innovative practices in the quest to make them more effective and real for the promotion of more humanized care. This study explores the associations between obstetric factors and the affected areas of health-related quality of life of pregnant women at habitual risk. Methods: This is a cross-sectional, quantitative study conducted in three public facilities and a private service that provide prenatal care in the city of Fortaleza, Ceará, Brazil. The sample consisted of 261 pregnant women who underwent low-risk prenatal care and were interviewed from September to November 2014. The collection instruments were a questionnaire containing socio-demographic and obstetric variables, in addition to the Brazilian version of the Mother Generated Index (MGI) scale, a specific and objective instrument consisting of a single sheet and subdivided into three stages. It allows identifying the areas of a pregnant woman's life that are most affected, which could go unnoticed by pre-formulated measurement instruments. The obstetric data, as well as the data from the application of the MGI scale, were compiled and analyzed using the Statistical Package for the Social Sciences (SPSS), version 20.0. After compilation, a descriptive analysis was carried out. Then, associations were tested between selected variables. The tests applied were the Pearson chi-square and Fisher's exact test, and the odds ratio was also calculated. These associations were considered statistically significant when the p (probability) value was less than or equal to a level of 5% (α = 0.05) in the tests performed.
Results: The variables that negatively affected the quality of life of the pregnant women and presented a significant association with pollakiuria were gestational age (p = 0.022) and parity (p = 0.048). Episodes of nausea and vomiting also showed a significant correlation with gestational age (p = 0.0001). Evaluating the cross-tabulation for stress, we observed a significant association with parity (p = 0.0001). In turn, emotional lability showed dependence on the type of delivery (p = 0.009). Conclusion: The health professionals involved in care for pregnant women can understand how the process of gestation is experienced, considering all its peculiar transformations, and meet their individual needs, stimulating their autonomy and power of choice, with a view to achieving a better health-related quality of life from the perspective of health promotion.
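For a 2x2 cross-tabulation of an obstetric factor against an affected quality-of-life area, the Pearson chi-square statistic and odds ratio used above have closed forms. The sketch below is a generic illustration with hypothetical cell counts, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]]: chi2 = n(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Odds ratio ad/bc for the same 2x2 table."""
    return (a * d) / (b * c)
```

The resulting chi-square value is compared against the critical value at α = 0.05 (3.841 for 1 degree of freedom) to obtain the significance decisions reported; Fisher's exact test replaces the chi-square when expected cell counts are small.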

Keywords: health-related quality of life, obstetric nursing, pregnant women, prenatal care

Procedia PDF Downloads 293
324 Examination of Indoor Air Quality of Naturally Ventilated Dwellings During Winters in Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as “the air quality within and around buildings, especially as it relates to the health and comfort of building occupants”. According to a 2021 report by the Energy Policy Institute at the University of Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. The urban population currently spends 90% of its time indoors, a scenario that raises concern for occupant health and well-being. The built environment can affect health directly and indirectly through immediate or long-term exposure to indoor air pollutants. Health effects associated with indoor air pollutants include eye/nose/throat irritation, respiratory diseases, heart disease, and even cancer. This study attempts to demonstrate the causal relationship between indoor air quality and its determining aspects. Detailed indoor air quality audits were conducted in residential buildings located in Kolkata, India in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most crucial due to unfavorable environmental conditions. While emissions typically remain constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer; consequently, it is breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors like traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it.
The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the most dominant middle-income group. Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses indoor air quality based on factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window to floor area ratio, windward or leeward side openings, the natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, urban housing, air pollution, natural ventilation, architecture, urban issues

Procedia PDF Downloads 120