Search results for: David Smith
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 881

191 Evaluation and Proposal for Improvement of the Flow Measurement Equipment in the Bellavista Drinking Water System of the City of Azogues

Authors: David Quevedo, Diana Coronel

Abstract:

The present article carries out an evaluation of the drinking water system in the Bellavista sector of the city of Azogues, with the purpose of determining the appropriate equipment to record the actual consumption flows of the inhabitants of that sector. Given that the study area is rural and economically disadvantaged, there is an urgent need to establish a control system for drinking water consumption in order to conserve and manage this vital resource in the best possible way, considering that the water source supplying the sector is approximately 9 km away. The research began with the collection of cartographic, demographic, and statistical data for the sector, determining the coverage area, the population projection, and a provision that guarantees a supply of drinking water sufficient to meet the needs of the sector's inhabitants. Using hydraulic modeling in EPANET 2.0, the United States Environmental Protection Agency's application for modeling drinking water distribution systems, theoretical hydraulic data were obtained and used to design and justify the most suitable measuring equipment for the Bellavista drinking water system. Assuming a minimum service life of 30 years for the drinking water system, future flow rates were calculated for the design of the macro-measuring device. After analyzing the network, it was evident that the Bellavista sector has an average consumption of 102.87 liters per person per day; however, since Ecuadorian regulations recommend a provision of 180 liters per person per day for the geographical conditions of the sector, this value was used for the analysis. With all the collected and calculated information, it was concluded that the Bellavista drinking water system needs a 125 mm electromagnetic macro-measuring device for the first three quinquennia of its service life and a 150 mm device for the following three quinquennia.
Equipment that provides real and reliable data will allow the water consumption of the sector's population to be controlled: readings from micro-measuring devices installed at the entrance of each household should match those of the macro-measuring device placed after the storage tank outlet, making it possible to detect losses due to leaks in the drinking water system or illegal connections.
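The flow-rate reasoning described above (a population projection over a 30-year design life, the 180 L/person/day provision, and sizing of the macro-meter from the resulting peak flow) can be sketched as follows. The population, growth rate, and peaking factor are illustrative assumptions, not the study's actual figures.

```python
def design_flow_lps(population_now, growth_rate, years, provision_lpd,
                    peak_factor=1.3):
    """Peak design flow in litres per second for a projected population."""
    future_pop = population_now * (1 + growth_rate) ** years  # geometric growth
    avg_flow_lps = future_pop * provision_lpd / 86400.0       # L/day -> L/s
    return avg_flow_lps * peak_factor

# Hypothetical figures: 2,000 inhabitants, 1.5%/year growth, 30-year life,
# and the 180 L/person/day provision recommended by Ecuadorian regulations.
flow = design_flow_lps(2000, 0.015, 30, 180)
```

The resulting peak flow would then be compared against meter capacity tables to choose between diameters such as the 125 mm and 150 mm devices discussed above.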

Keywords: macrometer, hydraulics, endowment, water

Procedia PDF Downloads 48
190 Delineation of Green Infrastructure Buffer Areas with a Simulated Annealing: Consideration of Ecosystem Services Trade-Offs in the Objective Function

Authors: Andres Manuel Garcia Lamparte, Rocio Losada Iglesias, Marcos Boullón Magan, David Miranda Barros

Abstract:

The biodiversity strategy of the European Union for 2030 mentions climate change as one of the key factors in biodiversity loss and considers green infrastructure one of the solutions to this problem. In this line, the European Commission has developed a green infrastructure strategy which commits member states to consider green infrastructure in their territorial planning. This green infrastructure is aimed at guaranteeing the provision of a wide range of ecosystem services to support biodiversity and human well-being by countering the effects of climate change. Yet few tools are available to delimit green infrastructure. The available ones consider the potential of the territory to provide ecosystem services. However, these methods usually aggregate several maps of ecosystem services potential without considering possible trade-offs. This can lead to excluding areas with a high potential for providing ecosystem services that have many trade-offs with other ecosystem services. In order to tackle this problem, a methodology is proposed to consider ecosystem services trade-offs in the objective function of a simulated annealing algorithm aimed at delimiting green infrastructure multifunctional buffer areas. To this end, the provision potential maps of the regulating ecosystem services considered to delimit the multifunctional buffer areas are clustered in groups, so that ecosystem services that create trade-offs are excluded from each group. The normalized provision potential maps of the ecosystem services in each group are added to obtain a potential map per group, which is normalized again. The potential maps for each group are then combined into a raster map that shows the highest provision potential value in each cell. The combined map is then used in the objective function of the simulated annealing algorithm. The algorithm is run both using the proposed methodology and considering the ecosystem services individually.
The results are analyzed with spatial statistics and landscape metrics to check the number of ecosystem services that the delimited areas produce, as well as their regularity and compactness. It has been observed that the proposed methodology increases the number of ecosystem services produced by delimited areas, improving their multifunctionality and increasing their effectiveness in preventing climate change impacts.
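A minimal sketch of the map-combination step described above, on toy raster data: each trade-off-free group's normalized potential maps are summed, the sum is re-normalized, and the groups are combined by taking the highest value per cell, which is the map the simulated annealing objective then consumes. The map contents and min-max normalization scheme are illustrative assumptions.

```python
def normalize(m):
    """Min-max normalize a 2D list-of-lists raster to [0, 1]."""
    flat = [v for row in m for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0 for _ in row] for row in m]
    return [[(v - lo) / (hi - lo) for v in row] for row in m]

def add_maps(maps):
    """Cell-wise sum of several rasters of the same shape."""
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*maps)]

def combine_groups(groups):
    """groups: list of lists of rasters, one inner list per trade-off-free group.
    Returns the per-cell maximum over the normalized per-group sums."""
    group_maps = [normalize(add_maps([normalize(m) for m in g])) for g in groups]
    return [[max(vals) for vals in zip(*rows)] for rows in zip(*group_maps)]
```

In the full method, this combined raster would be evaluated inside the annealing objective each time a candidate buffer area is perturbed.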

Keywords: ecosystem services trade-offs, green infrastructure delineation, multifunctional buffer areas, climate change

Procedia PDF Downloads 143
189 Controlled Digital Lending, Equitable Access to Knowledge and Future Library Services

Authors: Xuan Pang, Alvin L. Lee, Peggy Glatthaar

Abstract:

Libraries across the world have been an innovation engine of creativity and opportunity for many decades. The ongoing global epidemic and health crisis illuminate potential reforms, rethinking beyond traditional library operations and services. Controlled Digital Lending (CDL) is one of the emerging technologies libraries have used to deliver information digitally, support online learning and teaching, and make educational materials more affordable and more accessible. CDL became a popular term in the United States of America (USA) as a result of a white paper authored by Kyle K. Courtney (Harvard University) and David Hansen (Duke University). The paper laid the legal groundwork for CDL: Fair Use, the First Sale Doctrine, and Supreme Court rulings. Library professionals implemented this new technology to fulfill their users’ needs. Three libraries in the state of Florida (University of Florida, Florida Gulf Coast University, and Florida A&M University) started a conversation about how to develop strategies to make CDL work at each institution. This paper shares the stories of piloting and initiating a CDL program to ensure students have reliable, affordable access to the course materials they need to be successful. Additionally, this paper offers an overview of the emerging trends of Controlled Digital Lending in the USA and demonstrates the development of CDL platforms, policies, and implementation plans. The paper further discusses challenges and lessons learned and how each institution plans to sustain the program in future library services. The fundamental mission of the library is to provide users unrestricted access to library resources regardless of their physical location, disability, health status, or other circumstances.
The professional due diligence of librarians, as information professionals, is to make educational resources more affordable and accessible. CDL opens a new frontier of library services, giving library practice a mechanism to enhance users’ experience. Libraries should consider exploring this tool to distribute library resources in an effective and equitable way. This new methodology has potential benefits for libraries and end users.

Keywords: controlled digital lending, emerging technologies, equitable access, collaborations

Procedia PDF Downloads 109
188 Solar Photovoltaic Driven Air-Conditioning for Commercial Buildings: A Case of Botswana

Authors: Taboka Motlhabane, Pradeep Sahoo

Abstract:

The global demand for cooling has grown exponentially over the past century to meet economic development and social needs, accounting for approximately 10% of global electricity consumption. As global temperatures continue to rise, the demand for cooling and for heating, ventilation and air-conditioning (HVAC) equipment is set to rise with it. The increased use of HVAC equipment has significantly contributed to the growth of greenhouse gas (GHG) emissions, which feed the climate crisis, one of the biggest challenges faced by the current generation. The need to address emissions caused directly by HVAC equipment and by the electricity generated to meet the cooling or heating demand is ever more pressing. Currently, developed countries account for the largest cooling and heating demand; however, developing countries are anticipated to experience huge population growth over the next 10 years, resulting in a shift in energy demand. Developing countries, which are projected to account for nearly 60% of the world's GDP by 2030, are rapidly building infrastructure and economies to meet their growing needs and realize these projections. Cooling is a very energy-intensive process and can account for 20% to 75% of a building's energy use, depending on the building's function. Solar photovoltaic (PV) driven air-conditioning offers a cost-effective alternative for adoption in both residential and non-residential buildings to offset grid electricity, particularly in countries with high irradiation, such as Botswana. This research paper explores the potential of a grid-connected solar photovoltaic vapor-compression air-conditioning system for the Peter-Smith Herbarium at the Okavango Research Institute (ORI), University of Botswana campus in Maun, Botswana.
The herbarium plays a critical role in the collection and preservation of botanical data dating back over 100 years, with a pristine collection from the Okavango Delta, a UNESCO World Heritage Site, and serves as a reference and research site. Due to the herbarium’s specific needs, it operates throughout the day and year in an attempt to maintain a constant herbarium temperature of 16 °C. The herbarium model studied simulates a variable-air-volume HVAC system with a system rating of 30 kW. Simulation results show that the HVAC system accounts for 68.9% of the building's total electricity at 296 509.60 kWh annually. To offset the grid electricity, a 175.1 kWp nominal power rated PV system requiring 416 modules to match the required power, covering an area of 928 m², is used to meet the HVAC system's annual needs. An economic assessment using PVsyst found that an installation priced at average solar PV prices in Botswana totals 787 090.00 BWP, with annual operating costs of 30 500 BWP/year. With self-financing, the project is estimated to recoup its initial investment within 6.7 years. At an estimated project lifetime of 20 years, the Net Present Value is projected at 1 565 687.00 BWP with an ROI of 198.9%, and 74 070.67 tons of CO2 saved by the end of the project lifetime. This study investigates the performance of the HVAC system in meeting indoor air comfort requirements and the annual performance of the PV system; the building model was simulated using DesignBuilder software.
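The payback arithmetic reported above can be checked with a short sketch. The annual gross saving used below is an assumption back-calculated from the 6.7-year payback on a 787 090 BWP installation with 30 500 BWP/year operating costs, and the flat-rate NPV is a simplified illustration, not PVsyst's method.

```python
def simple_payback_years(capex, annual_saving, annual_opex):
    """Years to recoup capex from the net annual cash flow."""
    return capex / (annual_saving - annual_opex)

def npv(capex, annual_saving, annual_opex, years, rate):
    """Net present value of a constant net cash flow at a fixed discount rate."""
    net = annual_saving - annual_opex
    return -capex + sum(net / (1 + rate) ** t for t in range(1, years + 1))

# ~148,000 BWP/year gross saving reproduces roughly the reported 6.7-year payback.
payback = simple_payback_years(787_090, 148_000, 30_500)
```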

Keywords: vapor compression refrigeration, solar cooling, renewable energy, herbarium

Procedia PDF Downloads 107
187 Microscale Observations of a Gas Cell Wall Rupture in Bread Dough during Baking and Comparison with 2D/3D Finite Element Simulations of Stress Concentration

Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas

Abstract:

Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell walls (GCWs) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film of about the size of a starch granule. When such size is reached gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among experimental investigations carried out to assess GCW rupture, no experimental work was performed to observe the GCW rupture in the baking conditions at GCW scale. In addition, attempts to numerically understand GCW rupture are usually not performed at the GCW scale and often considered GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with the gluten/starch interactions and their impact on the mechanical behavior of dough film. However, stress concentration in GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macro-scope placed in front of a two-chamber device was used to observe the rupture of a real GCW of 200 micrometers in thickness. Special attention was paid in order to mimic baking conditions as far as possible (temperature, gas pressure and moisture). Various differences in pressure between both sides of GCW were applied and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and rheological moduli ratio on the mechanical behavior of GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic approach was performed to match the finite strain involved in GCW during baking. Stress concentration within GCW was identified. 
Simulated stress concentrations were discussed in the light of the GCW failures observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a great effect on the stress levels that can be reached in the GCW.

Keywords: dough, experimental, numerical, rupture

Procedia PDF Downloads 103
186 Saving the Decolonized Subject from Neglected Tropical Diseases: Public Health Campaign and Household-Centred Sanitation in Colonial West Africa, 1900-1960

Authors: Adebisi David Alade

Abstract:

In pre-colonial West Africa, the deadliness of the climate vis-à-vis malaria and other tropical diseases to Europeans turned the region into the “white man’s grave.” Thus, immediately after the partition of Africa in 1885, the mission civilisatrice and mise en valeur not only became a pretext for the establishment of colonial rule; from a medical point of view, the control and possible eradication of disease in the continent emerged as one of the first concerns of the European colonizers. Though geared toward making Africa exploitable, historical evidence suggests that some colonial Water, Sanitation and Hygiene (WASH) policies and projects reduced certain tropical diseases in some West African communities. Exploring some of these disease control interventions by way of historical revisionism, this paper challenges the orthodox interpretation of colonial sanitation and public health measures in West Africa. This paper critiques the deployment of race and class as analytical tools for the study of colonial WASH projects, an exercise which often reduces the complexity and ambiguity of colonialism to the binary of colonizer and colonized. Since West Africa presently ranks high among regions with Neglected Tropical Diseases (NTDs), it is imperative to decentre colonial racism and economic exploitation in African history in order to give room for Africans to see themselves in other ways. Far from resolving the problem of NTDs by fiat in the region, this study seeks to highlight important blind spots in African colonial history in an attempt to prevent post-colonial African leaders from throwing the baby out with the bathwater.
As scholars researching colonial sanitation and public health in the continent rarely examine its complex meaning and content, this paper submits that the outright demonization of colonial rule across space and time continues to build an ideological wall between the present and the past which not only inhibits fruitful borrowing from the colonial administration of West Africa, but also prevents a wide understanding of the challenges of WASH policies and projects in most West African states.

Keywords: colonial rule, disease control, neglected tropical diseases, WASH

Procedia PDF Downloads 160
185 Re-Examining the Distinction between Odour Nuisance and Health Impact: A Community’s Campaign against Landfill Gas Exposure in Shongweni, South Africa

Authors: Colin David La Grange, Lisa Frost Ramsay

Abstract:

Hydrogen sulphide (H2S) is a minor component of landfill gas, but significant in its distinct odorous quality and its association with landfill-related community complaints. The World Health Organisation (WHO) provides two guidelines for H2S: a health guideline of 150 µg/m3 as a 24-hour average, and a nuisance guideline of 7 µg/m3 as a 30-minute average. While this is a practical distinction for impact assessment, this paper highlights the danger of the apparent dualism between nuisance and health impact, particularly when it is used to dismiss community concerns about perceived health impacts at low concentrations of H2S, as in the case of a community's battle against the impacts of a landfill in Shongweni, KwaZulu-Natal, South Africa. Here community members reported, using a community-developed mobile phone application, a range of health symptoms that coincided with, or occurred subsequent to, odour events and localised H2S peaks. Local doctors also documented increased visits for symptoms of respiratory distress, eye and skin irritation, and stress after such odour events. Objectively measured H2S and other pollutant concentrations during these events, however, remained below the WHO health guideline. This case study highlights the importance of the physiological link between the experience of environmental nuisance and overall health and wellbeing, showing these to be less distinct than the WHO guidelines would suggest. The potential mechanisms of impact of an odorous plume, with key constituents at concentrations below traditional health thresholds, on psychologically and/or physiologically sensitised individuals are described. In the case of psychological sensitisation, previously documented mechanisms such as aversive conditioning and odour-triggered panic are relevant.
Physiological sensitisation to environmental pollutants, evident as a seemingly disproportionate physical (allergy-type) response to either low concentrations or short-duration exposures of a toxin or toxins, has been extensively examined but is still not well understood. The links between a heightened sensitivity to toxic compounds, the accumulation of some compounds in the body, and a pre-existing or associated immunological stress disorder are presented as a possible explanation.
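Applying the two WHO H2S guidelines quoted above to measured averages reduces to a pair of threshold checks, one per averaging window. A minimal sketch follows; the measurement values in the usage line are hypothetical, and a real assessment would first compute the 24-hour and 30-minute averages from raw monitoring data.

```python
WHO_H2S_HEALTH_24H = 150.0    # ug/m3, 24-hour average (health guideline)
WHO_H2S_NUISANCE_30MIN = 7.0  # ug/m3, 30-minute average (nuisance guideline)

def classify_h2s(avg_24h, avg_30min):
    """Compare measured averages against the two WHO H2S guidelines."""
    if avg_24h > WHO_H2S_HEALTH_24H:
        return "exceeds health guideline"
    if avg_30min > WHO_H2S_NUISANCE_30MIN:
        return "exceeds nuisance guideline only"
    return "below both guidelines"

# Hypothetical odour event: low 24-hour average but a sharp 30-minute peak,
# the pattern the Shongweni community reports describe.
status = classify_h2s(avg_24h=12.0, avg_30min=40.0)
```

The case study's point is precisely that "exceeds nuisance guideline only" should not be read as "no health relevance" for sensitised individuals.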

Keywords: immunological stress disorder, landfill odour, odour nuisance, odour sensitisation, toxin accumulation

Procedia PDF Downloads 101
184 The Routine Use of a Negative Pressure Incision Management System in Vascular Surgery: A Case Series

Authors: Hansraj Bookun, Angela Tan, Rachel Xuan, Linheng Zhao, Kejia Wang, Animesh Singla, David Kim, Christopher Loupos

Abstract:

Introduction: Incisional wound complications in vascular surgery patients represent a significant clinical and economic burden of morbidity and mortality. The objective of this study was to trial the feasibility of applying the Prevena negative pressure incision management system as a routine dressing in patients who had undergone arterial surgery. Conventionally, Prevena has been applied to groin incisions, but this study features applications to multiple wound sites, such as the thigh or major amputation stumps. Method: This was a cross-sectional, observational, single-centre case series of 12 patients who had undergone major vascular surgery. Their wounds were managed with the Prevena system, applied either intra-operatively or on the first post-operative day. Demographic and operative details were collated, as well as length of stay and complication rates. Results: There were 9 males (75%) with a mean age of 66 years, and the comorbid burden was as follows: ischaemic heart disease (92%), diabetes (42%), hypertension (100%), stage 4 or greater kidney impairment (17%), and current or ex-smoking (83%). The main indications were acute ischaemia (33%), claudication (25%), and gangrene (17%). There were single instances of an occluded popliteal artery aneurysm, diabetic foot infection, and rest pain. Half of the patients (50%) had hybrid operations with iliofemoral endarterectomies, patch arterioplasties, and further peripheral endovascular treatment. There were 4 complex arterial bypass operations and 2 major amputations. The mean length of stay was 17 ± 10 days, with a range of 4 to 35 days. A single complication, in the form of a lymphocoele, was encountered in the context of an iliofemoral endarterectomy and patch arterioplasty; this was managed conservatively. There were no deaths.
Discussion: Used in conjunction with safe vascular surgery, the Prevena wound management system keeps absolute wound complication rates low and remains a valuable adjunct in the treatment of vascular patients.

Keywords: wound care, negative pressure, vascular surgery, closed incision

Procedia PDF Downloads 105
183 Transcriptional Differences in B cell Subpopulations over the Course of Preclinical Autoimmunity Development

Authors: Aleksandra Bylinska, Samantha Slight-Webb, Kevin Thomas, Miles Smith, Susan Macwana, Nicolas Dominguez, Eliza Chakravarty, Joan T. Merrill, Judith A. James, Joel M. Guthridge

Abstract:

Background: Systemic Lupus Erythematosus (SLE) is an interferon-related autoimmune disease characterized by B cell dysfunction. One of its main hallmarks is a loss of tolerance to self-antigens, leading to increased levels of autoantibodies against nuclear components (ANAs). However, up to 20% of healthy ANA+ individuals will not develop clinical illness. SLE is more prevalent among women and minority populations (African Americans, Asian Americans, and Hispanics). Moreover, African Americans have a stronger interferon (IFN) signature and develop more severe symptoms. The exact mechanisms involved in ethnicity-dependent B cell dysregulation and in the progression from ANA+ healthy status to clinical disease remain unclear. Methods: Peripheral blood mononuclear cells (PBMCs) from African American (AA) and European American (EA) ANA- (n=12), ANA+ (n=12), and SLE (n=12) individuals were assessed by multimodal scRNA-Seq/CITE-Seq methods to examine differential gene signatures in specific B cell subsets. Library preparation was done with a 10X Genomics Chromium according to established protocols and sequenced on an Illumina NextSeq. The data were further analyzed for distinct cluster identification and differential gene signatures with the Seurat package in R, and pathway analysis was performed using Ingenuity Pathway Analysis (IPA). Results: Comparing all subjects, 14 distinct B cell clusters were identified using a community detection algorithm and visualized with Uniform Manifold Approximation and Projection (UMAP). The proportion of each of these clusters varied by disease status and ethnicity. Transitional B cells trended higher in ANA+ healthy individuals, especially in AA. A ribonucleoprotein-high population (elevated HNRNPH1, heterogeneous nuclear ribonucleoprotein; RNP-Hi) of proliferating naïve B cells was more prevalent in SLE patients, specifically in EA. An interferon-induced-protein-high population (IFIT-Hi) of naïve B cells is increased in EA ANA- individuals.
The proportions of the memory B cell and plasma cell clusters tend to be expanded in SLE patients. As anticipated, we observed a higher signature of cytokine-related pathways, especially interferon, in SLE individuals. Pathway analysis among AA individuals revealed an NRF2-mediated oxidative stress response signature in the transitional B cell cluster that was not seen in EA individuals. TNFR1/2 and Sirtuin signaling pathway genes were higher in AA IFIT-Hi naïve B cells, whereas they were not detected in EA individuals. Interferon signaling was observed in B cells of both ethnicities. Oxidative phosphorylation was found in age-related B cells (ABCs) in both ethnicities, whereas death receptor signaling was found in these cells only in EA patients. Interferon-related transcription factors were elevated in ABCs and IFIT-Hi naïve B cells in SLE subjects of both ethnicities. Conclusions: ANA+ healthy individuals have altered gene expression pathways in B cells that might drive apoptosis and subsequent clinical autoimmune pathogenesis. Increases in certain regulatory pathways may delay progression to SLE. Further, AA individuals have more elevated activation pathways that may make them more susceptible to SLE.
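The cluster-proportion comparison described above (the share of each B cell cluster within each disease-status/ethnicity group) reduces to simple counting once every cell carries a group label and a cluster label. The labels below are fabricated for illustration, not the study's data.

```python
from collections import Counter

def cluster_proportions(cells):
    """cells: iterable of (group, cluster) pairs.
    Returns {group: {cluster: share of that group's cells}}."""
    cells = list(cells)
    group_sizes = Counter(g for g, _ in cells)
    pair_counts = Counter(cells)
    return {
        g: {c: n / group_sizes[g]
            for (gg, c), n in pair_counts.items() if gg == g}
        for g in group_sizes
    }

# Toy labels mimicking the comparison: each cell tagged with a cohort and cluster.
props = cluster_proportions([
    ("AA ANA+", "Transitional"), ("AA ANA+", "Transitional"),
    ("AA ANA+", "Naive"), ("EA SLE", "RNP-Hi"),
])
```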

Keywords:

Procedia PDF Downloads 153
182 Exploring the Neural Mechanisms of Communication and Cooperation in Children and Adults

Authors: Sara Mosteller, Larissa K. Samuelson, Sobanawartiny Wijeakumar, John P. Spencer

Abstract:

This study was designed to examine how humans are able to teach and learn semantic information as well as cooperate in order to jointly achieve sophisticated goals. Specifically, we are measuring individual differences in how these abilities develop from foundational building blocks in early childhood. The current study adapts a paradigm for novel noun learning developed by Samuelson, Smith, Perry, and Spencer (2011) to a hyperscanning paradigm (Cui, Bryant, & Reiss, 2012). This project measures coordinated brain activity between a parent and child using simultaneous functional near-infrared spectroscopy (fNIRS) in pairs of 2.5-, 3.5-, and 4.5-year-old children and their parents. We are also separately testing pairs of adult friends. Children and parents, or adult friends, are seated across from one another at a table. The parent (in the developmental study) then teaches their child the names of novel toys. An experimenter then tests the child by presenting the objects in pairs and asking the child to retrieve one object by name. Children are asked to choose from both pairs of familiar objects and pairs of novel objects. In order to explore individual differences in cooperation with the same participants, each dyad plays a cooperative game of Jenga, in which their joint score is based on how many blocks they can remove from the tower as a team. A preliminary analysis of the noun-learning task showed that, when presented with 6 word-object mappings, children learned an average of 3 new words (50%) and that the number of objects learned by each child ranged from 2 to 4. Adults initially learned all of the new words but were variable in their later retention of the mappings, which ranged from 50% to 100%. We are currently examining differences in cooperative behavior during the Jenga game, including the time spent discussing each move before it is made.
Ongoing analyses are examining the social dynamics that might underlie the differences between words that were successfully learned and unlearned words for each dyad, as well as the developmental differences observed in the study. Additionally, the Jenga game is being used to better understand individual and developmental differences in social coordination during a cooperative task. At a behavioral level, the analysis maps periods of joint visual attention between participants during the word learning and the Jenga game, using head-mounted eye trackers to assess each participant’s first-person viewpoint during the session. We are also analyzing the coherence in brain activity between participants during novel word-learning and Jenga playing. The first hypothesis is that visual joint attention during the session will be positively correlated with both the number of words learned and with the number of blocks moved during Jenga before the tower falls. The next hypothesis is that successful communication of new words and success in the game will each be positively correlated with synchronized brain activity between the parent and child/the adult friends in cortical regions underlying social cognition, semantic processing, and visual processing. This study probes both the neural and behavioral mechanisms of learning and cooperation in a naturalistic, interactive and developmental context.

Keywords: communication, cooperation, development, interaction, neuroscience

Procedia PDF Downloads 230
181 Plasma Levels of Collagen Triple Helix Repeat Containing 1 (CTHRC1) as a Potential Biomarker in Interstitial Lung Disease

Authors: Rijnbout-St.James Willem, Lindner Volkhard, Scholand Mary Beth, Ashton M. Tillett, Di Gennaro Michael Jude, Smith Silvia Enrica

Abstract:

Introduction: Fibrosing lung diseases are characterized by changes in the lung interstitium and are classified based on etiology: 1) environmental/exposure-related, 2) autoimmune-related, 3) sarcoidosis, 4) interstitial pneumonia, and 5) idiopathic. Among the idiopathic forms of interstitial lung disease (ILD), idiopathic pulmonary fibrosis (IPF) is the most severe. The pathogenesis of IPF is characterized by an increased presence of proinflammatory mediators, resulting in alveolar injury; injury to the alveolar epithelium precipitates an increase in collagen deposition, subsequently thickening the alveolar septum and decreasing gas exchange. Identifying biomarkers implicated in the pathogenesis of lung fibrosis is key to developing new therapies and improving the efficacy of existing therapies. Transforming growth factor-beta 1 (TGF-β1), a mediator of tissue repair associated with WNT5A signaling, is partially responsible for fibroblast proliferation in ILD and is the target of pirfenidone, one of the antifibrotic therapies used for patients with IPF. Canonical TGF-β signaling is mediated by the proteins SMAD 2/3, which are, in turn, indirectly regulated by Collagen Triple Helix Repeat Containing 1 (CTHRC1). In this study, we tested the following hypotheses: 1) CTHRC1 is more elevated in the ILD cohort than in unaffected controls, and 2) CTHRC1 is differently expressed among ILD types. Material and Methods: CTHRC1 levels were measured by ELISA in 171 plasma samples from the deidentified University of Utah ILD cohort. The data represent a cohort of 131 ILD-affected participants and 40 unaffected controls.
CTHRC1 samples were categorized by a pulmonologist based on affectation status and disease subtype: IPF (n = 45), sarcoidosis (n = 4), nonspecific interstitial pneumonia (n = 16), hypersensitivity pneumonitis (n = 7), interstitial pneumonia (n = 13), autoimmune (n = 15), other ILD, a category that includes undifferentiated ILD diagnoses (n = 31), and unaffected controls (n = 40). We conducted a single-factor ANOVA of plasma CTHRC1 levels to test whether the difference between affected and non-affected participants is statistically significant. In-silico analysis was performed with Ingenuity Pathway Analysis® to characterize the role of CTHRC1 in the pathway of lung fibrosis. Results: Statistical analyses of the plasma samples indicate that the average CTHRC1 level is significantly higher in ILD-affected participants than in controls, with autoimmune ILD higher than the other ILD types, thus supporting our hypotheses. In-silico analyses show that CTHRC1 indirectly activates and phosphorylates SMAD3, which in turn cross-regulates TGF-β1. CTHRC1 may also regulate the expression and transcription of TGF-β1 via WNT5A and its regulatory relationship with CTNNB1. Conclusion: In-silico pathway analyses demonstrate that CTHRC1 may be an important biomarker in ILD. Analysis of plasma samples indicates that CTHRC1 expression is positively associated with ILD affectation, with autoimmune ILD having the highest average CTHRC1 values. While characterizing CTHRC1 levels in plasma can help to differentiate among ILD types and predict response to pirfenidone, the extent to which plasma CTHRC1 level is a function of ILD severity or chronicity is unknown.
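The single-factor ANOVA mentioned above can be sketched in a few lines: partition the total variability into between-group and within-group sums of squares and form the F statistic. This is the textbook computation on illustrative numbers, not the study's plasma data.

```python
def one_way_anova_f(groups):
    """F statistic for a single-factor ANOVA over lists of measurements."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative: hypothetical CTHRC1-like readings for two groups.
f_stat = one_way_anova_f([[1.0, 2.0], [3.0, 4.0]])
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom to obtain a p-value.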

Keywords: interstitial lung disease, CTHRC1, idiopathic pulmonary fibrosis, pathway analyses

Procedia PDF Downloads 168
180 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, to better view SAR data in an image domain comparable to what a human would view and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity value of each pulse and summing over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase-history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. 
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
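The per-point accumulation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' GPU implementation; the function name, the nearest-bin range lookup, and all parameters are illustrative assumptions:

```python
import numpy as np

def backproject(voxels, platform_positions, rc_data, r0, dr, wavelength):
    """Sum the range-compressed contribution of every pulse at every 3D point.

    voxels:             (V, 3) array of 3D reference points
    platform_positions: (P, 3) antenna position per pulse
    rc_data:            (P, B) complex range-compressed samples per pulse
    r0, dr:             range of bin 0 and bin spacing (same units as positions)
    wavelength:         carrier wavelength used for the phase correction
    """
    image = np.zeros(len(voxels), dtype=complex)
    for p, pos in enumerate(platform_positions):
        r = np.linalg.norm(voxels - pos, axis=1)           # pulse-to-voxel range
        bins = np.rint((r - r0) / dr).astype(int)          # nearest range bin
        valid = (bins >= 0) & (bins < rc_data.shape[1])
        # Phase-align each sample for two-way propagation before summing
        image[valid] += (rc_data[p, bins[valid]]
                         * np.exp(1j * 4 * np.pi * r[valid] / wavelength))
    return np.abs(image)                                   # per-voxel reflectivity
```

Because each voxel's accumulation is independent of every other voxel, the body of this loop maps directly onto one GPU thread per 3D point.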

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 48
179 Developing Social Responsibility Values in Nascent Entrepreneurs through Role-Play: An Explorative Study of University Students in the United Kingdom

Authors: David W. Taylor, Fernando Lourenço, Carolyn Branston, Paul Tucker

Abstract:

An increasing number of students at universities in the United Kingdom are engaging in entrepreneurship role-play to explore business start-up as a career alternative to employment. These role-play activities have been shown to have a positive influence on students’ entrepreneurial intentions. Universities also play a role in developing graduates’ awareness of social responsibility. However, social responsibility is often missing from these entrepreneurship role-plays. It is important that these role-play activities include the development of values that support social responsibility, in line with those running hybrid, humane, and sustainable enterprises, and do not simply focus on profit. The Young Enterprise (YE) Start-Up programme is an example of a role-play activity that is gaining in popularity amongst United Kingdom universities seeking ways to give students insight into business start-up. A Post-92 university in the North-West of England has adapted the traditional YE Directorship roles (e.g., Marketing Director, Sales Director) by including a Corporate Social Responsibility (CSR) Director in all of the team-based YE Start-Up businesses. The aim of introducing this Directorship was to observe whether such a role would help create a more socially responsible value system within each company and, in turn, shape business decisions. This paper investigates role-play as a tool to help enterprise educators develop socially responsible attitudes and values in nascent entrepreneurs. A mixed qualitative methodology approach has been used, which includes interviews, role-play, and reflection, to help students develop positive value characteristics through the exploration of unethical and selfish behaviours. 
The initial findings indicate that role-play helped CSR Directors learn and gain insights into the importance of corporate social responsibility, influenced the values and actions of their YE Start-Ups, and increased the likelihood that, if the participants were to launch a business post-graduation, the intent would be for the business to be socially responsible. These findings help inform educators on how to develop socially responsible nascent entrepreneurs within a traditionally profit-orientated business model.

Keywords: student entrepreneurship, young enterprise, social responsibility, role-play, values

Procedia PDF Downloads 125
178 Pakistan’s Counterinsurgency Operations: A Case Study of Swat

Authors: Arshad Ali

Abstract:

The Taliban insurgency in Swat, which started apparently as a social movement in 2004, transformed into an anti-Pakistan Islamist insurgency by joining hands with the Tehrik-e-Taliban Pakistan (TTP) upon its formation in 2007. It quickly spread beyond Swat, by 2009 making Swat the second stronghold of the TTP after FATA. It prompted the Pakistan military to launch a full-scale counterinsurgency operation code-named Rah-i-Rast to regain control of Swat. Operation Rah-i-Rast was successful not only in restoring the writ of the state but, more importantly, in creating a consensus against the spread of the Taliban insurgency in Pakistan at the political, social, and military levels. This operation became a test case for the civilian government and military in seeking a sustainable solution to the TTP insurgency in the north-west of Pakistan. This study analyzes why the counterinsurgency operation Rah-i-Rast was successful and why the previous operations failed. The study also explores factors which created consensus against the Taliban insurgency at the political and social levels, as well as reasons which hindered such a consensual approach in the past. The study argues that the previous initiatives failed due to various factors, including the Pakistan army’s lack of a comprehensive counterinsurgency model, weak political will and public support, and the state’s negligence. Also, the initial counterinsurgency policies were ad-hoc in nature, fluctuating between military operations and peace deals. After continuous failure, the military revisited its approach to counterinsurgency in Operation Rah-i-Rast. The security forces learnt from their past experiences and developed a pragmatic counterinsurgency model: ‘clear, hold, build, and transfer.’ The military also adopted a population-centric approach to provide security to the local people. This case study of Swat evaluates the strengths and weaknesses of Pakistan's counterinsurgency operations as well as peace agreements. 
It analyzes Operation Rah-i-Rast in the light of David Galula’s model of counterinsurgency. Unlike the existing literature, the study underscores the bottom-up approach adopted by Pakistan’s military and government, engaging the local population to sustain post-operation stability in Swat. More specifically, the study emphasizes the hybrid counterinsurgency model ‘clear, hold, build, and transfer’ in Swat.

Keywords: insurgency, counterinsurgency, clear, hold, build, transfer

Procedia PDF Downloads 331
177 Explosion Mechanics of Aluminum Plates Subjected to the Combined Effect of Blast Wave and Fragment Impact Loading: A Multicase Computational Modeling Study

Authors: Atoui Oussama, Maazoun Azer, Belkassem Bachir, Pyl Lincy, Lecompte David

Abstract:

For many decades, researchers have focused on understanding the dynamic behavior of different structures and materials subjected to fragment impact or blast loads separately. Explosion mechanics and impact physics studies dealing with the numerical modeling of the response of protective structures under the synergistic effect of a blast wave and the impact of fragments are quite limited in the literature. This article numerically evaluates the nonlinear dynamic behavior and damage mechanisms of EN AW-1050A-H24 aluminum plates under different combined loading scenarios, varied by the sequence of the applied loads, using the commercial software LS-DYNA. On the one hand, with respect to the terminal ballistics investigations, a Lagrangian (LAG) formulation is used to evaluate the different failure modes of the target material in the case of a fragment impact. On the other hand, with respect to the blast analysis, an Arbitrary Lagrangian-Eulerian (ALE) formulation is considered to study the fluid-structure interaction (FSI) of the shock wave and the plate in the case of blast loading. Four different loading scenarios are considered: (1) blast loading only, (2) fragment impact only, (3) blast loading followed by a fragment impact, and (4) a fragment impact followed by blast loading. From the numerical results, it was observed that when the impact load is applied to the plate prior to the blast load, the plate suffers more severe damage due to the hole enlargement phenomenon and the effects of crack propagation on the circumference of the damaged zone. Moreover, it was found that the hole from the fragment impact loading was enlarged to about three times the diameter of the projectile. The validation of the proposed computational model is based in part on previous experimental data obtained by the authors and in part on experimental data obtained from the literature. 
A good correspondence between the numerical and experimental results is found.

Keywords: computational analysis, combined loading, explosion mechanics, hole enlargement phenomenon, impact physics, synergistic effect, terminal ballistic

Procedia PDF Downloads 156
176 Biophysical Assessment of the Ecological Condition of Wetlands in the Parkland and Grassland Natural Regions of Alberta, Canada

Authors: Marie-Claude Roy, David Locky, Ermias Azeria, Jim Schieck

Abstract:

It is estimated that up to 70% of the wetlands in the Parkland and Grassland natural regions of Alberta have been lost due to various land-use activities. These losses include the ecosystem functions and services they once provided. Those wetlands remaining are often embedded in a matrix of human-modified habitats, and despite efforts taken to protect them, the effects of land use on wetland condition and function remain largely unknown. We used biophysical field data and remotely sensed human-footprint data collected at 322 open-water wetlands by the Alberta Biodiversity Monitoring Institute (ABMI) to evaluate the impact of surrounding land use on the physico-chemical characteristics and plant functional traits of wetlands. Eight physico-chemical parameters were assessed: wetland water depth, water temperature, pH, salinity, dissolved oxygen, total phosphorus, total nitrogen, and dissolved organic carbon. Three plant functional traits were evaluated: 1) origin (native and non-native), 2) life history (annual, biennial, and perennial), and 3) habitat requirements (obligate-wetland and obligate-upland). Land-use intensity was quantified within a 250-meter buffer around each wetland. Ninety-nine percent of wetlands in the Grassland and Parkland regions of Alberta have land-use activities in their surroundings, with most being agriculture-related. Total phosphorus in wetlands increased with the cover of surrounding agriculture, while salinity, total nitrogen, and dissolved organic carbon were positively associated with the degree of soft-linear (e.g., pipelines, trails) land uses. The abundance of non-native and annual/biennial plants increased with the amount of agriculture, while urban-industrial land use lowered the abundance of native, perennial, and obligate-wetland plants. Our study suggests that the land-use types surrounding wetlands affect the physicochemical and biological conditions of wetlands. 
This research suggests that reducing human disturbances through reclamation of wetland buffers may enhance the condition and function of wetlands in agricultural landscapes.

Keywords: wetlands, biophysical assessment, land use, grassland and parkland natural regions

Procedia PDF Downloads 302
175 Occupational Challenges and Adjustment Strategies of Internally Displaced Persons in Abuja, Nigeria

Authors: David Obafemi Adebayo

Abstract:

Occupational challenges have been identified as among the factors that could cripple the set goals and life ambitions of an Internally Displaced Person (IDP). The main thrust of this study is, therefore, to explore the use of life support/adjustment strategies with a view to repositioning internally displaced persons in Nigeria to revamp their goals and achieve their life-long ambitions. The study investigates whether there exists any significant difference in the occupational challenges and adjustment strategies of IDPs on the basis of gender, religion, years of working experience, and educational qualification. The study, descriptive of the survey type, adopted a multi-stage sampling technique to select a minimum of 400 internally displaced persons from IDP camps in Yimitu Village, Waru District in the Federal Capital Territory (FCT), Abuja. The research instrument used for the study was a researcher-designed questionnaire entitled “Questionnaire on Occupational Challenges and Adjustment Strategy of Internally Displaced Persons (QOCASIDPs)”. Eight null hypotheses were tested at the 0.05 alpha level of significance. Frequency counts and percentages, means and rank order, t-test, Analysis of Variance (ANOVA), and Duncan Multiple Range Test (DMRT) (where applicable) were employed to analyze the data. The study determined that the occupational challenges of internally displaced persons included loss of employment, vocational discrimination, marginalization by employers of labour, isolation due to joblessness, and lack of occupational freedom. The results were discussed in line with the findings. The study established the place of notable adjustment strategies adopted by internally displaced persons, such as engaging in petty trading, sourcing soft loans from NGOs, setting up small-scale businesses in groups, acquiring new skills, and engaging in further education, among others. 
The study established that there was no significant difference in the occupational challenges of IDPs on the basis of years of working experience and highest educational qualifications, though there was significant difference on the basis of gender as well as religion. Based on the findings of the study, recommendations were made.

Keywords: internally displaced persons, occupational challenges, adjustment strategies, Abuja-Nigeria

Procedia PDF Downloads 337
174 MAOD Is Estimated by Sum of Contributions

Authors: David W. Hill, Linda W. Glass, Jakob L. Vingren

Abstract:

Maximal accumulated oxygen deficit (MAOD), the gold standard measure of anaerobic capacity, is the difference between the oxygen cost of exhaustive severe-intensity exercise and the accumulated oxygen consumption (O2; mL·kg–1). In theory, MAOD can be estimated as the sum of independent estimates of the phosphocreatine and glycolysis contributions, which we refer to as PCr+glycolysis. Purpose: The purpose was to test the hypothesis that PCr+glycolysis provides a valid measure of anaerobic capacity in cycling and running. Methods: The participants were 27 women (mean ± SD, age 22 ± 1 y, height 165 ± 7 cm, weight 63.4 ± 9.7 kg) and 25 men (age 22 ± 1 y, height 179 ± 6 cm, weight 80.8 ± 14.8 kg). They performed two exhaustive cycling and running tests, at speeds and work rates that were tolerable for ~5 min. The rate of oxygen consumption (VO2; mL·kg–1·min–1) was measured in warm-ups, in the tests, and during 7 min of recovery. Finger-prick blood samples obtained after exercise were analysed to determine peak blood lactate concentration (PeakLac). The VO2 response in exercise was fitted to a model with a fast ‘primary’ phase followed by a delayed ‘slow’ component, from which the accumulated O2 and the excess O2 attributable to the slow component were calculated. The VO2 response in recovery was fitted to a model with a fast phase and slow component, sharing a common time delay. Oxygen demand (in mL·kg–1·min–1) was determined by extrapolation from steady-state VO2 in warm-ups; the total oxygen cost (in mL·kg–1) was determined by multiplying this demand by time to exhaustion and adding the excess O2; then, MAOD was calculated as total oxygen cost minus accumulated O2. The phosphocreatine contribution (area under the fast phase of the post-exercise VO2) and the glycolytic contribution (converted from PeakLac) were summed to give PCr+glycolysis. 
There was no interaction effect involving sex, so values for anaerobic capacity were examined using a two-way ANOVA, with repeated measures across method (PCr+glycolysis vs. MAOD) and mode (cycling vs. running). Results: There was a significant effect only for exercise mode. There was no difference between MAOD and PCr+glycolysis: values were 59 ± 6 mL·kg–1 and 61 ± 8 mL·kg–1 in cycling and 78 ± 7 mL·kg–1 and 75 ± 8 mL·kg–1 in running. Discussion: PCr+glycolysis is a valid measure of anaerobic capacity in cycling and running, and it is as valid for women as for men.
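The bookkeeping described above (demand times time to exhaustion, plus the slow-component excess, minus accumulated consumption; and the sum of the two anaerobic contributions) can be sketched in a few lines. The numbers below are illustrative only, not the study's measurements:

```python
def maod(demand, time_to_exhaustion, excess_o2, accumulated_o2):
    """Maximal accumulated oxygen deficit (mL/kg).

    demand:             extrapolated oxygen demand (mL/kg/min)
    time_to_exhaustion: duration of the exhaustive test (min)
    excess_o2:          O2 attributable to the slow component (mL/kg)
    accumulated_o2:     measured accumulated O2 consumption (mL/kg)
    """
    total_oxygen_cost = demand * time_to_exhaustion + excess_o2
    return total_oxygen_cost - accumulated_o2

def pcr_plus_glycolysis(pcr_contribution, glycolytic_contribution):
    """Anaerobic capacity as the sum of independent estimates (mL/kg)."""
    return pcr_contribution + glycolytic_contribution

# Illustrative values only, not data from the abstract:
estimate_a = maod(50.0, 5.0, 5.0, 195.0)        # 250 + 5 - 195 = 60.0 mL/kg
estimate_b = pcr_plus_glycolysis(25.0, 36.0)    # 61.0 mL/kg
```

The hypothesis tested in the abstract is that the two estimates agree, as they do for these made-up inputs.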

Keywords: alactic, anaerobic, cycling, ergometer, glycolysis, lactic, lactate, oxygen deficit, phosphocreatine, running, treadmill

Procedia PDF Downloads 111
173 Rhizoremediation of Contaminated Soils in Sub-Saharan Africa: Experimental Insights of Microbe Growth and Effects of Paspalum Spp. for Degrading Hydrocarbons in Soils

Authors: David Adade-Boateng, Benard Fei Baffoe, Colin A. Booth, Michael A. Fullen

Abstract:

Remediation of diesel fuel, oil, and grease in contaminated soils obtained from a mine site in Ghana is explored using rhizoremediation technology with different levels of nutrient amendments (i.e., N (nitrogen) in compost (0.2, 0.5, and 0.8%), urea (0.2, 0.5, and 0.8%), and topsoil (0.2, 0.5, and 0.8%)) for a native species. A Ghanaian native grass species, Paspalum spp. from the Poaceae family, found across Sub-Saharan Africa, was selected following the development of essential and desirable growth criteria. Vegetative parts of the species were subjected to ten treatments in a Randomized Complete Block Design (RCBD) with three replicates. The plant-associated microbial community was examined in Paspalum spp. An assessment of the influence of Paspalum spp. on the abundance and activity of micro-organisms in the rhizosphere revealed a build-up of microbial communities over a three-month period. This was assessed using the MPN method, which showed rhizospheric samples from the treatments were significantly different (P < 0.05). Multiple comparisons showed how microbial populations built up in the rhizosphere for the different treatments. Treatments G (0.2% compost), H (0.5% compost), and I (0.8% compost) performed significantly better than the other treatments, while treatments D (0.2% topsoil) and F (0.8% topsoil) showed no significant effect. Furthermore, treatments A (0.2% urea), B (0.5% urea), C (0.8% urea), and E (0.5% topsoil) performed similarly. Residual diesel and oil concentrations (as total petroleum hydrocarbons (TPH) and oil and grease) were measured using infra-red spectroscopy and gravimetric methods, respectively. The presence of a single species successfully enhanced the removal of hydrocarbons from soil. Paspalum spp. subjected to compost levels (0.5% and 0.8%) and topsoil levels (0.5% and 0.8%) showed significantly lower residual hydrocarbon concentrations compared to those treated with urea. 
A strong relationship between the abundance of hydrocarbon-degrading micro-organisms in the rhizosphere and hydrocarbon biodegradation was demonstrated for rhizospheric samples with treatments G (0.2% compost), H (0.5% compost), and I (0.8% compost) (P < 0.001). Amendment at the 0.8% compost (N) level can improve application effectiveness. These findings have wide-reaching implications for the environmental management of soils contaminated by hydrocarbons in Sub-Saharan Africa. However, it is necessary to further investigate the in situ rhizoremediation potential of Paspalum spp. at the field scale.

Keywords: rhizoremediation, microbial population, rhizospheric sample, treatments

Procedia PDF Downloads 283
172 Micelles Made of Pseudo-Proteins for Solubilization of Hydrophobic Biologicals

Authors: Sophio Kobauri, David Tugushi, Vladimir P. Torchilin, Ramaz Katsarava

Abstract:

Hydrophobically/hydrophilically modified functional polymers are of high interest in modern biomedicine due to their ability to solubilize water-insoluble or poorly soluble (hydrophobic) drugs. Among the many approaches being developed in this direction, one of the most effective is the use of polymeric micelles (PMs) (micelles formed by amphiphilic block-copolymers) for the solubilization of hydrophobic biologicals. For therapeutic purposes, PMs are required to be stable and biodegradable, although quite a few amphiphilic block-copolymers capable of forming stable micelles with good solubilization properties have been described. For obtaining micelle-forming block-copolymers, polyethylene glycol (PEG) derivatives are desirable as the hydrophilic shell, because PEG represents the most popular biocompatible hydrophilic block and various hydrophobic blocks (polymers) can be attached to it. The construction of the hydrophobic core, however, given the complex requirements on micelle structure, remains the main problem for nanobioengineers. Considering the above, our research goal was to obtain biodegradable micelles for the solubilization of hydrophobic drugs and biologicals. For this purpose, we used biodegradable polymers, pseudo-proteins (PPs) (synthesized from naturally occurring amino acids and other non-toxic building blocks, such as fatty diols and dicarboxylic acids), as the hydrophobic core, since these polymers have shown reasonable biodegradation rates and excellent biocompatibility. In the present study, we used the hydrophobic amino acid L-phenylalanine (MW 4000-8000 Da) instead of L-leucine. Amino-PEG (MW 2000 Da) was used as the hydrophilic fragment for constructing the micelles. The molecular weight of the PP (the hydrophobic core of the micelle) was regulated by varying the ratios of the monomers used. Micelles were obtained by dissolving the synthesized amphiphilic polymer in water. 
The micelle-forming property was tested using dynamic light scattering (Malvern Zetasizer Nano ZS ZEN3600). The study showed that the obtained amphiphilic block-copolymers form stable neutral micelles 100 ± 7 nm in size at a 10 mg/mL concentration, which is considered an optimal range for pharmaceutical micelles. These preliminary data allow us to conclude that the obtained micelles are suitable for the delivery of poorly water-soluble drugs and biologicals.

Keywords: amino acid – L-phenylalanine, pseudo-proteins, amphiphilic block-copolymers, biodegradable micelles

Procedia PDF Downloads 114
171 Improving Rural Access to Specialist Emergency Mental Health Care: Using a Time and Motion Study in the Evaluation of a Telepsychiatry Program

Authors: Emily Saurman, David Lyle

Abstract:

In Australia, a well-serviced rural town might have a psychiatrist visit once a month, with more frequent visits from a psychiatric nurse, but many towns have no resident access to mental health specialists. Access to specialist care would not only reduce patient distress and benefit outcomes but also facilitate the effective use of limited resources. The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) was developed to improve access to specialist emergency mental health care in rural and remote communities using telehealth technologies. However, there has been no current benchmark to gauge program efficiency or capacity, or to determine whether the program activity is justifiably sufficient. The evaluation of MHEC-RAP used multiple methods and applied a modified theory of access to assess the program and its aim of improved access to emergency mental health care. This was the first evaluation of a telepsychiatry service to include a time and motion study design examining program time expenditure, efficiency, and capacity. The time and motion study analysis was combined with an observational study of the program structure and function to assess the balance between program responsiveness and efficiency. Previous program studies have demonstrated that MHEC-RAP has improved access and is used and effective. The findings from the time and motion study suggest that MHEC-RAP has the capacity to manage increased activity within the current model structure without loss of responsiveness or efficiency in the provision of care. Enhancing program responsiveness and efficiency will also support a claim of the program’s value for money. MHEC-RAP is a practical telehealth solution for improving access to specialist emergency mental health care. 
The findings from this evaluation have already attracted the attention of other regions in Australia interested in implementing emergency telepsychiatry programs and are now informing the progressive establishment of mental health resource centres in rural New South Wales. Like MHEC-RAP, these centres will provide rapid, safe, and contextually relevant assessments and advice to support local health professionals to manage mental health emergencies in the smaller rural emergency departments. Sharing the application of this methodology and research activity may help to improve access to and future evaluations of telehealth and telepsychiatry services for others around the globe.

Keywords: access, emergency, mental health, rural, time and motion

Procedia PDF Downloads 208
170 Roboweeder: A Robotic Weeds Killer Using Electromagnetic Waves

Authors: Yahoel Van Essen, Gordon Ho, Brett Russell, Hans-Georg Worms, Xiao Lin Long, Edward David Cooper, Avner Bachar

Abstract:

Weeds reduce farm and forest productivity, invade crops, smother pastures, and some can harm livestock. Farmers need to spend a significant amount of money to control weeds by means of biological, chemical, cultural, and physical methods. To address the global agricultural labor shortage and remove poisonous chemicals, a fully autonomous, eco-friendly, and sustainable weeding technology was developed. This takes the form of a weeding robot, ‘Roboweeder’. Roboweeder includes a four-wheel-drive self-driving vehicle, a 4-DOF robotic arm mounted on top of the vehicle, an electromagnetic wave generator (magnetron) mounted on the “wrist” of the robotic arm, 48 V battery packs, and a control/communication system. Cameras are mounted on the front and two sides of the vehicle. Using image processing and recognition, distinct types of weeds are detected before being eliminated. The electromagnetic wave technology is applied to heat the individual weeds and clusters dielectrically, causing them to wilt and die. The 4-DOF robotic arm was modeled mathematically based on its structure/mechanics, each joint’s load, the characteristics of the brushless DC motors and worm gears, forward kinematics, and inverse kinematics. A Proportional-Integral-Derivative (PID) control algorithm is used to control the robotic arm’s motion to ensure the waveguide aperture points at the detected weeds. GPS and machine vision are used to traverse the farm and avoid obstacles without the need for supervision. A Roboweeder prototype has been built. Multiple test trials show that Roboweeder is able to detect, point at, and kill the pre-defined weeds successfully, although further improvements are needed, such as reducing the weed-killing time and developing a new waveguide with a smaller aperture to avoid killing surrounding crops. 
This technology replaces tedious, time-consuming, and expensive weeding processes and allows farmers to grow more, go organic, and eliminate operational headaches. A patent for this technology is pending.
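The PID control loop named above can be sketched in a few lines. The gains, time step, and single-joint plant below (modeled as a pure integrator) are illustrative assumptions, not Roboweeder's actual parameters:

```python
def pid_controller(kp, ki, kd):
    """Return a stateful PID step function mapping (error, dt) -> control output."""
    state = {"integral": 0.0, "prev_error": None}

    def step(error, dt):
        state["integral"] += error * dt                      # I term accumulates
        if state["prev_error"] is None:
            derivative = 0.0                                 # no history on first call
        else:
            derivative = (error - state["prev_error"]) / dt  # D term from finite difference
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Drive a toy joint angle (plant: angle' = u) toward a 1.0 rad setpoint
controller = pid_controller(kp=2.0, ki=0.5, kd=0.1)
angle, dt = 0.0, 0.01
for _ in range(2000):
    angle += controller(1.0 - angle, dt) * dt
```

After the simulated 20 s the joint angle settles very close to the setpoint, which is the behavior the arm controller needs to hold the waveguide aperture on a detected weed.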

Keywords: autonomous navigation, machine vision, precision heating, sustainable and eco-friendly

Procedia PDF Downloads 207
169 Impact of Ocean Acidification on Gene Expression Dynamics during Development of the Sea Urchin Species Heliocidaris erythrogramma

Authors: Hannah R. Devens, Phillip L. Davidson, Dione Deaker, Kathryn E. Smith, Gregory A. Wray, Maria Byrne

Abstract:

Marine invertebrate species with calcifying larvae are especially vulnerable to ocean acidification (OA) caused by rising atmospheric CO₂ levels. Acidic conditions can delay development, suppress metabolism, and decrease the availability of carbonate ions in the ocean environment for skeletogenesis. These stresses often result in increased larval mortality, which may lead to significant ecological consequences, including alterations to larval settlement, population distribution, and genetic connectivity. Importantly, many of these physiological and developmental effects are caused by genetic and molecular-level changes. Although many studies have examined the effect of near-future oceanic pH levels on gene expression in marine invertebrates, little is known about the impact of OA on gene expression in a developmental context. Here, we performed mRNA-sequencing to investigate the impact of environmental acidity on gene expression across three developmental stages in the sea urchin Heliocidaris erythrogramma. We collected RNA from gastrula, early larva, and 1-day post-metamorphic juvenile sea urchins cultured at present-day and predicted future oceanic pH levels (pH 8.1 and 7.7, respectively). We assembled an annotated reference transcriptome encompassing development from egg to ten days post-metamorphosis by combining these data with datasets from two previous developmental transcriptomic studies of H. erythrogramma. Differential gene expression and time-course analyses between pH conditions revealed significant alterations to developmental transcription that are potentially associated with pH stress. Consistent with previous investigations, genes involved in biomineralization and ion transport were significantly upregulated under acidic conditions. Differences in gene expression between the two pH conditions became more pronounced post-metamorphosis, suggesting a development-dependent effect of OA on gene expression. 
Furthermore, many differences in gene expression later in development appeared to result from broad downregulation at pH 7.7: of 539 genes differentially expressed at the juvenile stage, 519 were expressed at lower levels in the acidic condition. Time-course comparisons between pH 8.1 and 7.7 samples also demonstrated that over 500 genes were expressed at lower levels in pH 7.7 samples throughout development. Of the genes exhibiting stage-dependent expression-level changes, over 15% diverged from the expected temporal pattern of expression in the acidic condition. Through these analyses, we identify novel candidate genes involved in development, metabolism, and transcriptional regulation that are possibly affected by pH stress. Our results demonstrate that pH stress significantly alters gene expression dynamics throughout development. The large number of genes differentially expressed between pH conditions in juveniles relative to earlier stages may be attributed to the effects of acidity on transcriptional regulation, as a greater proportion of mRNA at this later stage has been nascently transcribed rather than maternally loaded. Also, the overall downregulation of many genes in the acidic condition suggests that OA-induced developmental delay manifests as suppressed mRNA expression, possibly from lower transcription rates or increased mRNA degradation in the acidic environment. Further studies will be necessary to determine in greater detail the extent of OA effects on early developing marine invertebrates.

Keywords: development, gene expression, ocean acidification, RNA-sequencing, sea urchins

Procedia PDF Downloads 130
168 Using the Flight Heritage from >150 Electric Propulsion Systems to Design the Next Generation of Field Emission Electric Propulsion Thrusters

Authors: David Krejci, Tony Schönherr, Quirin Koch, Valentin Hugonnaud, Lou Grimaud, Alexander Reissner, Bernhard Seifert

Abstract:

In 2018 the NANO thruster became the first Field Emission Electric Propulsion (FEEP) system ever to be verified in space, in an In-Orbit Demonstration mission conducted together with Fotec. Since then, 160 additional ENPULSION NANO propulsion systems have been deployed in orbit on 73 different spacecraft across multiple customers and missions. These missions included a variety of satellite bus sizes, ranging from 3U CubeSats to >100 kg buses, and different orbits in Low Earth Orbit and Geostationary Earth Orbit, providing an abundance of on-orbit data for statistical analysis. This large-scale industrialization and flight heritage allows for a holistic way of gathering data from the testing, integration, and operational phases, deriving lessons learnt over a variety of mission types, operator approaches, use cases, and environments. Based on these lessons learnt, a new generation of propulsion systems is being developed, addressing key findings from the large NANO heritage and adding new capabilities, including increased resilience, thrust vector steering, and increased power and thrust level. Some of these successor products have already been validated in orbit, including the MICRO R3 and the NANO AR3. While the MICRO R3 features increased power and thrust level, the NANO AR3 is a successor of the heritage NANO thruster with added thrust vectoring capability. Five NANO AR3 units have been launched to date on two different spacecraft. This work presents flight telemetry data of ENPULSION NANO systems and on-orbit statistical data of the ENPULSION NANO, as well as lessons learnt during on-orbit operations, customer assembly, integration and testing support, and ground test campaigns conducted at different facilities. We discuss how lessons learnt and operational improvements have been transferred across independent missions and customers.
Building on these lessons and this exhaustive heritage, we present the design of the new generation of propulsion systems, which increases the power and thrust level of FEEP systems to address larger spacecraft buses.

Keywords: FEEP, field emission electric propulsion, electric propulsion, flight heritage

Procedia PDF Downloads 62
167 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection

Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono

Abstract:

Short-cycle-time Resin Transfer Molding (RTM) applications appear to be of great interest for the mass production of automotive or aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold where a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform, inducing evolutions of temperature, degree of cure, and viscosity that in turn affect flow and curing. In order to adjust the processing conditions to reduce the cycle time, it is therefore essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool was developed to help determine the optimum injection parameters. This tool allows fine tracking of the distribution of the resin and the evolution of its properties during reactive injections with on-line mixing. Tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of continuous fiber textiles. The simulation tool reproduces the unsaturated area at the flow front, generated by the tow/channel difference in permeability. Resin “storage” in the tows after saturation is also taken into account, as it may significantly affect the distribution and evolution of the temperature, degree of cure, and viscosity in the part during reactive injections. The aim of the current study is to understand and quantify, through experiments, the “storage” evolution in the tows in order to adjust and validate the numerical tool. The presented study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non-Crimp Fabric (NCF), a triaxial NCF, and a satin weave. Model fluids, dyes, and image analysis are used to study quantitatively the resin flow in the saturated area of the samples.
Textile characteristics affecting the resin “storage” evolution in the tows are also analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.

Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow

Procedia PDF Downloads 142
166 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: A Case Study

Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros

Abstract:

This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received being the optimization variable. The methodology is evaluated in a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, soiling, and degradation. Within the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems, compared with the values estimated in simulation. The soiling analysis is performed using the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical soiling values. The soiling rate is calculated from two years of energy-generation data collected at a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the alternative of assessing the temperature degradation of the PV modules is evaluated by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed with the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision making related to the maintenance of photovoltaic systems.
This is particularly relevant given the projected growth of solar photovoltaic installations in power systems, associated with the commitments made in the Paris Agreement to reduce CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
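The revenue-based trade-off described above can be sketched as follows, assuming a simple exponential soiling model; the soiling constant, daily energy, tariff, and cleaning cost are illustrative placeholders, not values measured at the Bogotá plant:

```python
import math

def soiling_factor(days_since_cleaning, k=0.004):
    """Fraction of energy retained, exponential soiling model (assumed k)."""
    return math.exp(-k * days_since_cleaning)

def net_revenue(cleaning_period_days, horizon_days=365,
                daily_energy_kwh=25.0, tariff_per_kwh=0.12,
                cleaning_cost=40.0):
    """Feed-in revenue over the horizon minus the cost of cleanings."""
    revenue = sum(daily_energy_kwh * tariff_per_kwh *
                  soiling_factor(d % cleaning_period_days)
                  for d in range(horizon_days))
    n_cleanings = horizon_days // cleaning_period_days
    return revenue - n_cleanings * cleaning_cost

# Pick the cleaning period (in weekly steps) that maximizes net revenue.
best_period = max(range(7, 180, 7), key=net_revenue)
print(best_period, round(net_revenue(best_period), 2))
```

Cleaning too often spends more on labor than the recovered energy is worth; cleaning too rarely lets soiling losses accumulate, which is why an interior optimum exists.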

Keywords: asset management, PV module, optimization, maintenance

Procedia PDF Downloads 14
165 Detection and Classification of Strabismus Using Convolutional Neural Networks and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG 16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into the VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. The model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced TPRs of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively,
with corresponding FPRs of 5.26%, 5.55%, and 0%. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
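The stage-2 geometric features described above (pupil-to-landmark distance and angle from the horizontal) can be sketched as below; the pixel coordinates are hypothetical, not clinical measurements:

```python
import math

def misalignment_features(pupil, eye_center):
    """Return (distance_px, angle_deg) of the pupil centre relative to an
    eye-centre landmark; the angle is measured from the horizontal axis."""
    dx = pupil[0] - eye_center[0]
    dy = pupil[1] - eye_center[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# A pupil displaced 4 px horizontally and 3 px vertically (hypothetical):
d, a = misalignment_features(pupil=(104, 53), eye_center=(100, 50))
print(round(d, 1), round(a, 1))  # → 5.0 36.9
```

The distance characterizes the degree of misalignment and the angle its direction, which is how the method distinguishes horizontal deviations (esotropia, exotropia) from vertical ones.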

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 59
164 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering, and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two pedagogical methods and report the results of a study comparing them. Math is the foundation for science, technology, and engineering. It is generally used in STEM to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned about, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between a cause and its effect. Scientists who use math include data scientists, biologists, and geologists. Without math, most technology would not be possible. Math is the basis of binary, and without programming, you would just have hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well.
Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab. Advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and the properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the topics that need to be covered is essential.
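As a concrete instance of the statistical correlation mentioned above, a short sketch of the Pearson correlation coefficient computed from first principles; the study-hours and exam-score data are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: hours studied vs. exam score for five students.
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 65, 71, 80]
print(round(pearson(hours, scores), 3))  # → 0.995
```

A value near 1 indicates a strong positive linear relationship, the kind of general conclusion about data that the abstract describes math providing across the STEM disciplines.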

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 51
163 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger/heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, the two brackets can impact each other under off-road conditions, causing a high shock input to the computer parts. This added failure mode requires validating the existing mount design for the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and actual frequency responses were observed and recorded. Results clearly revealed that at the resonance frequency, the brackets were colliding and potentially damaging computer parts. To solve this issue, spring mounts of different stiffness were modeled in ANSYS software, and the resonant frequency was determined.
Increasing the stiffness of the system moved the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and then experimentally validated.
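The design principle above, that stiffer mounts shift the resonant frequency upward, follows from the single-degree-of-freedom relation f_n = (1/2π)√(k/m). The stiffness and mass values below are illustrative assumptions, not the actual mount parameters:

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """f_n = sqrt(k / m) / (2*pi) for a 1-DOF spring-mass system."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

mass = 12.0  # assumed computer-plus-bracket mass, kg
for k in (2.0e4, 8.0e4):  # assumed soft vs. stiff mount stiffness, N/m
    print(k, round(natural_frequency_hz(k, mass), 1))
# Quadrupling the stiffness doubles the natural frequency: 6.5 Hz -> 13.0 Hz
```

In practice the mount is chosen so that f_n lands outside the engine's excitation band, which is the iteration loop the abstract describes in ANSYS.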

Keywords: experimental modal analysis, FEM Modal Analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 303
162 LWD Acquisition of Caliper and Drilling Mechanics in a Geothermal Well: A Case Study in the Sorik Marapi Field, Indonesia

Authors: Vinda B. Manurung, Laila Warkhaida, David Hutabarat, Sentanu Wisnuwardhana, Christovik Simatupang, Dhani Sanjaya, Ashadi, Redha B. Putra, Kiki Yustendi

Abstract:

The geothermal drilling environment presents many obstacles that have limited the use of directional drilling and logging-while-drilling (LWD) technologies, such as borehole washout, mud losses, severe vibration, and high temperature. The case study presented in this paper demonstrates a practice to enhance data logging in geothermal drilling by deploying advanced telemetry and LWD technologies, aiming at continuous improvement in geothermal drilling operations. The case study covers the 12.25-in. hole section of well XX-05 in Pad XX of the Sorik Marapi Geothermal Field. The LWD string consisted of electromagnetic (EM) telemetry, pressure while drilling (PWD), vibration (DDSr), and acoustic caliper (ACAL) sensors. Through this tool configuration, the operator acquired drilling mechanics and caliper logs in real-time and recorded mode, enabling effective monitoring of wellbore stability. Throughout the real-time acquisition, EM-PPM telemetry provided a three-times-faster data rate to the surface unit. With the integration of caliper data and drilling mechanics data (vibration and equivalent circulating density, ECD), borehole conditions were more visible to the directional driller, allowing better control of drilling parameters to minimize vibration and achieve optimum hole cleaning in washed-out or tight formation sequences. After reaching well TD, the recorded data from the caliper sensor indicated an average of 8.6% washout for the entire 12.25-in. interval. Washout intervals were compared with loss occurrences, showing the caliper's potential as an indirect indicator of fractured intervals and validating the fault trend prognosis. This LWD case study has added value to geothermal borehole characterization for both drilling operations and the subsurface. Identified challenges in running LWD in this geothermal environment, such as the effect of tool eccentricity and the impact of vibration, need to be addressed for future improvements.
A perusal of both real-time and recorded drilling mechanics and caliper data has opened various possibilities for maximizing sensor usage in future wells.
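The 8.6% average washout quoted above is, in essence, the mean enlargement of the measured hole diameter relative to the 12.25-in. bit size over the logged interval. A minimal sketch of that calculation, with invented caliper readings rather than the well's actual log:

```python
BIT_DIAMETER_IN = 12.25  # bit size of the hole section

def average_washout_pct(caliper_readings_in):
    """Mean hole enlargement relative to bit size, in percent."""
    return sum((d - BIT_DIAMETER_IN) / BIT_DIAMETER_IN * 100.0
               for d in caliper_readings_in) / len(caliper_readings_in)

readings = [12.3, 13.9, 12.25, 14.1, 12.6]  # invented caliper values, in.
print(round(average_washout_pct(readings), 1))  # → 6.4
```

Intervals where the enlargement spikes well above the average are the candidates the study compares against loss occurrences as possible fractured zones.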

Keywords: geothermal drilling, geothermal formation, geothermal technologies, logging-while-drilling, vibration, caliper, case study

Procedia PDF Downloads 98