Search results for: Andre Di Carlo
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 526

76 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit aims to deploy a taped tether that can be activated at the spacecraft's end of life to perform autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. In addition, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of length, width, and thickness of the tether will be evaluated for various scenarios (i.e., different initial orbital parameters). The results will be compared to other end-of-life disposal methods with similar allocated resources. The performance of a more innovative configuration, in which the tape is coated with a low-work-function thermionic (LWT) material so that no active cathode component is required, will also be briefly discussed. The results show that the electrodynamic tether option can be a competitive and high-performing solution for satellite disposal compared to other deorbit technologies.
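As a rough illustration of how such a parametrization can work, the sketch below integrates a crude electrodynamic drag model in Python. The aluminum tape properties, the fixed average geomagnetic field, and the lumped efficiency factor are illustrative assumptions, not E.T.PACK design data.

```python
import numpy as np

# Minimal sketch: order-of-magnitude deorbit time for an electrodynamic tether.
# All values are illustrative assumptions, not E.T.PACK design data.
MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6          # Earth radius, m
B = 3.0e-5             # representative geomagnetic field at LEO, T
SIGMA_AL = 3.5e7       # aluminum conductivity, S/m

def deorbit_time(m_sc, L, w, t, h0=800e3, hf=200e3, eta=0.1):
    """Crude decay integration: the motional EMF (v*B*L) drives a current
    limited by the tape resistance; eta lumps duty-cycle and plasma-contact
    losses. Returns the time in days to decay from h0 to hf."""
    R_tether = L / (SIGMA_AL * w * t)          # ohmic resistance of the tape
    a = R_E + h0
    dt = 3600.0                                 # 1 h time step
    elapsed = 0.0
    while a > R_E + hf:
        v = np.sqrt(MU / a)                     # circular orbital speed
        I = eta * v * B * L / R_tether          # average driven current
        F = I * L * B                           # Lorentz drag force
        # da/dt from the energy drain of a circular orbit (P = F*v)
        a -= 2.0 * F * a**2 * v / (m_sc * MU) * dt
        elapsed += dt
    return elapsed / 86400.0

# Example: 500 kg spacecraft, 2 km x 25 mm x 50 um aluminum tape.
print(f"deorbit time ~ {deorbit_time(500.0, 2000.0, 0.025, 50e-6):.0f} days")
```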

Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers

Procedia PDF Downloads 156
75 Prophylactic Replacement of Voice Prosthesis: A Study to Predict Prosthesis Lifetime

Authors: Anne Heirman, Vincent van der Noort, Rob van Son, Marije Petersen, Lisette van der Molen, Gyorgy Halmos, Richard Dirven, Michiel van den Brekel

Abstract:

Objective: Voice prosthesis leakage significantly impacts laryngectomized patients' quality of life, causing insecurity and frequent unplanned hospital visits and costs. In this study, the concept of prophylactic voice prosthesis replacement was explored to prevent leakages. Study Design: A retrospective cohort study. Setting: Tertiary hospital. Methods: Device lifetimes and voice prosthesis replacements of a retrospective cohort, including all patients who underwent laryngectomy between 2000 and 2012 at the Netherlands Cancer Institute, were used to calculate the number of voice prostheses needed per patient per year when preventing 70% of the leakages by prophylactic replacement. Various strategies for the timing of prophylactic replacement were considered: adaptive strategies based on the individual patient's history of replacement, and fixed strategies based on the results of patients with similar voice prosthesis or treatment characteristics. Results: Patients used a median of 3.4 voice prostheses per year (range 0.1-48.1). We found a high inter- and intra-patient variability in device lifetime. When applying prophylactic replacement, this would become a median of 9.4 voice prostheses per year, which means replacement every 38 days, implying more than six additional voice prostheses per patient per year. The individual adaptive model showed that preventing 70% of the leakages was impossible for most patients; only a median of 25% could be prevented. Monte Carlo simulations showed that prophylactic replacement is not feasible due to the high coefficient of variation (standard deviation/mean) in device lifetime. Conclusion: Based on our simulations, prophylactic replacement of voice prostheses is not feasible due to high inter- and intra-patient variation in device lifetime.
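The feasibility argument can be illustrated with a small Monte Carlo sketch in Python: if device lifetimes follow a skewed distribution with a high coefficient of variation, the replacement interval that pre-empts 70% of failures becomes impractically short. The lognormal parameters below are assumptions for illustration, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Device lifetimes drawn from a skewed distribution with a high coefficient
# of variation (CV = SD/mean); prophylactic replacement at interval T only
# prevents a leakage if T falls before the device fails.
mean_life, cv = 107.0, 1.0          # mean lifetime in days, CV of 1.0
sigma = np.sqrt(np.log(1 + cv**2))  # lognormal parameters matching mean/CV
mu = np.log(mean_life) - sigma**2 / 2
lifetimes = rng.lognormal(mu, sigma, 100_000)

# Interval needed to pre-empt 70% of failures: the 30th percentile of the
# lifetime distribution.
T = np.quantile(lifetimes, 0.30)
print(f"replacement interval for 70% prevention: {T:.0f} days")
print(f"prostheses per year: {365.25 / T:.1f}")
```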

Keywords: voice prosthesis, voice rehabilitation, total laryngectomy, prosthetic leakage, device lifetime

Procedia PDF Downloads 116
74 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating possible unwanted defect formation prior to expensive experimental iterative trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Errors from modeling, and statistical errors arising from the fitting, will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages. In our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. As far as the authors are aware, this represents the first stochastic finite element implementation in composite process modeling.
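A minimal sketch of the hierarchical idea is given below, assuming a toy one-parameter material model in place of the paper's hyperelastic polynomial: each experiment draws its own parameter from a population distribution, and the hyperparameters are inferred with a hand-rolled random-walk Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hierarchy: each experiment e has its own stiffness theta_e drawn from
# a population N(mu, tau^2); observations are noisy measurements of theta_e.
# We infer (mu, log_tau) by random-walk Metropolis. This is a stand-in for
# the paper's polynomial coefficients, not its actual model.
true_mu, true_tau, noise = 10.0, 2.0, 0.5
theta = rng.normal(true_mu, true_tau, 8)                  # 8 experiments
data = [rng.normal(t, noise, 20) for t in theta]          # 20 obs each

def log_post(mu, log_tau):
    tau = np.exp(log_tau)
    lp = 0.0
    for y in data:
        # Marginal likelihood with theta_e integrated out analytically
        # (normal-normal conjugacy): ybar_e ~ N(mu, tau^2 + noise^2/n).
        var = tau**2 + noise**2 / len(y)
        lp += -0.5 * ((y.mean() - mu) ** 2 / var + np.log(var))
    return lp

samples, x = [], np.array([8.0, 0.0])                     # init (mu, log_tau)
lp = log_post(*x)
for _ in range(20_000):
    prop = x + rng.normal(0, 0.15, 2)                     # random-walk step
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:              # Metropolis accept
        x, lp = prop, lp_prop
    samples.append(x.copy())

post = np.array(samples[5000:])                           # drop burn-in
print("posterior mu  :", post[:, 0].mean().round(2))
print("posterior tau :", np.exp(post[:, 1]).mean().round(2))
```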

Keywords: data-driven models, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 132
73 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory

Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang

Abstract:

Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, it usually encounters several constraints, including cost, workers' willingness, and interference with work practices, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessments for welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. The results and the measured environmental conditions were applied to the near-field/far-field (NF/FF) model for predicting long-term fume exposures via Monte Carlo simulation. Then, the predicted long-term concentrations were used to determine the prior distribution in Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess the long-term exposure and serve as a basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metal contents of welding fume. The resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn, and Pb exposures were found highly likely to exceed the action level, indicating that preventive measures should be taken to reduce welders' exposures immediately. Though the resultant posterior distribution can only be regarded as the best solution based on the currently available predicting and monitoring data, the proposed integrated approach can be regarded as a possible solution for conducting long-term exposure assessments in the field.
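For illustration, the steady-state two-zone NF/FF model can be propagated through a Monte Carlo loop in a few lines of Python; the generation-rate and ventilation ranges below are assumed placeholders, not the shipyard's measured values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Steady-state two-zone (near-field/far-field) model:
#   C_FF = G / Q            far-field concentration
#   C_NF = C_FF + G / beta  near-field (breathing zone) concentration
# G: fume generation rate, Q: room ventilation rate, beta: interzonal
# air exchange rate. Parameter ranges are illustrative assumptions.
n = 100_000
G = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)   # mg/min, lab-derived
Q = rng.uniform(20.0, 60.0, size=n)                      # m^3/min, ventilation
beta = rng.uniform(5.0, 15.0, size=n)                    # m^3/min, interzonal

C_FF = G / Q
C_NF = C_FF + G / beta                                   # welder's breathing zone

# Percentiles of the simulated long-term exposure distribution would feed
# the prior in the Bayesian decision analysis step.
print("NF exposure, mg/m^3: "
      f"median={np.median(C_NF):.3f}, 95th={np.percentile(C_NF, 95):.3f}")
```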

Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume

Procedia PDF Downloads 125
72 The Impact of Agricultural Product Export on Income and Employment in Thai Economy

Authors: Anucha Wittayakorn-Puripunpinyoo

Abstract:

The research objectives were 1) to study the situation and trend of agricultural product export of Thailand, 2) to study the impact of agricultural product export on income in the Thai economy, 3) to study the impact of agricultural product export on employment in the Thai economy, and 4) to derive recommendations for the agricultural product export policy of Thailand. In this research, secondary data were collected as yearly time series from 1990 to 2016, accounting for 27 years. Data were collected from the Bank of Thailand database. Primary data were collected from the stakeholders of the agricultural product export policy of Thailand. Data analysis applied descriptive statistics such as the arithmetic mean and standard deviation. Agricultural product export was forecast by applying the Monte Carlo simulation technique as well as time-trend analysis. In addition, the impact of agricultural product export on income and employment was estimated by applying an econometric model whose parameters were estimated with the ordinary least squares technique. The research results revealed that 1) the agricultural product export value of Thailand from 1990 to 2016 averaged 338,959.5 million Thai baht, with a yearly growth rate of 4.984 percent; moreover, the forecast agricultural product export value of Thailand increases while its growth rate declines; 2) agricultural product export has a positive impact on income in the Thai economy: an increase in agricultural product export of Thailand by 1 percent would increase income by 0.0051 percent; 3) agricultural product export has a positive impact on employment in the Thai economy: an increase in agricultural product export of Thailand by 1 percent would increase employment by 0.079 percent; and 4) in the future, agricultural product export policy should focus on finished or semi-finished agricultural products instead of raw materials, applying technology and innovation to add value to agricultural product exports. Public agricultural product export policy should support exporters in the private sector in order to encourage them as agricultural exporters in Thailand.
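Elasticity figures of this kind are typically read off a log-log OLS regression, as in the hedged sketch below; the export and income series here are synthetic stand-ins for the Bank of Thailand data, constructed so that the true elasticity is about 0.005.

```python
import numpy as np

rng = np.random.default_rng(3)

# Regress log(income) on log(agricultural exports): the slope is the percent
# change in income per 1% change in exports. Synthetic 1990-2016 series.
years = np.arange(1990, 2017)
exports = 150_000 * np.exp(0.05 * (years - 1990)) * rng.lognormal(0, 0.05, years.size)
income = 3_000_000 * exports**0.005 * rng.lognormal(0, 0.002, years.size)

X = np.column_stack([np.ones(years.size), np.log(exports)])  # intercept + log X
beta, *_ = np.linalg.lstsq(X, np.log(income), rcond=None)    # OLS fit
print(f"estimated elasticity: {beta[1]:.4f}")
# A coefficient of ~0.005 reads as: a 1% rise in exports is associated with
# a ~0.005% rise in income, matching the order of magnitude reported above.
```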

Keywords: agricultural product export, income, employment, Thai economy

Procedia PDF Downloads 291
71 Examining the Drivers of Engagement in Social Media Brand Communities

Authors: Rania S. Hussein

Abstract:

This research mainly focuses on examining engagement in social media brand communities. Engagement in social media has become a main focus in the literature, affirming that the role of social media in our daily lives is growing (Akman and Mishra, 2017; Prado-Gascó et al., 2017). Social media has also become a key medium for brand communication and brand relationship building (Frimpong and McLean, 2018; Dimitriu and Guesalaga, 2017). Engagement on social media has become a main focus of many researchers who tried to understand this concept further and draw a link between engagement and various social media activities (Cvijikj and Michahelles, 2013; Andre, 2015; Wang et al., 2015). According to Felix et al. (2017), the internet and social media have provided better digital resources to improve brand loyalty and customer interactions, thus leading to social media engagement within brand communities. The aim of this research is to highlight the importance of social media and why it is important to maintain engagement within it. While the term 'engagement' is widely used in scholarly literature, there is no common consensus about what the term exactly entails, according to Kidd (2011). On one hand, it was seen as something that includes factors such as participation, activation, empowerment, devotion, trust, and productivity (Zhang and Benyoucef, 2016). Other scholars held different viewpoints. For example, Lim et al. (2015) chose to break down engagement into three types: operational engagement, emotional engagement, and relational engagement. Chandler and Lusch (2015) further studied engagement as a means to measure commitment to a brand. Fernandes and Remelhe (2016) had a more technical view, measuring engagement through comments, following, subscribing, sharing, enjoying, writing, etc., in the social media context. Customer engagement has become a research focus for understanding how consumer relationships are developed, retained, and improved within a digital context. Based on previous literature, it is evident that many customer-engagement-related studies are limited to the interaction between firms and consumers on social media. There is a clear gap in the literature regarding consumer-to-consumer interaction and user-generated content and its significance. While some researchers, such as Alversia et al. (2016), touched upon the importance of customer-based engagement, a gap still remains: there is no consistent and well-tested method for defining the factors that affect consumer interaction. Moreover, few scholarly research papers (e.g., Case, 2019; Riley, 2020; Habibi, 2014) have been provided to assist businesses in understanding their customers' interaction habits as well as the best ways to develop customer loyalty. Additionally, the majority of research on brand pages has concentrated on the drivers of consumer engagement, with just a few studies (e.g., Lamberton, 2016; Poorrezaei, 2016; Jayasingh, 2019) looking into the implications. This study focuses on understanding the concept of engagement and its importance, specifically engagement within social media brand communities. It examines drivers as well as consequences of engagement, including brand knowledge, brand trust, entertainment, and brand page interactivity. Brand engagement is also expected to affect brand loyalty and word of mouth.

Keywords: engagement, social media, brand communities, drivers

Procedia PDF Downloads 144
70 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells

Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani

Abstract:

Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through these vessels, during angiogenesis blood flow plays an important role in regulating the angiogenesis process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e., intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular or molecular scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor. In this stage, the main regulator of endothelial cell behavior is the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries. In this stage, flow-induced signals regulate endothelial cell behavior. At the cellular scale, growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, diffusion of tumor angiogenic factors in the extracellular matrix, formation of closed loops (anastomosis), and shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension. The results are validated against experimental data. The results show that, without blood flow, the capillaries are not able to maintain their integrity.
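A minimal sketch of a cellular Potts Monte Carlo step is shown below, assuming a single cell, an adhesion term, and a volume constraint only; the chemotaxis and flow-induced signaling of the full multi-scale model are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal cellular Potts model: cells are domains of equal spin on a lattice;
# a Metropolis step copies a neighbor's spin and is accepted with the
# Boltzmann rule. Hamiltonian terms: adhesion (J per mismatched bond) and a
# volume constraint LAM*(V - V_TARGET)^2 for the cell (spin 1).
N, J, LAM, V_TARGET, TEMP = 40, 2.0, 1.0, 40.0, 4.0
grid = np.zeros((N, N), dtype=int)
grid[17:23, 17:23] = 1                       # one endothelial cell, spin 1
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

def local_energy(g, i, j):
    """Adhesion energy of the four bonds incident to site (i, j)."""
    s, e = g[i, j], 0.0
    for di, dj in moves:
        if g[(i + di) % N, (j + dj) % N] != s:
            e += J
    return e

for _ in range(60_000):                      # Monte Carlo copy attempts
    i, j = rng.integers(N, size=2)
    di, dj = moves[rng.integers(4)]
    si, sj = (i + di) % N, (j + dj) % N
    if grid[i, j] == grid[si, sj]:
        continue
    old, new = grid[i, j], grid[si, sj]
    vol = (grid == 1).sum()
    dvol = (1 if new == 1 else 0) - (1 if old == 1 else 0)
    dE = -local_energy(grid, i, j)
    grid[i, j] = new                         # tentative copy
    dE += local_energy(grid, i, j)
    dE += LAM * ((vol + dvol - V_TARGET) ** 2 - (vol - V_TARGET) ** 2)
    if dE > 0 and rng.uniform() > np.exp(-dE / TEMP):
        grid[i, j] = old                     # reject: restore old spin

print("cell volume after relaxation:", (grid == 1).sum())
```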

Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade

Procedia PDF Downloads 409
69 Inhibition of Influenza Replication through the Restrictive Factors Modulation by CCR5 and CXCR4 Receptor Ligands

Authors: Thauane Silva, Gabrielle do Vale, Andre Ferreira, Marilda Siqueira, Thiago Moreno L. Souza, Milene D. Miranda

Abstract:

The exposure of A(H1N1)pdm09-infected epithelial cells (HeLa) to HIV-1 viral particles, or to its gp120, enhanced the content of interferon-induced transmembrane protein 3 (IFITM3), a viral restriction factor (RF), resulting in a decrease in influenza replication. The gp120 binds to CCR5 (R5) or CXCR4 (X4) cell receptors during HIV-1 infection. It is therefore possible that the endogenous ligands of these receptors also modulate the expression of IFITM3 and other cellular factors that restrict influenza virus replication. Thus, the aim of this study is to analyze the role of the cellular receptors R5 and X4 in modulating RFs in order to inhibit the replication of the influenza virus. A549 cells were treated with 2x the effective dose (ED50) of the endogenous R5 or X4 receptor agonists CCL3 (20 ng/ml), CCL4 (10 ng/ml), CCL5 (10 ng/ml), and CXCL12 (100 ng/ml), or the exogenous agonists gp120 Bal-R5, gp120 IIIB-X4, and their mutants (5 µg/ml). Interferon α (10 ng/ml) and oseltamivir (60 nM) were used as controls. At 24 h post agonist exposure, the cells were infected with influenza virus A(H3N2) at a multiplicity of infection (MOI) of 2 for 1 h. Then, 24 h post infection, the supernatant was harvested and the viral titre was evaluated by qRT-PCR. To evaluate IFITM3 and SAM and HD domain containing deoxynucleoside triphosphate triphosphohydrolase 1 (SAMHD1) protein levels, A549 cells were exposed to agonists for 24 h, and the monolayer was lysed with Laemmli buffer for the western blot (WB) assay or fixed for the indirect immunofluorescence (IFI) assay. In addition, we analyzed the modulation of other RFs in A549 cells at 24 h post agonist exposure by a customized RT² Profiler Polymerase Chain Reaction Array. We also performed a functional assay in which A549 cells, SAMHD1-knocked-down by small interfering RNA (siRNA), were infected with A(H3N2). In addition, the cells were treated with guanosine to assess the regulatory role of dNTPs by SAMHD1. We found that R5 and X4 agonists inhibited influenza replication by 54 ± 9%. We observed a four-fold increase in SAMHD1 transcripts in the RF mRNA quantification panel. At 24 h post agonist exposure, we did not observe an increase in IFITM3 protein levels by WB or IFI assays, but we observed an up to three-fold upregulation in the protein content of SAMHD1 in A549 cells exposed to agonists. Besides this, influenza replication was enhanced by 20% in cell cultures in which SAMHD1 was knocked down. Guanosine treatment of cells exposed to R5 ligands further inhibited influenza virus replication, suggesting that the inhibitory mechanism may involve the activation of the SAMHD1 deoxynucleotide triphosphohydrolase activity. Thus, our data show for the first time a direct relationship between SAMHD1 and the inhibition of influenza replication, and provide perspectives for new studies on signaling modulation through cellular receptors to induce proteins of great importance in the control of infections relevant to public health.

Keywords: chemokine receptors, gp120, influenza, virus restriction factors

Procedia PDF Downloads 118
68 Towards an Eastern Philosophy of Religion: on the Contradictory Identity of Philosophy and Religion

Authors: Carlo Cogliati

Abstract:

The study of the relationship of philosophical reason with the religious domain has been very much a concern for many of the Western philosophical and theological traditions. In this essay, I will suggest a proposal for an Eastern philosophy of religion based on Nishida's contradictory identity of the two: philosophy soku hi (is, and yet is not) religion. This will pose a challenge to the traditional Western contents and methods of the discipline. This paper aims to serve three purposes. First, I will critically assess Charlesworth's typology of the relation between philosophy and religion in the West: philosophy as/for/against/about/after religion. I will also engage Harrison's call for a global philosophy of religion(s) and argue that, although it expands the scope and the range of the questions to address, it is still Western in its method. Second, I will present Nishida's logic of absolutely contradictory self-identity as the instrument to transcend the dichotomous pair of identity and contradiction: 'A is A' and 'A is not A'. I will then explain how this 'concrete' logic of the East, as opposed to the 'formal' logic of the West, best exhibits the bilateral dynamic relation between philosophy and religion. Even as Nishida argues for the non-separability of the two, he is also aware of and committed to their mutual non-reducibility. Finally, I will outline the resulting new relation between God and creatures. Nishida, in his philosophy soku hi religion, replaces the traditional Western dualistic concept of God with the Eastern non-dualistic understanding of God as "neither transcendent nor immanent, and at the same time both transcendent and immanent." God is therefore a self-identity of contradiction, nowhere and yet everywhere present in the world of creatures. God as absolute being is also absolute nothingness: the world of creatures is the expression of God's absolute self-negation. The overarching goal of this essay is to offer an alternative to traditional Western approaches to philosophy of religion based on Nishida's logic of absolutely contradictory self-identity, as an example of philosophical and religious (counter)influence. The resulting relationship between philosophy and religion calls for a revision of traditional concepts and methods. The outcome is not to reformulate the Eastern predilection not to sharply distinguish philosophical thought from religious enlightenment, but rather to bring together philosophy and religion in the place of identity and difference.

Keywords: basho, Nishida Kitaro, shukyotetsugaku, soku hi, zettai mujunteki jikodoitsu no ronri

Procedia PDF Downloads 174
67 The Influence of the State on the Internal Governance of Universities: A Comparative Study of Quebec (Canada) and Western Systems

Authors: Alexandre Beaupré-Lavallée, Pier-André Bouchard St-Amant, Nathalie Beaulac

Abstract:

The question of the internal governance of universities is a political and scientific debate in the province of Quebec (Canada). Governments have called or set up inquiries on the subject on three separate occasions since the complete overhaul of the educational system in the 1960s: the Parent Commission (1967), the Angers Commission (1979), and the Summit on Higher Education (2013). All three produced reports that highlight the constant tug-of-war for authority and legitimacy within universities. Past and current research covering Quebec universities has studied several aspects of internal governance: the structure as a whole or only some parts of it, the importance of certain key aspects such as collegiality or strategic planning, or of stakeholders, such as students or administrators. External governance has also been studied, though, as with internal governance, research so far has only covered well-delineated topics like financing policies or the overall impact of wider societal changes such as New Public Management. The latter, NPM, is often brought up as a factor that influenced overall State policies like "steering-at-a-distance" or internal shifts towards "managerialism". Yet, to the authors' knowledge, there is no study that specifically maps how the Quebec State formally influences internal governance. In addition, most studies of the Quebec university system are not comparative in nature. This paper presents a portion of the results produced by a 2022-2023 study that aims at filling these last two gaps in knowledge. Building on existing governmental, institutional, and scientific papers, we documented the legal and regulatory framework of the Quebec university system and of twenty-one other university systems in North America and Europe (2 in Canada, 2 in the USA, 16 in Europe, with the addition of the European Union as a distinct case). This allowed us to map the presence (or absence) of mandatory structures of governance enforced by States, as well as their composition. Then, using Clark's "triangle of coordination", we analyzed each system to assess the relative influences of the market, the State, and the collegium upon the governance model put in place. Finally, we compared Quebec with all 21 non-Quebec systems to place the province's policies in an international perspective. Preliminary findings are twofold. First, when all systems are placed on a continuum ranging from "no State interference in internal governance" to "State-run universities", Quebec comes in the middle of the pack, albeit with a slight lean towards institutional freedom. When it comes to overall governance (like Boards and Senates), the dual nature of the Quebec system, with its public universities and its co-opted yet historically private (or ecclesiastic) institutions, in fact mimics the duality of all university systems. Second, however, is the sheer abundance of legal and regulatory mandates from the State that, while not expressly addressing internal governance, seem to require de facto modifications of internal governance structures and dynamics to ensure institutional conformity with said mandates. This study is only a fraction of the research that is needed to better understand State-university interactions regarding governance. We hope it will set the stage for future studies.

Keywords: internal governance, legislation, Quebec, universities

Procedia PDF Downloads 68
66 Study of Secondary Particle Production in Carbon Ion Beam Radiotherapy

Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane

Abstract:

Ensuring accurate radiotherapy with carbon therapy requires precise monitoring of the radiation dose distribution within the patient's body. This monitoring is essential for targeted tumor treatment, minimizing harm to healthy tissues, and improving treatment effectiveness while lowering side effects. In our investigation, we employed a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo simulations. Initially, Geant4 simulations were utilized to extract the initial positions of secondary particles formed during interactions between carbon ions and water. These particles included protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we studied the relationship between the carbon ion beam and these secondary particles. Interaction Vertex Imaging (IVI) is valuable for monitoring dose distribution in carbon therapy. It provides details about the positions and amounts of secondary particles, particularly protons. The IVI method depends on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, referred to as the vertex. In our simulations of carbon ion therapy, we observed a strong correlation between some secondary particles and the range of the carbon ions. However, challenges arose due to the target's unique elongated geometry, which hindered the straightforward transmission of forward-generated protons. Consequently, the few protons that emerged mostly originated from points close to the target entrance. The trajectories of the fragments (protons) were approximated as straight lines, and a beam back-projection algorithm, using interaction positions recorded in Si detectors, was developed to reconstruct vertices. The analysis revealed a correlation between the reconstructed and actual positions.
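A hedged sketch of the straight-line back-projection step is given below: each proton is given two smeared hits in hypothetical Si planes, and the reconstructed vertex is the point on the fitted line closest to the beam axis. The geometry, emission angles, and hit resolution are assumptions, not the actual Geant4 setup.

```python
import numpy as np

rng = np.random.default_rng(5)

# Each secondary proton leaves hits in two Si detector planes; the line
# through the hits is extrapolated back to the beam axis (the z-axis), and
# the closest-approach point gives the reconstructed vertex depth.
z_planes = (300.0, 350.0)            # detector plane positions, mm (assumed)
sigma_hit = 0.2                      # hit position resolution, mm (assumed)

def reconstruct_vertex_z(true_z, n_protons=500):
    zs = []
    for _ in range(n_protons):
        # Proton emitted from (0, 0, true_z) at a forward angle.
        theta = rng.uniform(0.05, 0.3)          # polar angle, rad
        phi = rng.uniform(0, 2 * np.pi)
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        hits = []
        for zp in z_planes:
            t = (zp - true_z) / d[2]
            xy = d[:2] * t + rng.normal(0, sigma_hit, 2)   # smeared hit
            hits.append(np.array([xy[0], xy[1], zp]))
        # Back-project: point on the hit line closest to the z-axis.
        p, v = hits[0], hits[1] - hits[0]
        t_star = -(p[0] * v[0] + p[1] * v[1]) / (v[0] ** 2 + v[1] ** 2)
        zs.append((p + t_star * v)[2])
    return np.median(zs)

print(f"reconstructed vertex z: {reconstruct_vertex_z(120.0):.1f} mm (true 120.0)")
```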

Keywords: radiotherapy, carbon therapy, monitoring of radiation dose, interaction vertex imaging

Procedia PDF Downloads 59
65 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

Environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas. Regulation is defined from the known relevance and trends of the pollution type. This work discusses the assessment of contamination trends in a 700 km² oceanic area, accounting for contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of the collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to unequivocally conclude that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major type of polymer. The comparison of contamination levels was performed at a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
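The significance test can be sketched as a Monte Carlo resampling that combines sampling (heterogeneity) and analytical uncertainties and checks the 99% interval of the year-to-year difference; the station counts and values below are invented for illustration, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Area-mean contamination per year simulated by bootstrapping station values
# (sampling/heterogeneity uncertainty) and perturbing each by its analytical
# uncertainty; the difference is judged against a 99% confidence interval.
items_2018 = np.array([120, 85, 150, 95, 210, 130, 75, 160.0])  # items/kg
items_2019 = np.array([110, 140, 90, 180, 100, 125, 155, 95.0])
u_analytical = 0.15                      # 15% relative analytical uncertainty

def sim_mean(values, n=100_000):
    idx = rng.integers(0, values.size, (n, values.size))  # bootstrap stations
    pert = rng.normal(1.0, u_analytical, (n, values.size))
    return (values[idx] * pert).mean(axis=1)

diff = sim_mean(items_2019) - sim_mean(items_2018)
lo, hi = np.percentile(diff, [0.5, 99.5])                 # 99% interval
print(f"mean difference 99% CI: [{lo:.1f}, {hi:.1f}] items/kg")
print("significant change" if (lo > 0 or hi < 0) else "no significant change")
```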

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 75
64 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach

Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong

Abstract:

Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures, which can cause severe consequences including enormous economic losses as well as personnel casualties. It is therefore important to ensure the integrity and efficiency of corroding pipelines, considering the wall thickness, which plays an important role in the failure probability of a corroding pipeline. The wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance is used to determine the wall thickness distribution characteristics, such as the mean value, standard deviation, and distribution. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach rather than the deterministic approach is used to evaluate the failure probability. Moreover, the cost of pipe purchase is influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance will vary both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and the economy of corroding pipelines. In this paper, the corrosion burst limit-state function in Annex O of CSA Z662-7 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function are changed. Using the reliability approach, the corresponding variations in the burst failure probability are shown. On the other hand, changing the wall thickness tolerance leads to a change in the pipe purchase cost. Using the variation of the failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
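The tolerance-to-failure-probability link can be sketched in a few lines of Monte Carlo, assuming a simplified Barlow-type corroded burst capacity as a stand-in for the CSA Z662 Annex O limit-state function; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# The wall-thickness tolerance sets the spread of the as-delivered thickness
# distribution; the burst limit state g = p_burst - p_op is then sampled.
def burst_probability(tol, n=500_000):
    """tol: allowable wall-thickness tolerance (fraction of nominal)."""
    t_nom, D = 0.0119, 0.914                      # nominal wall & diameter, m
    t = rng.normal(t_nom, tol * t_nom / 3.0, n)   # tolerance taken as +/-3 sigma
    sigma_f = rng.normal(550e6, 20e6, n)          # flow stress, Pa
    frac = rng.uniform(0.28, 0.32, n)             # corrosion depth / thickness
    p_burst = (2 * t * sigma_f / D) * (1 - frac)  # degraded Barlow capacity
    return np.mean(p_burst <= 9.0e6)              # operating pressure 9 MPa

for tol in (0.05, 0.10, 0.15):
    print(f"tolerance +/-{tol:.0%}: Pf = {burst_probability(tol):.2e}")
```

Tighter tolerances shrink the lower tail of the thickness distribution, and the failure probability drops accordingly; plotting Pf and pipe cost against the tolerance is what allows the optimum in the paper to be located.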

Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach

Procedia PDF Downloads 381
63 SiO2-Ag+Chlorex vs SilverSulfaDiazine: An 'in vitro' and 'in vivo' Silver Challenge

Authors: Roberto Cassino, Valeria Dissette, Carlo Alberto Bignozzi, Daniele Pazzi

Abstract:

Background and Aims: The aim of this work was to investigate, both 'in vitro' and 'in vivo', whether the new SCX technology (SiO2-Ag+Chlorex) can easily defeat infections and whether it is really more effective than SSD (SilverSulfaDiazine). 'In vitro' methods: we tested 'in vitro' the effectiveness of both silver materials using a pool of 5 strains: Pseudomonas aeruginosa, Staphylococcus aureus, Escherichia coli, Enterococcus hirae, and Candida albicans. 100 µl of this pool were seeded on Petri dishes and kept in incubation for 24 hours at 37 °C. 'In vivo' methods: we enrolled patients with multiple infectious chronic wounds (according to the Cutting & Harding criteria for infection); after a qualitative evaluation of the wounds' bacterial population, taking a sample by plug, we included in the study 6 patients for a total of 10 wounds, infected by one or more of the microorganisms used for the 'in vitro' test. The protocol consisted of a treatment with a spray powder of SSD every 48 hours for 14 days; in case of worsening, we were to start a new treatment with a spray powder containing silicon dioxide, ionic silver, and chlorhexidine (SiO2-Ag+Chlorex) every 48 hours for 14 days. We evaluated the number of clinical signs of infection and the disappearance or not of the wound edge erythema. 'In vitro' results: SSD demonstrated a wide zone of inhibition within 24 hours, but after 5 days there were no more signs of inhibition; on the contrary, SCX had a good inhibition ring that lasted more than 5 days. 'In vivo' results: all wounds treated with SSD got worse; the signs of infection increased, and the wound edge erythema did not disappear. According to the protocol, we then treated all wounds with SCX, and they all improved within the period of observation, with complete disappearance of clinical signs of infection and no more wound edge erythema. Conclusions: the study demonstrated the effectiveness of SiO2-Ag+Chlorex, especially in terms of long-lasting antimicrobial action. We had the same results 'in vitro', so that there was a perfect correspondence between the laboratory outcomes and the clinical ones.

Keywords: chronic wounds, infections, ionic silver, SSD

Procedia PDF Downloads 312
62 Destructive and Nondestructive Characterization of Advanced High Strength Steels DP1000/1200

Authors: Carla M. Machado, André A. Silva, Armando Bastos, Telmo G. Santos, J. Pamies Teixeira

Abstract:

Advanced high-strength steels (AHSS) are increasingly being used in automotive components. The use of AHSS sheets plays an important role in reducing weight, as well as increasing the resistance to impact of vehicle components. However, the large-scale use of these sheets is made more difficult by limitations of the forming process. Such limitations are due to the elastically driven change of shape of a metal sheet during unloading and following forming, known as the springback effect. As the magnitude of the springback tends to increase with the strength of the material, it is among the most worrisome problems in the use of AHSS. The prediction of strain hardening, especially under non-proportional loading conditions, is very limited due to the lack of constitutive models and mainly due to very limited experimental tests. It is very clear from the literature that, in experimental terms, there is not much work evaluating deformation behavior under real conditions, which implies a very limited and scarce development of mathematical models for these conditions. The Bauschinger effect is also fundamental to the difference between the kinematic and isotropic hardening models used to predict springback in sheet metal forming. It is of major importance to deepen the phenomenological knowledge of the mechanical and microstructural behavior of the materials, in order to be able to reproduce their deformation behavior with high fidelity by means of computational simulation. For this, a multi-phenomenological analysis and characterization are necessary to understand the various aspects involved in plastic deformation, namely the stress-strain relations and also the variations of electrical conductivity and magnetic permeability associated with the metallurgical changes due to plastic deformation. Aiming at a complete mechanical-microstructural characterization, uniaxial tensile tests involving successive cycles of loading and unloading were performed, as well as biaxial tests such as the Erichsen test. Also, nondestructive evaluation was carried out, comprising eddy current tests to verify microstructural changes due to plastic deformation and ultrasonic tests to evaluate the local variations of thickness. The material parameters for the stable yield function and the monotonic strain hardening were obtained using uniaxial tension tests in different material directions and balanced biaxial tests. Both the decrease of the modulus of elasticity and the Bauschinger effect were determined through the load-unload tensile tests. By means of the eddy current tests, it was possible to verify changes in the magnetic permeability of the material according to the different plastically deformed areas. The ultrasonic tests were an important aid to quantify the local plastic extension. With these data, it is possible to parameterize the different models of kinematic hardening to better approximate the results obtained by simulation to the experimental results, which are fundamental for the springback prediction of stamped parts.

Keywords: advanced high strength steel, Bauschinger effect, sheet metal forming, springback

Procedia PDF Downloads 214
61 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of topmost importance for reliable, profitable, and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus on the efforts required to minimise trips and failed starts. The performance of these CTQs is measured with two metrics: MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify the top failure modes and the units needing more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips: 1. real-time machine operational parameters remotely available, capturing the signature of malfunctions including related boundary conditions; 2. a real-time alerting system based on analytics, available remotely; 3. remote access to trip logs and alarms from the control system to identify the cause of events; 4. continuous support to field engineers by remotely connecting them with subject matter experts; 5. live tracking of key CTQs; 6. benchmarking against the fleet; 7. breaking down the cause of failure to component level; 8. investigating the top contributors and identifying design and operational root causes; 9. implementing corrective and preventive actions; 10. assessing the effectiveness of implemented solutions using reliability growth models (see the sketch after this paragraph); 11. developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains this approach while providing successful case studies, in particular where 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques, significantly reducing the number of trips and improving MTBT.
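Step 10 is commonly handled with a Crow-AMSAA (NHPP power-law) reliability growth fit, sketched below under assumed trip times; the abstract does not name a specific model, so this is one plausible choice. A fitted beta below 1 indicates that the trip rate is falling and MTBT is growing.

```python
import numpy as np

# Crow-AMSAA (NHPP power-law) model: cumulative trips N(t) = lam * t**beta;
# beta < 1 means the trip intensity is falling (MTBT is growing).
# Trip times are illustrative, in operating days since the corrective action.
trip_times = np.array([12.0, 30, 41, 77, 98, 160, 250, 410, 600])
T = 730.0                                   # observation window, days

n = trip_times.size
beta = n / np.sum(np.log(T / trip_times))   # time-truncated MLE of beta
lam = n / T**beta                           # MLE of the scale parameter
mtbt_now = 1.0 / (lam * beta * T**(beta - 1.0))   # 1 / current trip intensity

print(f"beta = {beta:.2f}  ({'improving' if beta < 1 else 'deteriorating'})")
print(f"instantaneous MTBT at day {T:.0f}: {mtbt_now:.0f} days")
```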

Keywords: reliability, availability, sustainability, digital infrastructure, Weibull, effectiveness, automation, trips, failed starts

Procedia PDF Downloads 60
60 Analysis of the Evolution of Techniques and Review in Cleft Surgery

Authors: Tomaz Oliveira, Rui Medeiros, André Lacerda

Abstract:

Introduction: Cleft lip and/or palate are the most frequent congenital craniofacial anomalies, affecting mainly the middle third of the face and manifesting through functional and aesthetic changes. Bilateral cleft lip represents a reconstructive surgical challenge, not only for the labial component but also for the associated nasal deformation. Recently, the paradigm of the approach to this pathology has changed, placing the focus on muscle reconstruction and anatomical repositioning of the nasal cartilages in order to obtain the best aesthetic and functional results. The aim of this study is to carry out a systematic review of the surgical approach to bilateral cleft lip, retrospectively analyzing the case series of the Plastic Surgery Service at Hospital Santa Maria (Lisbon, Portugal) regarding this pathology, with a global assessment of the characteristics of the operated patients and a study of the different surgical approaches and their complications over the last 20 years. Methods: This is a retrospective, descriptive study of patients who underwent at least one reconstructive surgery for cleft lip and/or palate in the CPRE service of the HSM between January 1, 1997 and December 31, 2017. Data relating to 361 individuals were analyzed; after applying the exclusion criteria, these constituted a sample of 212 participants. The variables analyzed were the year of the first surgery, gender, age, type of orofacial cleft, surgical approach, and its complications. Results: There was a higher overall prevalence in males, with cleft lip and cleft lip and palate occurring in greater proportion in males and isolated cleft palate being more common in females. The most frequently recorded malformation was cleft lip and palate, complete in most cases. Regarding laterality, alterations with a unilateral labial component were the most commonly observed, with the left lip described as the most affected. The vast majority of patients underwent primary intervention up to 12 months of age. The surgical techniques used in the approach to this pathology showed an important chronological variation over the years. Discussion: Cleft lip and/or palate is a medical condition associated with high aesthetic and functional morbidity, which requires early treatment in order to optimize the long-term outcome. The existence of a nasolabial component and its surgical correction plays a central role in the treatment of this pathology. High rates of post-surgical complications and unconvincing aesthetic results have motivated an evolution of the surgical technique, increasingly evident in recent years, which today allows satisfactory aesthetic results to be achieved, even in bilateral cleft lip of high deformational complexity. The introduction of techniques that favor nasolabial reconstruction based on anatomical principles has been producing increasingly convincing results. The analyzed sample shows that most of the results obtained in this study are, in general, compatible with the results published in the literature. Conclusion: This work showed that small variations in the surgical technique can bring significant improvements in the functional and aesthetic results of the treatment of bilateral cleft lip.

Keywords: cleft lip, cleft palate, congenital abnormalities, craniofacial malformations

Procedia PDF Downloads 98
59 Welfare and Sustainability in Beef Cattle Production on Tropical Pasture

Authors: Andre Pastori D'Aurea, Lauriston Bertelli Feranades, Luis Eduardo Ferreira, Leandro Dias Pinto, Fabiana Ayumi Shiozaki

Abstract:

The aim of this study was to improve the production of beef cattle on tropical pasture without harming the environment. On tropical pastures, cattle's live weight gain is lower than in feedlots, and forage production is seasonal, changing from season to season. Thus, concerned with sustainable livestock production, the Premix Company has developed strategies to improve the production of beef cattle on tropical pasture and to ensure sustainability of welfare and production. There are two important principles in this production system: 1) increase individual gains with better supplementation, and 2) increase productivity per unit area with better forage quality, such as corn silage or other forms of forage conservation (used only in winter), and by adding natural additives to the diet. This production system was applied from June 2017 to May 2018 at the Research Center of the Premix Company, Patrocínio Paulista, São Paulo State, Brazil. The area used comprised 9 hectares of Brachiaria brizantha pasture. 36 Nellore steers were evaluated for one year. The initial weight was 253 kg. The parameters used were average daily gain and gain per area. This indicated the corrections to be made and helped design future fertilization. In this case, we fertilized the pasture with 30 kg of nitrogen per animal, divided into two applications. The diet was pasture plus protein-energy supplements (0.4% of live weight). The supplement was enriched with the natural additive Fator P® (Premix Company). Fator P® is an additive composed of amino acids (lysine, methionine, and tyrosine; 16400, 2980, and 3000 mg.kg-1, respectively), minerals, probiotics (Saccharomyces cerevisiae, 7 x 10E8 CFU.kg-1), and essential fatty acids (linoleic and oleic acids; 108.9 and 99 g.kg-1, respectively). Due to seasonal changes, in the winter we supplemented the diet by increasing the forage offered: 1% of live weight in corn silage and 0.4% of live weight in protein-energy supplements with the additive Fator P®. At the end of the period, productivity was calculated by summing the individual gains for the area used. The average daily gain of the animals was 693 grams per day, and 1,005 kg/hectare/year were produced. This production is about 8 times higher than the Brazilian national average for beef production. For this system to succeed, it is necessary to increase the gains per area, and therefore the stocking capacity per area must be increased. Pasture management is very important to the system's success because dietary decisions were based on the quantity and quality of the forage. We therefore recommend the use of animals in the growth phase, because the response to supplementation is greater in that phase and more animals can be allocated per area. This system's carbon footprint reduces emissions by 61.2 percent compared to the Brazilian average. This beef cattle production system can thus be efficient and environmentally friendly. Another point is that the cattle benefit from their natural environment without competing with or impacting human food production.

Keywords: cattle production, environment, pasture, sustainability

Procedia PDF Downloads 129
58 South African Multiple Deprivation-Concentration Index Quantiles Differentiated by Components of Success and Impediment to Tuberculosis Control Programme Using Mathematical Modelling in Rural O. R. Tambo District Health Facilities

Authors: Ntandazo Dlatu, Benjamin Longo-Mbenza, Andre Renzaho, Ruffin Appalata, Yolande Yvonne Valeria Matoumona Mavoungou, Mbenza Ben Longo, Kenneth Ekoru, Blaise Makoso, Gedeon Longo Longo

Abstract:

Background: The gap between the complexities related to the integration of TB/HIV control and evidence-based knowledge motivated the initiation of this study. The objective of this study was therefore to explore correlations between national TB management guidelines, multiple deprivation indexes, quantiles, components, and levels of the tuberculosis control programme using mathematical modelling in rural O.R. Tambo District health facilities, South Africa. Methods: The study used mixed secondary data analysis and cross-sectional analysis between 2009 and 2013 across O.R. Tambo District, Eastern Cape, South Africa, applying univariate/bivariate analysis, linear multiple regression models, and multivariate discriminant analysis. Health inequality indicators and components of impediment to the tuberculosis control programme were evaluated. Results: In total, 62,400 records of TB notifications were analyzed for the period 2009-2013. There was a significant but negative correlation between the number of TB defaulters among all TB cases and financial year expenditure (r = -0.894; P = 0.041), seropositive HIV status (r = -0.979; P = 0.004), and population density (r = -0.881; P = 0.048). Unsuccessful TB programme control was shown through correlations between the numbers of new PTB smear positives, TB defaulters among new smear positives, TB failures among all TB cases, the pulmonary tuberculosis case-finding index, and the deprivation-concentration-dispersion index. Successful TB programme control was shown through significant and negative associations between declining numbers of deaths from HIV-TB co-infection, TB deaths among all TB cases, and the SMIAD gradient/deprivation-concentration-dispersion index. The multivariate linear model was summarized by an unadjusted r of 96%, an adjusted R² of 95%, a standard error of the estimate of 0.110, an R² change of 0.959, and a significant variance change (P = 0.004), predicting TB defaulters among all TB cases with the equation y = 8.558 - 0.979 x (number of HIV seropositive). After adjusting for confounding factors (PTB case-finding index, TB defaulters among new smear positives, TB deaths among all TB cases, TB defaulters among all TB cases, and TB failures among all TB cases), HIV and TB deaths, as well as new PTB smear positives, were identified through discriminant analysis as the most important, significant, and independent indicators discriminating the most deprived deprivation quintile from the other deprivation quintiles 2-5. Conclusion: Eliminating poverty-related conditions such as overcrowding, lack of sanitation, and environments with the highest burden of HIV might end the TB threat in O.R. Tambo District, Eastern Cape, South Africa. Furthermore, an ongoing, adequately budgeted, comprehensive, holistic, and collaborative initiative towards the Sustainable Development Goals (SDGs) is necessary for the complete elimination of TB in the poor O.R. Tambo District.

Keywords: tuberculosis, HIV/AIDS, success, failure, control program, health inequalities, South Africa

Procedia PDF Downloads 151
57 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy

Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane

Abstract:

Achieving precise radiotherapy through carbon therapy necessitates the accurate monitoring of radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of carbon ions. However, challenges arose due to the unique elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (protons) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in Si detectors, was developed to reconstruct vertices. The analysis revealed a correlation between the reconstructed and actual positions.

Keywords: radiotherapy, carbon therapy, monitoring of secondary proton doses, interaction vertex imaging

Procedia PDF Downloads 63
56 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida for gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the given scan range of interest. We then compared the dose conversion coefficients for major fetal organs in abdomen-pelvis CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses by inputting CT technical parameters as well as the age of the fetus. We found that the esophagus received the least dose, whereas the kidneys received the greatest dose, in all fetuses in abdomen-pelvis scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose in emergency CT scans as well as in patient dose monitoring.
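The lookup-and-sum logic described above can be sketched as follows; the per-slice coefficient profile is an invented stand-in for the published phantom-derived library.

```python
import numpy as np

# For a given gestational age, each 1 cm axial slice position has a dose
# conversion coefficient (organ dose per CTDIvol, mGy/mGy); a helical scan
# is approximated by the sum over the slices inside the scan range.
slice_positions = np.arange(0, 60)                   # cm from phantom top
# Gaussian-shaped toy profile peaking where the fetal kidneys would lie
# (an assumed stand-in, not the actual tabulated coefficients).
kidney_coeff = 0.04 * np.exp(-0.5 * ((slice_positions - 42) / 6.0) ** 2)

def fetal_organ_dose(ctdi_vol, scan_start, scan_end, coeff):
    """Organ dose (mGy) for a helical scan covering [scan_start, scan_end) cm."""
    in_range = (slice_positions >= scan_start) & (slice_positions < scan_end)
    return ctdi_vol * coeff[in_range].sum()

# Example: abdomen-pelvis scan, CTDIvol of 10 mGy, covering 30-55 cm.
print(f"fetal kidney dose ~ {fetal_organ_dose(10.0, 30, 55, kidney_coeff):.1f} mGy")
```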

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 123
55 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to six-fold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic objective method to measure breast density could relieve the radiologist's workload by providing a first-aid opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissues. It is consequently hard to automatically quantify mammographic breast density. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from the São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, with the seed of the active contour placed on the pectoral muscle boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, reinforcing the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
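The paper's pipeline is implemented in Matlab®, but an equivalent hedged sketch in Python with scikit-image is given below (threshold, Hough line, seeded active contour); the parameter values are illustrative, and the mammogram array must be supplied by the caller.

```python
import numpy as np
from skimage import feature, filters, transform, segmentation

def segment_pectoral(img):
    """img: 2D float array, MLO mammogram with the pectoral muscle at top-left.
    Returns the refined muscle boundary as (row, col) coordinates."""
    # 1) Thresholding removes non-biological background (labels, air).
    mask = img > filters.threshold_otsu(img)
    # 2) A straight-line Hough transform on the edge map finds the dominant
    #    straight edge, taken as the initial pectoral muscle limit.
    edges = feature.canny(img * mask, sigma=3.0)
    h, angles, dists = transform.hough_line(edges)
    _, angle, dist = (v[0] for v in transform.hough_line_peaks(h, angles, dists))
    # 3) Seed an open active contour along the detected line (using the
    #    skimage convention rho = col*cos(angle) + row*sin(angle)) and let
    #    it relax onto the true muscle boundary.
    rows = np.linspace(0, img.shape[0] - 1, 200)
    cols = (dist - rows * np.sin(angle)) / np.cos(angle)
    init = np.column_stack([rows, np.clip(cols, 0, img.shape[1] - 1)])
    snake = segmentation.active_contour(img, init, alpha=0.01, beta=0.5,
                                        boundary_condition='fixed')
    return snake
```

Pixels on the muscle side of the returned boundary can then be masked out before any fibroglandular density quantification.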

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 337
54 Study of the Uncertainty Behaviour for the Specific Total Enthalpy of the Hypersonic Plasma Wind Tunnel Scirocco at Italian Aerospace Research Center

Authors: Adolfo Martucci, Iulian Mihai

Abstract:

By means of the expansion through a conical nozzle and the low pressure inside the Test Chamber, a large, stable hypersonic flow is established for a duration of up to 30 minutes. Downstream of the Test Chamber, the diffuser has the function of reducing the flow velocity to subsonic values, and as a consequence, the temperature increases again. In order to cool down the flow, a heat exchanger is placed at the end of the diffuser. The Vacuum System generates the vacuum conditions necessary for correct hypersonic flow generation, and the DeNOx system, which follows the Vacuum System, reduces the nitrogen oxide concentrations created inside the plasma flow to below the limits imposed by Italian law. This very large, powerful, and complex facility allows researchers and engineers to reproduce entire re-entry trajectories of space vehicles into the atmosphere. One of the most important parameters for a hypersonic flowfield representative of re-entry conditions is the specific total enthalpy. This is the whole energy content of the fluid, and it represents how severe the conditions around a spacecraft re-entering from a space mission, or, in our case, inside a hypersonic wind tunnel, can be. Very high enthalpy values (up to 45 MJ/kg) can be reached, which, together with the large allowable size of the models, offers great possibilities for on-ground experiments in the atmospheric re-entry field. The maximum nozzle exit section diameter is 1950 mm, where Mach numbers much higher than 1 can be reached. The specific total enthalpy is evaluated by means of a number of measurements, each of which contributes to its value and its uncertainty. The scope of the present paper is the evaluation of the sensitivity of the uncertainty of the specific total enthalpy with respect to all the parameters and measurements involved; the sensors that, if improved, would give the greatest advantage have thus been identified. Several simulations, carried out in Python with the METAS library and by means of Monte Carlo methods, are presented together with the results obtained and a discussion of them.
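A minimal sketch of such a Monte Carlo sensitivity analysis, written in plain NumPy rather than the METAS library; the measurement model (a simple arc power balance) and all numerical values are invented placeholders, since the facility's actual data reduction is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative measurement model (an assumption, not the facility's
# actual data reduction): arc power balance h0 = eta * P / mdot.
def h0(P, mdot, eta):
    return eta * P / mdot          # specific total enthalpy, J/kg

# Measured means and standard uncertainties (hypothetical values).
means = {"P": 60e6, "mdot": 2.0, "eta": 0.45}
stds  = {"P": 1.2e6, "mdot": 0.05, "eta": 0.02}

def mc(n=100_000, fixed=()):
    """Monte Carlo propagation; parameters in `fixed` are held at their
    mean to quantify each sensor's contribution to the uncertainty."""
    s = {k: (np.full(n, means[k]) if k in fixed
             else rng.normal(means[k], stds[k], n)) for k in means}
    return h0(s["P"], s["mdot"], s["eta"])

total = mc().std()
for k in means:
    # Fraction of the output variance removed if this input were exact:
    # the sensor whose improvement gives the greatest advantage.
    print(k, 1 - mc(fixed=(k,)).var() / total**2)
```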

Keywords: hypersonic, uncertainty, enthalpy, simulations

Procedia PDF Downloads 80
53 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing. For this purpose, an inverse technique is presented which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different, pre-generated crack scenarios and comparison with the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
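A simplified sketch of the identification step for the generalized Maxwell (Prony series) model: for brevity it fits the complex modulus directly with scipy, whereas the paper minimizes the difference between measured and FE-simulated FRFs; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def maxwell_modulus(omega, g_inf, g, tau):
    """Complex modulus of a generalized Maxwell (Prony series) model."""
    s = 1j * omega[:, None] * tau[None, :]
    return g_inf + (g[None, :] * s / (1 + s)).sum(axis=1)

def identify(omega, g_measured, n_terms=3):
    """Fit Prony parameters by minimizing the gap between the measured
    and modelled complex modulus (a stand-in for the FRF comparison)."""
    def residual(p):
        g_inf, g, tau = p[0], p[1:1 + n_terms], p[1 + n_terms:]
        diff = maxwell_modulus(omega, g, tau=tau, g_inf=g_inf) - g_measured
        return np.concatenate([diff.real, diff.imag])

    p0 = np.concatenate([[1.0], np.ones(n_terms),
                         np.logspace(-4, -2, n_terms)])
    return least_squares(residual, p0, bounds=(1e-9, np.inf)).x

# Synthetic "measurement" to exercise the routine
w = np.logspace(1, 5, 80)
true = maxwell_modulus(w, 2.0, np.array([1.5, 0.8, 0.3]),
                       np.array([1e-4, 1e-3, 1e-2]))
print(identify(w, true)[:4])
```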

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 194
52 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range, and can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source has prevented it from being widely used. Recently, a random phase object has been proposed as an analyzer; this method requires a much less demanding experimental setup. However, previous studies used either a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup requiring only a small modification of a commercial bench-top micro-CT (computed tomography) scanner: a piece of sandpaper introduced as the phase analyzer in front of the X-ray source. Such a setup, however, needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the fact that neural networks require large amounts of training data to produce high-quality reconstructions.
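A minimal sketch of one possible speckle-tracking step: per-window sub-pixel displacement between the reference (sandpaper-only) and sample images via phase correlation. The window size, step, and the scikit-image routine chosen here are assumptions, not the algorithm finally used in the project.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def speckle_shift_map(reference, sample, win=32, step=16):
    """Track local speckle displacements between a reference image
    (sandpaper only) and a sample image (sandpaper + object); each
    window's shift is proportional to the local refraction angle,
    whose integration yields the phase image."""
    rows = list(range(0, reference.shape[0] - win, step))
    cols = list(range(0, reference.shape[1] - win, step))
    shifts = np.zeros((len(rows), len(cols), 2))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            ref_w = reference[r:r + win, c:c + win]
            sam_w = sample[r:r + win, c:c + win]
            # Sub-pixel shift via upsampled phase correlation
            shift, _, _ = phase_cross_correlation(ref_w, sam_w,
                                                  upsample_factor=10)
            shifts[i, j] = shift
    return shifts
```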

Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast

Procedia PDF Downloads 243
51 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often the object of study due to their mechanical, electrical and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor until, finally, a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the help of methods of computational physics. The study consists of two parts. The first is the generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections; the various morphological methods themselves must be analysed here, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. The second is the study of charge transport in the composites by the kinetic Monte Carlo method, since there is a close connection between the structural and electric properties of composite and nanocomposite films; it was found that near the percolation threshold the paths of the tunnelling current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, in dependence on the technology of composite film preparation.
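As an illustration of the first part, the sketch below generates a 2D section with a soft-sphere model (overlapping discs) and tests ohmic connectivity across the film; it is a toy stand-in for the authors' generators and does not include the kinetic Monte Carlo transport itself.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

def soft_sphere_film(n_grid=256, radius=4, filling=0.5):
    """Soft-sphere model: discs are dropped at random and may overlap;
    deposition stops once the target filling factor is reached."""
    grid = np.zeros((n_grid, n_grid), bool)
    yy, xx = np.ogrid[:n_grid, :n_grid]
    while grid.mean() < filling:
        cy, cx = rng.integers(0, n_grid, 2)
        grid |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return grid

def percolates(grid):
    """Ohmic percolation check on a 2D section: does a single metal
    cluster connect the left and right edges of the film?"""
    labels, _ = ndimage.label(grid)
    return bool((set(labels[:, 0]) & set(labels[:, -1])) - {0})

# The 2D continuum disc threshold lies near a filling factor of ~0.68
for f in (0.3, 0.5, 0.7):
    print(f, percolates(soft_sphere_film(filling=f)))
```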

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 377
50 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to the high data rates demanded by users, and the constantly growing metro and access networks are focused on improving the maximum transmission distance of long-reach optical networks. One of the common ways to improve the maximum transmission distance of a long-reach optical network at the component level is to use broadband optical amplifiers. The erbium-doped fiber amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative noise figure values can be achieved; obtaining such results, however, requires high-power pump sources, and operating Raman amplifiers with such high-power optical sources may cause fire hazards and damage the optical system. In this paper, we implement a hybrid optical amplifier configuration in which EDFA and Raman amplifiers are combined to exploit the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmission distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the bit error rate (BER). This hybrid amplifier configuration is implemented in a wavelength division multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pump source efficiency, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim simulation software, based on the nonlinear Schrödinger equation solved by the split-step Fourier method, with the Monte Carlo method used for estimating the BER.
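For reference, a bare-bones version of the split-step Fourier method for the scalar nonlinear Schrödinger equation (attenuation, group-velocity dispersion, Kerr nonlinearity) is sketched below; the fiber parameters and input pulse are generic SMF-like placeholders, not the values used in the OptSim model.

```python
import numpy as np

def split_step_nlse(a0, dt, length, dz, alpha, beta2, gamma):
    """Propagate a field envelope through fiber by the symmetric
    split-step Fourier method: half a nonlinear step, a full linear
    step in the frequency domain, half a nonlinear step."""
    w = 2 * np.pi * np.fft.fftfreq(a0.size, dt)
    lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz)
    a = a0.astype(complex)
    for _ in range(int(length / dz)):
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz / 2)  # Kerr, half step
        a = np.fft.ifft(np.fft.fft(a) * lin)             # loss + dispersion
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz / 2)  # Kerr, half step
    return a

# Illustrative SMF-like parameters: 0.2 dB/km loss (converted to 1/m),
# beta2 = -21 ps^2/km, gamma = 1.3 /(W km); 1 mW Gaussian pulse, 50 km.
t = np.linspace(-200e-12, 200e-12, 2**12)
pulse = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (20e-12)**2))
out = split_step_nlse(pulse, t[1] - t[0], length=50e3, dz=100,
                      alpha=0.2 / 4.343e3, beta2=-21e-27, gamma=1.3e-3)
print(np.abs(out).max())
```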

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 52
49 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band and medium-band surveys would perform equally well, as long as they provided the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance to calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification; these are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
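The sketch below illustrates the general template-fitting idea in NumPy: chi-square comparison of observed fluxes with a redshift-gridded template library, followed by a posterior-mean redshift whose error comes from p(z) itself, in the spirit of the Minimum Error Variance estimator; the toy templates, array layout, and function names are assumptions, not the authors' code.

```python
import numpy as np

def classify_and_redshift(fluxes, errors, template_grid, z_grid):
    """template_grid: array (n_templates, n_z, n_bands) of model fluxes.
    Returns the best template index, a posterior-mean redshift, and a
    redshift error derived from the probability density p(z) itself."""
    w = 1.0 / errors**2
    # Scale each template to the data (free amplitude), then chi-square
    amp = (template_grid * fluxes * w).sum(-1) / (template_grid**2 * w).sum(-1)
    chi2 = (((amp[..., None] * template_grid - fluxes) ** 2) * w).sum(-1)
    like = np.exp(-0.5 * (chi2 - chi2.min()))        # (n_templates, n_z)

    best = like.max(axis=1).argmax()                 # best-fitting class
    pz = like[best] / like[best].sum()               # p(z) for that class
    z_mean = (pz * z_grid).sum()
    z_var = (pz * (z_grid - z_mean) ** 2).sum()
    return best, z_mean, np.sqrt(z_var)

# Toy library: two templates whose colors vary with redshift
z = np.linspace(0, 2, 201)
t0 = np.stack([np.ones_like(z), 1 + z, (1 + z) ** 2], axis=1)
t1 = np.stack([np.ones_like(z), 2 - z / 2, np.ones_like(z)], axis=1)
print(classify_and_redshift(np.array([1., 2., 4.]), np.full(3, 0.1),
                            np.stack([t0, t1]), z))   # expects z near 1
```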

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 58
48 Multifunctionality of Cover Crops in South Texas: Looking at Multiple Benefits of Cover Cropping on Small Farms in a Subtropical Climate

Authors: Savannah Rugg, Carlo Moreno, Pushpa Soti, Alexis Racelis

Abstract:

Situated in deep South Texas, the Lower Rio Grande Valley (LRGV) is considered one of the most productive agricultural regions in the southern US. With the highest concentration of organic farms in the state (Hidalgo County), the LRGV has strong potential to lead in sustainable agriculture. Finding management practices that comply with organic certification and improve the health of the agroecosystem and of the farmers working the land is increasingly pertinent. Cover cropping, the intentional planting of non-cash-crop vegetation, can serve multiple functions in an agroecosystem by decreasing environmental pollutants that originate from the agroecosystem, reducing the inputs needed for crop production, and potentially decreasing on-farm costs for farmers, thereby increasing the overall sustainability of the farm. The use of cover crops on otherwise fallow land has been shown to enhance ecosystem services such as attracting native beneficial insects (pollinators), increasing nutrient availability in topsoil, preventing nutrient leaching, increasing soil organic matter, and reducing soil erosion. In this study, four cover crops (Lablab, Sudan Grass, Sunn Hemp, and Pearl Millet) were analyzed in the subtropical region of South Texas to see how their multiple functions enhance ecosystem services. The four cover crops were assessed for their potential to harbor native insects, increase soil nitrogen, increase soil organic matter, and suppress weeds. The preliminary results suggest that these subtropical varieties of cover crops have the potential to enhance ecosystem services on agricultural land in the LRGV by increasing soil organic matter (all varieties), increasing nitrogen in topsoil (Lablab, Sunn Hemp), and reducing weeds (Sudan Grass).

Keywords: cover crops, ecosystem services, subtropical agriculture, sustainable agriculture

Procedia PDF Downloads 288
47 A Study of Female Casino Dealers' Job Stress and Job Satisfaction: The Case of Macau

Authors: Xinrong Zong, Tao Zhang

Abstract:

Macau is known as the Oriental Monte Carlo, and its economy depends heavily on gambling. The dealer is the key position in the gambling industry: at the end of the fourth quarter of 2015, there were over 24,000 dealers among the 56,000 full-time employees in the industry, and more than half of them were female. The dealer, also called a 'croupier', is mainly responsible for shuffling, dealing, processing chips, rolling dice, and inspecting play. Due to the limited land and small population of Macao, the government has not allowed the hiring of foreign dealers since the gambling industry developed. Local dealers therefore enjoy special advantages but also bear high work-related stress. From the middle of last year, with the reduced gambling income and the decline of mainland gamblers as well as VIP lounges, the working time of dealers increased greatly. Many problems occurred under these conditions, such as rising work pressures, psychological pressures and family-responsibility pressures, which may affect job satisfaction as well. Because there is little research on dealer satisfaction, and a lack of studies analyzing female dealers from a feminine perspective, this study focuses on investigating the relationship between working pressure and job satisfaction from a feminine point of view. Several issues are discussed specifically: first, to understand the current situation of the working pressures and job satisfaction of female dealers of different ages; second, to examine whether there is any relationship between the working pressures and job satisfaction of female dealers of different ages; third, to determine the nature of that relationship in each age group. This paper combined a qualitative approach with a quantitative approach and selected samples by convenience sampling. The research showed that, first, female dealers of different ages face different kinds of working pressures; second, the job satisfaction of female dealers differs across ages; moreover, there is a negative correlation between working pressure and job satisfaction of female dealers in the different age groups; last but not least, working pressure has a significant negative impact on job satisfaction. The research results provide a reference for the Macau gambling business: a pattern for improving dealers' working environment and increasing employees' job satisfaction, as well as offering tourists better service, which can help attract more and more visitors through a good image of Macau gaming and tourism.

Keywords: female dealers, job satisfaction, working pressure, Macau

Procedia PDF Downloads 289