Search results for: statistical parameters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11918


818 Development of an Integrated Reaction Design for the Enzymatic Production of Lactulose

Authors: Natan C. G. Silva, Carlos A. C. Girao Neto, Marcele M. S. Vasconcelos, Luciana R. B. Goncalves, Maria Valderez P. Rocha

Abstract:

Galactooligosaccharides (GOS) are sugars with a prebiotic function that can be synthesized chemically or enzymatically, the latter route being promoted by the action of β-galactosidases. In addition to favoring the transgalactosylation reaction that forms GOS, these enzymes can also catalyze the hydrolysis of lactose. A highly studied type of GOS is lactulose, because it has therapeutic properties and is a health promoter. Among the different raw materials that can be used to produce lactulose, whey stands out as the main by-product of cheese manufacturing, and its disposal is harmful to the environment due to the residual lactose present. Its use is therefore a promising alternative to solve this environmental problem. Lactose from whey is hydrolyzed into glucose and galactose by β-galactosidases. However, in order to favor the transgalactosylation reaction, the medium must contain fructose, because this sugar reacts with galactose to produce lactulose. The glucose isomerase enzyme can be used for this purpose, since it promotes the isomerization of glucose into fructose. In this scenario, the aim of the present work was first to develop β-galactosidase biocatalysts from Kluyveromyces lactis and then to apply them in the integrated hydrolysis, isomerization (with the glucose isomerase from Streptomyces murinus), and transgalactosylation reactions, using whey as a substrate. The immobilization of β-galactosidase on chitosan previously functionalized with 0.8% glutaraldehyde was evaluated using different enzymatic loads (2, 5, 7, 10, and 12 mg/g). Subsequently, the hydrolysis and transgalactosylation reactions were studied and conducted at 50°C and 120 RPM for 20 minutes. In parallel, the isomerization of glucose into fructose was evaluated at 70°C and 750 RPM for 90 min. Afterwards, the integration of the three processes for the production of lactulose was investigated.
Among the evaluated loads, 7 mg/g was chosen because it gave the best derivative activity (44.3 U/g), this parameter being determinant for the reaction stages. The other immobilization parameters, yield (87.58%) and recovered activity (46.47%), were also satisfactory compared to the other conditions. Regarding the integrated process, 94.96% of the lactose was converted, yielding 37.56 g/L of glucose and 37.97 g/L of galactose. In the isomerization step, a conversion of 38.40% of the glucose was observed, giving a concentration of 12.47 g/L of fructose. The transgalactosylation reaction produced 13.15 g/L of lactulose after 5 min. In the integrated process, however, no lactulose was formed; instead, other GOS were produced. The high galactose concentration in the medium probably favored the synthesis of these other GOS. The integrated process therefore proved feasible for the production of prebiotics. In addition, this process can be economically viable due to the use of an industrial residue as a substrate, but a more detailed investigation of the transgalactosylation reaction is necessary.
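The immobilization yield and conversion percentages quoted above follow from standard percentage definitions; a minimal sketch in Python (the offered-activity and initial-lactose numbers are assumed for illustration, not taken from the study):

```python
# Standard percentage definitions behind the reported figures; the 100 U
# offered activity and 80 g/L initial lactose are assumed, not measured values.
def immobilization_yield(offered_u, residual_u):
    # percentage of offered enzyme activity removed from the supernatant
    return 100.0 * (offered_u - residual_u) / offered_u

def substrate_conversion(initial_g_l, residual_g_l):
    # percentage of substrate (e.g. lactose) consumed during the reaction
    return 100.0 * (initial_g_l - residual_g_l) / initial_g_l

print(round(immobilization_yield(100.0, 12.42), 2))  # → 87.58
print(round(substrate_conversion(80.0, 4.03), 2))
```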

Keywords: beta-galactosidase, glucose-isomerase, galactooligosaccharides, lactulose, whey

Procedia PDF Downloads 119
817 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through thickness reinforcement and near net shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture which can often lead to under-performance or changes of the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures makes it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness as well as analysing the accuracy of available software to predict how 3D woven preforms behave under compression. To achieve this, 3D preforms are modelled and compression simulated in Wisetex with varying architectures of binder style, pick density, thickness and tow size. These architectures have then been woven with samples dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared alongside the Wisetex models to determine accuracy of the prediction and identify architecture parameters that can affect the preform compressibility and stability. 
Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation but the highest overall stability under compression, while layer-to-layer binders showed a reduction in the fibre crimp of the binder. In general, the simulations compared reasonably with the experimental results; however, deviation is evident due to assumptions present within the modelled results.
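The dependence of fibre volume fraction on compressed preform thickness, which drives the need for compression described above, can be illustrated with the usual areal-weight relation; the numbers below are illustrative (e.g. a glass-fibre preform), not the study's measurements:

```python
# Fibre volume fraction (Vf) of a compressed preform from fabric areal weight,
# layer count, fibre density, and compressed thickness; all values here are
# illustrative assumptions, not the study's measured data.
def fibre_volume_fraction(areal_weight_g_m2, n_layers, fibre_density_g_cm3,
                          thickness_mm):
    # Vf = total fibre mass per unit area / (fibre density * thickness)
    return (areal_weight_g_m2 * n_layers) / (fibre_density_g_cm3 * 1000.0 * thickness_mm)

for t in (3.0, 2.5, 2.0):  # more compression -> thinner preform -> higher Vf
    print(t, round(fibre_volume_fraction(800.0, 4, 2.54, t), 3))
```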

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 119
816 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory

Authors: Liqin Zhang, Liang Yan

Abstract:

This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered a single agent connected through a fixed, undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, this paper presents a distributed leader-following method. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by optimizing the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, simplified models such as the single integrator and double integrator are unrealistic. Moreover, previous algorithms require the leader's acceleration to be available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which dispenses with the leader's acceleration. The presented scheme optimizes synchronization performance as well as providing satisfactory robustness. Furthermore, although existing algorithms can obtain a stable synchronous system, the obtained system may encounter disturbances that destroy the synchronization. To address this challenging technological problem, a state-dependent switching approach is introduced. In the presence of unmeasured angular speeds and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader during a sufficiently large disturbance, which illustrates the accuracy of the synchronization control.
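The leader-following consensus idea can be sketched for the simplest single-integrator case; this is a deliberate simplification of the paper's observer-based variable-structure scheme, and the graph, gain, and speeds below are invented:

```python
import numpy as np

# Single-integrator leader-following consensus on a fixed, undirected graph.
# A simplification of the abstract's scheme; all numbers here are invented.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)   # adjacency of 4 follower motors (a ring)
b = np.array([1.0, 0.0, 0.0, 0.0])   # only motor 0 receives the leader signal
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian

leader = 100.0                       # constant leader reference speed (rad/s)
w = np.zeros(4)                      # follower speeds, all starting at rest
dt, k = 0.01, 1.0
for _ in range(4000):
    # u_i = -k * sum_j a_ij (w_i - w_j) - k * b_i (w_i - leader)
    u = -k * (L @ w) - k * b * (w - leader)
    w = w + dt * u                   # forward-Euler integration

print(np.round(w, 2))                # all followers settle near the leader speed
```

Because the pinned Laplacian is positive definite, every follower converges to the leader's speed even though only motor 0 hears it directly.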

Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization

Procedia PDF Downloads 110
815 Comparative Proteomic Profiling of Planktonic and Biofilms from Staphylococcus aureus Using Tandem Mass Tag-Based Mass Spectrometry

Authors: Arifur Rahman, Ardeshir Amirkhani, Honghua Hu, Mark Molloy, Karen Vickery

Abstract:

Introduction and Objectives: Staphylococcus aureus and coagulase-negative staphylococci comprise approximately 65% of infections associated with medical devices and are well known for their biofilm-forming ability. Biofilm-related infections are extremely difficult to eradicate owing to their high tolerance to antibiotics and host immune defences. Currently, there is no efficient method for early biofilm detection. A better understanding enabling the detection of biofilm-specific proteins in vitro and in vivo can be achieved by studying planktonic cells and different growth phases of biofilms using a proteome analysis approach. Our goal was to construct a reference map of planktonic and biofilm-associated proteins of S. aureus. Methods: The S. aureus reference strain (ATCC 25923) was used to grow 24-hour planktonic cultures, 3-day wet biofilm (3DWB), and 12-day wet biofilm (12DWB). Bacteria were grown in tryptic soy broth (TSB) liquid medium. Late-logarithmic-phase bacteria were used for the planktonic preparation, and the Centers for Disease Control (CDC) biofilm reactor was used to grow the 3-day and 12-day hydrated biofilms, respectively. Samples were subjected to reduction, alkylation, and digestion steps prior to multiplex labelling using the Tandem Mass Tag (TMT) 10-plex reagent (Thermo Fisher Scientific). The labelled samples were pooled and fractionated by high-pH RP-HPLC, followed by loading of the fractions on a nanoflow UPLC system (Eksigent UPLC system, AB SCIEX). Mass spectrometry (MS) data were collected on an Orbitrap Elite (Thermo Fisher Scientific) mass spectrometer. Protein identification and relative quantitation of protein levels were performed using Proteome Discoverer (version 1.3, Thermo Fisher Scientific). After the extraction of protein ratios with Proteome Discoverer, additional processing and statistical analysis were done using the TMTPrePro R package.
Results and Discussion: The present study showed that a considerable proteomic difference exists between planktonic cells and biofilms of S. aureus. We identified 1636 total extracellular secreted proteins, of which 350 and 137 proteins of 3DWB and 12DWB, respectively, showed significant abundance variation from the planktonic preparation. Proteins up-regulated in both 3DWB and 12DWB included the extracellular matrix-binding protein ebh, enolase, transketolase, triosephosphate isomerase, chaperonin, peptidase, pyruvate kinase, hydrolase, aminotransferase, ribosomal proteins, acetyl-CoA acetyltransferase, DNA gyrase subunit A, glycine glycyltransferase, and others. Conversely, proteins down-regulated in both 3DWB and 12DWB included alpha- and delta-hemolysin, lipoteichoic acid synthase, enterotoxin I, serine protease, lipase, clumping factor B, regulatory protein Spx, phosphoglucomutase, and others. In addition, we identified a large percentage of hypothetical proteins, including unique proteins. A comprehensive knowledge of the planktonic and biofilm-associated proteins identified in S. aureus will therefore provide a basis for future studies on the development of vaccines and diagnostic biomarkers. Conclusions: In this study, we constructed an initial reference map of planktonic and various-growth-phase biofilm-associated proteins, which might be helpful for diagnosing biofilm-associated infections.
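The up/down-regulation calls above come from comparing relative protein abundances; a toy log2 fold-change screen on normalized TMT reporter ratios sketches the idea. The protein names are from the abstract, but the ratio values are invented; the study itself used Proteome Discoverer plus the TMTPrePro R package:

```python
import math

# Toy fold-change screen: |log2FC| > 1 flags a 2-fold abundance change.
# Ratio values are invented for illustration, not the study's data.
planktonic = {"enolase": 1.00, "alpha_hemolysin": 1.00}
biofilm_3dwb = {"enolase": 2.75, "alpha_hemolysin": 0.31}

for protein in planktonic:
    log2fc = math.log2(biofilm_3dwb[protein] / planktonic[protein])
    call = "up" if log2fc > 1 else "down" if log2fc < -1 else "unchanged"
    print(f"{protein}: log2FC = {log2fc:+.2f} ({call} in 3-day biofilm)")
```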

Keywords: bacterial biofilms, CDC bioreactor, S. aureus, mass spectrometry, TMT

Procedia PDF Downloads 154
814 Assessing the Lifestyle Factors, Nutritional and Socioeconomic Status Associated with Peptic Ulcer Disease: A Cross-Sectional Study among Patients at the Tema General Hospital of Ghana

Authors: Marina Aferiba Tandoh, Elsie Odei

Abstract:

Peptic ulcer disease (PUD) is amongst the commonest gastrointestinal problems that require emergency treatment in order to preserve life. The prevalence of PUD is increasing within the Ghanaian population, deepening the need to identify factors associated with its occurrence. This cross-sectional study assessed the nutritional status and the socioeconomic and lifestyle factors associated with PUD among patients attending the Out-Patient Department of the Tema General Hospital of Ghana. A food frequency questionnaire and a three-day, 24-hour recall were used to assess the dietary intakes of study participants. A standardized questionnaire was used to obtain information on the participants' socio-demographic characteristics, lifestyle, and medical history. The data were analyzed using SPSS version 22. The mean age of study participants was 32.8±15.41 years. The proportion of females (61.4%) was significantly higher than that of males (38.6%) (p < 0.001). All participants had received some form of education, with tertiary education being the most common (52.6%). The majority managed their condition with medications only (86%), while 10.5% managed it with a combination of medications and diet; the rest used dietary counseling only (1.8%), surgery only (1.8%), or herbal medicines (29.3%), which were made at home (7.2%) or bought from a medical store (10.8%). Most of the participants experienced a recurrence of the disease (42.1%). Among those who had experienced recurrences, episodes were triggered by eating acidic foods (1.8%), eating bigger portions (1.8%), starving themselves (1.8%), or stress (1.8%); others had triggers when they took certain medications (1.8%) or ate too much pepper (1.8%). About 49% of the participants were either overweight or obese with a recurrence of PUD (p>0.05). Obese patients had the highest rate of PUD recurrence (41%). Drinking alcohol was significantly associated with the recurrence of PUD (χ2=5.243, p=0.026).
Other lifestyle factors, such as weed smoking, fasting, and the use of herbal medicines and NSAIDs, did not have any significant association with disease recurrence. There was no significant correlation between the various dietary patterns and anthropometric parameters, except for dietary pattern one (salty snacks, regular soft drinks, milk, sweetened yogurt, ice cream, and cooked vegetables), which had a positive correlation with weight (p=0.002) and BMI (p=0.038). PUD patients should pursue weight reduction and reduced alcohol intake as measures to control recurrence of the disease. Nutrition education among this population must be promoted to minimize the recurrence of PUD.
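The reported alcohol association (χ²=5.243, p=0.026) is a standard 2x2 chi-square test of independence; a from-scratch sketch follows, with hypothetical counts since the abstract does not give the raw contingency table:

```python
# Chi-square statistic for a 2x2 table (e.g. alcohol use vs. PUD recurrence),
# built from the expected-count formula E_ij = row_i * col_j / n.
# The counts below are hypothetical, not the study's raw data.
def chi_square_2x2(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

obs = [[14, 6],    # drinkers: recurrence / no recurrence (hypothetical)
       [10, 27]]   # non-drinkers: recurrence / no recurrence (hypothetical)
print(round(chi_square_2x2(obs), 2))  # compare against 3.84 (df=1, alpha=0.05)
```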

Keywords: Dietary patterns, lifestyle factors, nutritional status, peptic ulcer disease

Procedia PDF Downloads 66
813 Micro-Milling Process Development of Advanced Materials

Authors: M. A. Hafiz, P. T. Matevenga

Abstract:

Micro-level machining of metals is a developing field which has been shown to be a promising approach to produce features on parts in the range of a few to a few hundred microns with acceptable machining quality. It is known that the mechanics (i.e., the material removal mechanism) of micro-machining and conventional machining differ significantly due to the scaling effects associated with tool geometry, tool material, and workpiece material characteristics. Shape memory alloys (SMAs) are metal alloys which display two exceptional properties: pseudoelasticity and the shape memory effect (SME). Nickel-titanium (NiTi) alloys are one such unique family of metal alloys. NiTi alloys are known to be difficult-to-cut materials, specifically by conventional machining techniques, due to their distinctive properties. Their high ductility, large amount of strain hardening, and unusual stress-strain behaviour are the main properties responsible for their poor machinability in terms of tool wear and workpiece quality. The motivation of this research work was to address the challenges of micro-machining combined with those of machining NiTi alloys, both of which can affect the desired performance level of the machining outputs. To explore the significance of a range of cutting conditions on surface roughness and tool wear, machining tests were conducted on NiTi. The influence of different cutting conditions and cutting tools on surface and sub-surface deformation in the workpiece was investigated. A design-of-experiments strategy (Taguchi L9 array) was applied to determine the key process variables. The dominant cutting parameters were determined by analysis of variance. These findings showed that feed rate was the dominant factor for surface roughness, whereas depth of cut was the dominant factor for tool wear.
The lowest surface roughness was achieved at a feed rate equal to the cutting edge radius, whereas the lowest flank wear was observed at the lowest depth of cut. Repeated machining trials have yet to be carried out in order to observe tool life, sub-surface deformation, and strain-induced hardening, which are also expected to be among the critical issues in the micro-machining of NiTi. The machining performance using different cutting fluids and strategies has yet to be studied.
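The L9 screening described above can be sketched as a main-effects analysis over the standard 3^4 orthogonal array; the Ra responses below are invented so that feed rate dominates, mirroring the reported finding, and are not measured data:

```python
# Standard Taguchi L9 (3^4) orthogonal array with a main-effects analysis.
# The Ra responses are invented for illustration, not the study's measurements.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
ra = [0.42, 0.55, 0.71, 0.38, 0.52, 0.66, 0.35, 0.49, 0.63]  # hypothetical Ra (um)
factors = ["cutting_speed", "feed_rate", "depth_of_cut", "tool"]

effects = {}
for f, name in enumerate(factors):
    level_means = [
        sum(r for run, r in zip(L9, ra) if run[f] == lvl) / 3 for lvl in (1, 2, 3)
    ]
    effects[name] = max(level_means) - min(level_means)  # main-effect range

# the factor with the largest spread of level means dominates the response
dominant = max(effects, key=effects.get)
print(dominant, {k: round(v, 3) for k, v in effects.items()})
```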

Keywords: nickel titanium, micro-machining, surface roughness, machinability

Procedia PDF Downloads 327
812 The Incidence of Postoperative Atrial Fibrillation after Coronary Artery Bypass Grafting in Patients with Local and Diffuse Coronary Artery Disease

Authors: Kamil Ganaev, Elina Vlasova, Andrei Shiryaev, Renat Akchurin

Abstract:

De novo atrial fibrillation (AF) after coronary artery bypass grafting (CABG) is a common complication. To date, there are no data on the possible effect of diffuse coronary artery lesions on the incidence of postoperative AF. Methods. Patients operated on-pump under hypothermic conditions during the calendar year 2020 were studied. Inclusion criteria were isolated CABG and achievement of complete myocardial revascularization. Patients with a history of AF, moderate or severe valve dysfunction, hormonal thyroid pathology, or pre-existing congestive heart failure (CHF), as well as patients who developed perioperative complications (myocardial infarction, acute heart failure, massive blood loss) and those who died, were excluded. Thus, 227 patients were included; mean age 65±9 years; 69% were men. 89% of patients had three-vessel coronary artery disease; the remainder had two-vessel disease. Mean LV size: 3.9±0.3 cm; indexed LV volume: 29.4±5.3 mL/m2. Two groups were considered: D (n=98), patients with diffuse coronary artery disease, and L (n=129), patients with local coronary artery disease. Clinical and demographic characteristics in the groups were comparable. Rhythm assessment: continuous bedside ECG monitoring up to 5 days; ECG CT at 5-7 days after CABG; daily routine ECG registration. Follow-up period: the postoperative hospital stay. Results. The median follow-up period was 9 (7;11) days. Postoperative atrial fibrillation (POAF) was detected in 61/227 (27%) patients: 34/98 (35%) in group D versus 27/129 (21%) in group L; p<0.05. Moreover, while the revascularization indices in groups D and L (3.9±0.7 and 3.8±0.5, respectively) were equal, the mean cardiopulmonary bypass (CPB) time (107±27 vs. 80±13 min) and the mean ischemic time (67±17 vs. 55±11 min) were significantly longer in group D (p<0.05).
However, a separate analysis of these parameters in patients with and without AF did not reveal any significant differences within group D (CPB time 99±21.2 min, ischemic time 63±12.2 min) or within group L (CPB time 88±13.1 min, ischemic time 58.7±13.2 min). Conclusion. With diffuse coronary lesions, the incidence of AF during the hospital period after isolated CABG clearly increases. To better understand the role of severe coronary atherosclerosis in the development of POAF, it is necessary to distinguish the influence of organic features of the atrial and ventricular myocardium (as a consequence of chronic coronary disease) from the features of surgical correction in diffuse coronary lesions.
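The group difference reported above (35% vs. 21% POAF, p<0.05) can be checked with a two-proportion z-test on the stated counts; this is one standard test for such data, not necessarily the one the authors used:

```python
import math

# Pooled two-proportion z-test on the reported POAF counts:
# diffuse group 34/98 vs. local group 27/129.
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

z = two_proportion_z(34, 98, 27, 129)
print(round(z, 2))  # → 2.32; |z| > 1.96 corresponds to p < 0.05 (two-sided)
```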

Keywords: atrial fibrillation, diffuse coronary artery disease, coronary artery bypass grafting, local coronary artery disease

Procedia PDF Downloads 196
811 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program

Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner

Abstract:

Introductory Statement: The onset of the COVID-19 pandemic demanded an unprecedented need for the rapid development, dispersal, and application of infection testing. However, despite the impressive mobilization of resources, individuals were incredibly limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of basic methodologies: 18 Patients in a COVID-19 Remote Patient Monitoring Program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. 
One participant shared that because she never tested positive, she was shielded from her anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients' illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.

Keywords: accessibility, COVID-19, recovery, testing

Procedia PDF Downloads 176
810 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening

Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan

Abstract:

Background: New reliable and robust techniques to assess biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify key physicochemical parameters of NMs, which are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre Repository were used: NM-302 and NM-300k (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM- 203 (SiO₂: 18.3 nm and 24.7 nm amorphous, respectively), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 hrs. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, Trypan Blue Dye Exclusion assay, and dark-field microscopy were used to validate the IFC method. Results: Spherical Ag particles (NM-300K) showed the highest toxic effect on U937 cells followed by ZnO (NM-111 ≥ NM-110) particles. Silica particles were moderate to non-toxic at all used concentrations under these conditions. A higher toxic effect was seen with smaller sized TiO2 particles (NM-101) compared to their larger analogues (NM-100). No interferences between the IFC and the used NMs were seen. Uptake and internalization of NMs were observed after 24 hours exposure, confirming actual NM-cell interactions. 
Conclusion: Results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, and it proved to be less prone to nano-related interference issues than some traditional toxicity assays. Furthermore, this label-free and novel technique shows good potential for up-scaling towards automated high-throughput screening and for future NM toxicity assessment. This work was supported by the EC FP7 NANoREG project (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.

Keywords: cytotoxicity, high-throughput, impedance, nanomaterials

Procedia PDF Downloads 344
809 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil, and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450°C in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900°C and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive-flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) to provide the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.)
for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
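The Gibbs-minimization step can be sketched with a constrained optimizer in place of explicit Lagrange multipliers: minimize the total Gibbs function subject to element balances. The dimensionless chemical potentials below are placeholders, not database values; the structure, not the numbers, is the point:

```python
import numpy as np
from scipy.optimize import minimize

# Zero-dimensional equilibrium sketch: minimize G/RT = sum n_i*(g0_i + ln(n_i/n))
# subject to C, H, O element balances. g0 values are illustrative placeholders.
species = ["CO", "CO2", "H2", "H2O", "CH4"]
g0 = np.array([-32.0, -61.0, -15.0, -40.0, -16.0])  # assumed mu0/RT values
E = np.array([[1, 1, 0, 0, 1],                      # C atoms per species
              [0, 0, 2, 2, 4],                      # H atoms per species
              [1, 2, 0, 1, 0]], float)              # O atoms per species
feed = E @ np.array([0, 0, 0, 1, 1], float)         # 1 mol H2O + 1 mol CH4

def gibbs(n):
    n = np.clip(n, 1e-10, None)
    return float(n @ (g0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(5, 0.2),
               bounds=[(1e-10, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: E @ n - feed},
               method="SLSQP")
for s, y in zip(species, res.x / res.x.sum()):
    print(f"{s}: {y:.3f}")  # equilibrium molar fractions
```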

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 503
808 Strategic Analysis of Loss of Urban Heritage in Bhopal City Due to Infrastructure Development

Authors: Aishwarya K. V., Shreya Sudesh

Abstract:

Built along the edges of an 11th-century CE man-made lake, the city of Bhopal has stood witness to historic layers dating back to Palaeolithic times: early and medieval kingdoms ranging from the Parmaras and Pratiharas to the tribal Gonds, then the Begum-Nawabs, before finally becoming the capital of Madhya Pradesh post-Independence. The lake, more popularly called the Upper Lake, was created by King Raja Bhoj of the Parmara dynasty in 1010 AD, when he constructed a bund wall across the Kolans river. Atop this bund wall lies the Kamlapati Mahal, which was part of the royal enclosure built in 1702 belonging to the Gond kingdom. The Mahal is the epicentre of development in the city because it lies at the centre of the axis joining the old core and the new city. Rapid urbanisation descended upon the city once it became the administrative capital of Madhya Pradesh, a newly formed state of an independent India. Industrial pockets were set up, and refugees from the India-Pakistan Partition settled in various regions of the city. To cater to this sudden growth, there was a boom in infrastructure development in the late twentieth century, which included precarious decisions in the handling of heritage sites, causing the destruction of significant parts of the historic fabric. This practice continues to this day, as buffer/protected zones are breached through exemptions and the absence of robust regulations allows further deterioration of urban heritage. The aim of the research is to systematically study the city's urban infrastructure development and its adverse effect on the existing heritage fabric. The paper also attempts to study the parameters involved in preparing the city's Masterplan and other development projects. The research follows a values-led approach to studying the heritage fabric, in which the significance of a place is assessed based on the values attributed by stakeholders.
This approach will involve collection and analysis of site data, assessment of the significance of the site and listing of potential. The study would also attempt to arrive at a solution to deal with urban development along with the protection of the heritage fabric.

Keywords: heritage management, infrastructure development, urban conservation, urban heritage

Procedia PDF Downloads 212
807 The Impact of Climate Change on Typical Material Degradation Criteria over Timurid Historical Heritage

Authors: Hamed Hedayatnia, Nathan Van Den Bossche

Abstract:

Understanding the ways in which climate change accelerates or slows the process of material deterioration is the first step towards assessing adaptive approaches for the conservation of historical heritage. Analysing the effects of climate change on degradation risk assessment parameters, such as freeze-thaw cycles and wind erosion, is also key when considering mitigating actions. Given the vulnerability of cultural heritage to climate change, the impact of this phenomenon on material degradation criteria was studied, focusing on brick masonry walls in Timurid heritage located in Iran. The Timurids were the final great dynasty to emerge from the Central Asian steppe. Through their patronage, the eastern Islamic world, particularly Khorasan with Mashhad and Herat, became a prominent cultural center. Goharshad Mosque is a mosque in Mashhad, in the Razavi Khorasan Province of Iran, built by order of Empress Goharshad, the wife of Shah Rukh of the Timurid dynasty, in 1418 CE. Choosing an appropriate regional climate model was the first step: the outputs of two climate models, ALARO-0 and REMO, were analysed to find out which is better adapted to the area. To validate the quality of the models, model data were compared with observations in four different climate zones in Iran over a period of 30 years. The impacts of the projected climate change were evaluated up to 2100. To determine the material specifications of Timurid bricks, standard brick samples from a Timurid mosque were studied: determination of the water absorption coefficient, the diffusion properties, the real density and the total porosity was performed to characterise the brick masonry walls, as needed for running HAM (heat, air and moisture) simulations.
Results from the analysis showed that the threatening factors differ between climate zones, but the most effective factors across Iran are extreme temperature increase and erosion. In the north-western region of Iran, one of the key factors is wind erosion; in the north, rainfall erosion and mould growth risk are the key factors; and in the north-eastern part, where the case study is located, the important parameter is wind erosion.

Keywords: brick, climate change, degradation criteria, heritage, Timurid period

Procedia PDF Downloads 105
806 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model during meetings with the professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a mixed quantitative-qualitative study. For the qualitative dimension, focus groups with an inductive approach were used. To evaluate the results of the qualitative study, quantitative assessment of two of the four phases and of the seven phases of the research was conducted. Purposive, stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT) analysis, and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted for surgery in a day-clinic ward of a public hospital from October 2012 to June. Statistical Analysis Used: Qualitative data were analysed through content analysis, and quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was developed by focus discussion group (FDG) members in five main phases. Then, following the FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in tables.
The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant risk limit (RPN > 250). Over a 3-month period, patient misidentification incidents were the most frequently reported events. The RPN criteria of the misidentification events were compared, and varying RPN numbers for the three reported misidentification events were determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for a proposed improvement action; the most important causes were a lack of planning of the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only a retrospective or a prospective tool for risk management therefore does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
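The RPN screening step described above can be sketched in a few lines. This is a minimal illustration, not the study's own scoring tables: the classic FMEA index is RPN = severity × probability × preventability, and the abstract's threshold of RPN > 250 flags significant modes. The failure-mode names and ratings below are invented for illustration.

```python
# Hedged sketch of healthcare FMEA risk screening.
# RPN = severity * probability * preventability (each rated 1-10 here);
# modes with RPN > 250 are treated as significant, as in the abstract.
# Failure modes and ratings are illustrative, not study data.

def rpn(severity, probability, preventability):
    """Risk Priority Number for one failure mode."""
    return severity * probability * preventability

failure_modes = {
    "patient misidentification": (9, 6, 5),
    "wrong-side surgery":        (10, 3, 9),
    "delayed antibiotics":       (6, 4, 5),
}

THRESHOLD = 250  # significance limit used in the study
significant = {name: rpn(*scores)
               for name, scores in failure_modes.items()
               if rpn(*scores) > THRESHOLD}
print(significant)
```

With these toy ratings, misidentification and wrong-side surgery both score 270 and pass the threshold, while the third mode (120) does not.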

Keywords: failure mode and effects analysis, risk management, root cause analysis, model

Procedia PDF Downloads 229
805 Tension-Free Vaginal Tape Secur (TVT Secur) versus Tension-Free Vaginal Tape-Obturator (TVT-O) from inside to outside in Surgical Management of Genuine Stress Urinary Incontinence

Authors: Ibrahim Mohamed Ibrahim Hassanin, Hany Hassan Mostafa, Mona Mohamed Shaban, Ahlam El Said Kamel

Abstract:

Background: New so-called minimally invasive devices have been developed to limit groin pain after sling placement for the treatment of stress urinary incontinence (SUI) and to minimize the risk of postoperative pain and organ perforation. A new generation of suburethral slings has been described that avoids the skin incisions needed to pull out and tension the sling. Evaluation of this device in prospective short-term series has shown controversial results compared with other tension-free techniques. The aim of this study is to compare success rates and complications of the tension-free vaginal tape Secur (TVT Secur) and the trans-obturator suburethral tape inside-out technique (TVT-O) for the treatment of SUI. Materials and Methods: Fifty patients with genuine SUI were divided into two groups: group S (n=25) was operated on using TVT Secur and group O (n=25) using the trans-obturator suburethral tape inside-out technique (TVT-O). Success rate, quality of life and postoperative complications such as groin pain, urgency, urinary retention and vaginal tape erosion were recorded in both groups at one, three, and six months after surgery. Results: Regarding the objective cure rate at one, three and six months, there was a significant difference between group S (56%, 64%, and 60%) and group O (80%, 88%, and 88%), respectively (P < 0.05). Regarding the subjective cure rate at one, three and six months, there was a significant difference between group S (44%, 44%, and 48%) and group O (76%, 80%, and 80%), respectively (P < 0.05). Quality of life (QoL) parameters improved significantly in cured patients, with no difference between the groups. Regarding complications, group O had a higher frequency of complications than group S: groin pain (12% vs 12%, P = 0.05), urgency (4% (1 case) vs 0%), urinary retention (8% (2 cases) vs 0%), and vaginal tape erosion (4% (1 case) vs 0%). No cases were complicated by wound infection.
Conclusion: Compared with TVT Secur, TVT-O showed higher subjective and objective cure rates after six months but a higher rate of complications. The two techniques were comparable in the improvement of quality of life after surgery.
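The abstract reports P < 0.05 for the six-month cure-rate comparisons without naming the test used. A two-proportion z-test is one plausible reconstruction; the sketch below applies it, under that assumption, to the reported six-month objective cure rates (60% in group S vs 88% in group O, n = 25 per arm).

```python
# Hedged sketch: two-proportion z-test of the kind that could underlie
# the reported significance. The choice of test is our assumption.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-sample z-test for proportions; returns (z, two-sided p)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

z, p = two_proportion_z(15, 25, 22, 25)          # 60% vs 88% cured
print(round(z, 2), round(p, 3))
```

For these counts the test gives z ≈ -2.26 and p ≈ 0.024, consistent with the reported P < 0.05.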

Keywords: stress urinary incontinence, trans-vaginal tape-obturator, TVT Secur, TVT-O

Procedia PDF Downloads 341
804 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs, and identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, affecting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges that perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders' decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-method quantitative design: a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers.
Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson's correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers report a strong negative influence on the security of accessing information and a negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions; the moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors. In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
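The per-tier correlation step described above can be sketched as follows. Only the method (Pearson's r between an information-distortion score and a quality-dimension score) follows the abstract; the Likert-style responses below are invented for illustration.

```python
# Hedged sketch of the Pearson-correlation step in the analysis
# pipeline. Survey scores are illustrative, not study data.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical responses from one supplier tier:
distortion = [2, 3, 4, 5, 5, 6, 7]   # perceived information distortion
timeliness = [1, 3, 3, 4, 6, 5, 7]   # rated timeliness dimension
print(round(pearson_r(distortion, timeliness), 2))
```

These toy vectors give r ≈ 0.93, the kind of strong positive correlation the abstract reports for Tier 2 suppliers.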

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 40
803 Groundwater Influences Wellbeing of Farmers from Semi-Arid Areas of India: Assessment of Subjective Wellbeing

Authors: Seemabahen Dave, Maria Varua, Basant Maheshwari, Roger Packham

Abstract:

Declining groundwater levels and quality are acknowledged to affect the wellbeing of farmers, especially those located in semi-arid regions where groundwater is the only source of water for domestic and agricultural use. Further, previous studies have identified the need to examine the quality of life of farmers beyond economic parameters, and to shift rural development policy goals towards the perspective of beneficiaries. To address these gaps, this paper attempts to ascertain the subjective wellbeing of farmers in two semi-arid regions of India. The study employs the integrated conceptual framework for the assessment of individual and regional subjective wellbeing developed by Larson (2009) in Australia. The method integrates three domains, i.e., society, the natural environment and economic services, consisting of 37 wellbeing factors. The original set of 27 revised wellbeing factors identified by John Ward is further revised in the current study to make it more region-specific. Researchers in past studies have generally selected wellbeing factors based on the literature and assigned weights arbitrarily. In contrast, the present methodology takes the unique approach of asking respondents to identify the factors most important to their wellbeing and assigning weights of importance based on their responses. This minimises selection bias and assesses wellbeing from the farmers' perspective. The primary objectives of this study are to identify key wellbeing attributes and to assess the influence of groundwater on the subjective wellbeing of farmers. This paper presents findings from 507 randomly chosen farmers from 11 villages in two watershed areas of Rajasthan and Gujarat, India, surveyed using a structured face-to-face questionnaire. The results indicate that significant differences exist in the ranking of wellbeing factors at the individual, village and regional levels.
The top five most important factors in the study areas are electricity, irrigation infrastructure, housing, land ownership, and income. However, respondents are also most dissatisfied with these factors and correspondingly perceive a strong influence of groundwater on them. The results thus indicate that interventions improving groundwater availability and quality would greatly improve farmers' satisfaction with the wellbeing factors they identify as most important.
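The respondent-weighted scoring implied by the framework above can be sketched simply: each respondent rates satisfaction with a factor and assigns it an importance weight, and the individual index is the weight-normalised sum. The factor names and numbers below are illustrative assumptions, not survey data.

```python
# Hedged sketch of a respondent-weighted subjective-wellbeing index
# in the spirit of the Larson (2009) framework. Values are invented.

def wellbeing_index(ratings, weights):
    """Weighted mean satisfaction; weights need not sum to 1."""
    total_w = sum(weights.values())
    return sum(ratings[f] * weights[f] for f in ratings) / total_w

ratings = {"electricity": 2, "irrigation": 3, "housing": 4,
           "land ownership": 5, "income": 2}          # satisfaction, 1-5
weights = {"electricity": 9, "irrigation": 8, "housing": 6,
           "land ownership": 7, "income": 10}         # importance, 1-10
print(round(wellbeing_index(ratings, weights), 2))
```

Because each respondent supplies their own weights, the same satisfaction profile can yield different indices for different farmers, which is what allows the individual-, village- and regional-level comparisons reported above.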

Keywords: groundwater, farmers, semi-arid regions, subjective wellbeing

Procedia PDF Downloads 238
802 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)

Authors: Ali Pourkazemi

Abstract:

The transient radar method (TRM) is a non-destructive method introduced by the authors a few years ago. It can be classified as a wave-based non-destructive testing (NDT) method usable over a wide frequency range, although it requires only a narrow band, chosen between a few GHz and a few THz depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately but also blindly, meaning it requires no prior knowledge of the sample. For multi-layer structures, TRM can not only detect changes in any parameter within the structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties and the thicknesses of individual building insulation materials, as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied measure 30 cm x 50 cm, with thicknesses ranging from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band, real-time, free-space sensing method with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction, rubber, piping, wind energy, and automotive industries, biotechnology, the food industry, pharmaceuticals, etc. Detection of metallic and plastic pipes, wires, etc. through or behind walls is a specific application for the construction industry.
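For intuition about the time-of-flight principle behind such measurements, a simplified single-layer relation is sketched below: the round-trip echo delay through a dielectric of relative permittivity eps_r gives the thickness as d = c·Δt / (2·√eps_r). This is a textbook radar-NDT relation, not the blind TRM algorithm itself, and the numbers are ours.

```python
# Illustrative time-of-flight thickness relation for a single layer.
# The actual blind TRM algorithm is more involved; values are assumed.
from math import sqrt

C = 299_792_458.0  # speed of light in vacuum, m/s

def thickness_from_delay(dt_seconds, eps_r):
    """Layer thickness from round-trip delay and relative permittivity."""
    return C * dt_seconds / (2 * sqrt(eps_r))

# A 4 cm insulation slab with eps_r = 4 delays the echo by about 0.5 ns:
dt = 2 * 0.04 * sqrt(4) / C
print(round(thickness_from_delay(dt, 4) * 100, 2), "cm")
```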

Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies

Procedia PDF Downloads 53
801 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include avoidance of human bias and identification of important factors affecting process quality, which allows the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, puts resin on glass cloth, heats it in a drying oven, and produces prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are manually inspected, incurring heavy cost in terms of time and money, which makes the process a good candidate for VM application. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw material variables, and environment variables. The actual process data was obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with high enough accuracy. They also provided us with information on "important" predictor variables, some of which the process engineers had already been aware of, and the rest of which they had not. They were excited to find the new insights that the models revealed and set out to do further analysis on them to derive process control implications.
T/W turned out not to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; an effort therefore has to be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models allow one to reduce the cost associated with actual metrology, and they reveal insights both into the factors affecting the important quality factors and into the limits of our current understanding of the treating process.
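The core VM idea above, fitting a model that maps logged process variables to a measured quality factor and then predicting that factor for new runs without physical inspection, can be sketched with a one-variable least-squares fit. The study does not specify its learning algorithms, so simple linear regression stands in here, and the oven-temperature/viscosity data are invented.

```python
# Hedged sketch of virtual metrology for the treating process:
# predict M/V from a process variable instead of measuring it.
# One-variable least squares is a stand-in; data are illustrative.

def fit_line(x, y):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

oven_temp = [170, 175, 180, 185, 190]   # process variable (degC)
min_visc  = [520, 500, 480, 460, 440]   # measured M/V for training runs

m, b = fit_line(oven_temp, min_visc)
predicted = m * 182 + b                 # "virtual" M/V for a new run
print(round(predicted, 1))
```

Once trained, such a model replaces the manual inspection for routine runs, with physical metrology retained only for periodic model validation.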

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 339
800 Scrutinizing the Effective Parameters on Cuttings Movement in Deviated Wells: Experimental Study

Authors: Siyamak Sarafraz, Reza Esmaeil Pour, Saeed Jamshidi, Asghar Molaei Dehkordi

Abstract:

Cuttings transport is one of the major problems in directional and extended-reach oil and gas wells. Lack of sufficient attention to this issue may bring troubles such as difficulty running casing, stuck pipe, excessive torque and drag, hole pack-off, bit wear, a decreased rate of penetration (ROP), increased equivalent circulating density (ECD), and logging problems. Since it is practically impossible to directly observe the behavior of deep wells, a test setup was designed to investigate cuttings transport phenomena, and this experimental work scrutinises the behavior of the variables affecting it. The setup contained a 17-foot test section incorporating a 3.28-foot transparent glass pipe of 3-inch diameter, a storage tank with 100-liter capacity, a rotating stainless-steel drill pipe of 1.25-inch diameter, a pump to circulate drilling fluid, a valve to adjust the flow rate, a bit, and a camera to record all events, which were then converted to RGB images via the Image Processing Toolbox. After preparation of the test process, each test was performed separately, and the weights of the output particles were measured and compared. Observation charts were plotted to assess the behavior of viscosity, flow rate and RPM at inclinations of 0°, 30°, 60° and 90°. RPM was explored together with other variables such as flow rate and viscosity at different angles, and the effect of different flow rates was investigated in directional conditions. To obtain precise results, the captured images were analysed to determine bed thickening and particle behavior in the annulus. The results of this experimental study demonstrate that drill string rotation helps keep particles in suspension and reduces particle deposition, so that cuttings movement increased significantly. Raising the fluid velocity converted laminar flow to turbulent flow in the annulus.
Increasing the flow rate in the horizontal section is more effective at the lower range of viscosities and improved cuttings transport performance.
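The image-analysis step mentioned above, estimating cuttings-bed thickness from camera frames, can be sketched with a simple column-wise intensity threshold. The study used RGB frames and the (MATLAB) Image Processing Toolbox; the toy grayscale "image" and threshold below are our assumptions.

```python
# Hedged sketch: estimate cuttings-bed height per pixel column by
# thresholding intensity (dark bed below bright fluid). Illustrative only.

def bed_height(column, threshold=128):
    """Count dark (cuttings) pixels from the bottom of one pixel column."""
    h = 0
    for pixel in reversed(column):   # last element is the bottom row
        if pixel < threshold:
            h += 1
        else:
            break
    return h

# 5 columns x 6 rows: bright fluid (200) above a dark bed (50):
image_cols = [[200, 200, 200, 50, 50, 50],
              [200, 200, 50, 50, 50, 50],
              [200, 200, 200, 200, 50, 50],
              [200, 200, 200, 50, 50, 50],
              [200, 200, 50, 50, 50, 50]]

heights = [bed_height(c) for c in image_cols]
print(heights, "mean bed thickness:", sum(heights) / len(heights))
```

Averaging the per-column heights over a frame, and then over time, gives the bed-thickening curves that can be plotted against flow rate, viscosity and RPM.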

Keywords: cutting transport, directional drilling, flow rate, hole cleaning, pipe rotation

Procedia PDF Downloads 264
799 Effect of Methanol Root Extracts of Moringa Oleifera on Lipid Profile Parameters, Atherogenic Indices and HMG – CoA Reductase Activities of Poloxamer 407-Induced Hyperlipidemic Rats

Authors: Matthew Ocheleka Itodo, Ogo Agbo Ogo, Agnes Ogbene Abutu, Bawa Inalegwu

Abstract:

Hyperlipidemia, characterised by elevated serum total cholesterol, low-density and very-low-density lipoprotein cholesterol, and decreased high-density lipoprotein, is a risk factor for coronary heart disease. Traditional medicine practitioners in Nigeria claim that Moringa oleifera is used for the treatment of cardiovascular diseases, but there appears to be no scientific research, publication or documented work verifying these claims. This study aimed to determine the effect of methanol root extracts of Moringa oleifera on the lipid profile, atherogenic indices and 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase activity of poloxamer 407-induced hyperlipidemic rats. The animals were divided into 8 groups. Group 1: normal control; group 2: hyperlipidemic control. Groups 2 to 8 were induced with poloxamer 407 at 1000 mg/kg body weight. Group 3 was treated with a standard drug (atorvastatin), group 4 with the crude extract, and groups 5 to 8 with fractions purified by column chromatography. The preliminary antihyperlipidemic study showed that the methanol root extract at 200 mg/kg body weight significantly (p ≤ 0.05) decreased total cholesterol, low-density lipoprotein, triacylglycerides and HMG-CoA reductase activity, and increased high-density lipoprotein, in the hyperlipidemic treated groups. Screening the extracts for the most potent antihyperlipidemic activity revealed that fraction 1 (for total cholesterol) and fraction 3 (for triacylglycerides) gave the highest percentage reductions, of 56% and 51%, respectively. The atherogenic risk factors of all induced treated rats showed a significant (p < 0.05) decrease in Castelli's risk index II and the atherogenic index of plasma, and a significantly (p < 0.05) higher Castelli's risk index I ratio.
The study shows that the methanol root extract possesses antihyperlipidemic effects, which may explain why it has been found useful in the management of cardiovascular diseases by traditional medicine practitioners.
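The atherogenic indices named above are standard ratios computed from the lipid profile: Castelli's risk index I = TC/HDL, Castelli's risk index II = LDL/HDL, and the atherogenic index of plasma (AIP) = log10(TG/HDL). A minimal sketch follows; note that AIP is conventionally computed on molar concentrations, and the serum values below are illustrative, not study data.

```python
# Standard atherogenic indices from a lipid profile (illustrative values).
from math import log10

def castelli_I(tc, hdl):
    """Castelli's risk index I: total cholesterol / HDL."""
    return tc / hdl

def castelli_II(ldl, hdl):
    """Castelli's risk index II: LDL / HDL."""
    return ldl / hdl

def aip(tg, hdl):
    """Atherogenic index of plasma: log10(TG / HDL)."""
    return log10(tg / hdl)

tc, ldl, tg, hdl = 180.0, 100.0, 120.0, 50.0   # hypothetical serum values
print(round(castelli_I(tc, hdl), 2),
      round(castelli_II(ldl, hdl), 2),
      round(aip(tg, hdl), 2))
```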

Keywords: hyperlipidemia, moringa oleifera, poloxamer 407, lipid profile

Procedia PDF Downloads 58
798 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor

Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa

Abstract:

Piezoresistive pressure sensors were among the first microelectromechanical systems (MEMS) devices and still display significant growth, prompted by advances in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature is the main environmental condition affecting system performance, and the study of the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. This work discusses the effects of temperature and doping concentration in a boron-implanted piezoresistor for a silicon-based pressure sensor. We optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance; to reduce the temperature drift, a high doping concentration is generally needed. The Wheatstone bridge in a pressure sensor is supplied with either a constant-voltage or a constant-current input. With a constant-voltage supply, the thermal drift can be compensated with an external compensation circuit, whereas with a constant-current supply the drift can be compensated directly by the bridge itself. It is nevertheless beneficial to also compensate the temperature coefficients of the piezoresistors so as to further reduce the drift. With a current supply, the TCS depends on both TCπ and TCR. As TCπ is a negative quantity and TCR is a positive quantity, it is possible to choose an appropriate doping concentration at which they cancel each other. An exact cancellation of the TCR and TCπ values is not readily attainable; therefore, an adjustable approach is generally used in practical applications.
Thus, one goal of this work has been to better understand the origin of temperature drift in pressure sensor devices so that temperature effects can be minimized or eliminated. This paper describes the optimum doping levels at which the TCS of the pressure transducer will be zero due to the cancellation of the TCR and TCπ values. The fabrication and characterization of the pressure sensor are also carried out. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. The desired TCR and TCS values, approximately equal in magnitude, are thus achieved, and the thermal effects are considerably reduced. Finally, we calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor behavior against temperature and to minimize this effect by optimizing the doping concentration.
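The cancellation argument above can be illustrated with the commonly cited first-order relation for a constant-current supply, TCS ≈ TCR + TCπ: one scans candidate doping levels and picks the one where the positive TCR and negative TCπ most nearly cancel. The relation is a standard approximation, and the coefficient-vs-doping values below are our assumptions, not the paper's data.

```python
# Hedged sketch of doping-level selection for thermal-drift compensation.
# TCS ≈ TCR + TCpi (first-order, constant-current supply); values assumed.

def net_tcs(tcr, tcpi):
    """Net temperature coefficient of sensitivity, ppm/degC."""
    return tcr + tcpi

# Illustrative (TCR, TCpi) pairs, ppm/degC, vs boron doping (ions/cm^3):
doping = {1e19: (2900, -3600),
          5e19: (2300, -2400),
          1e20: (1700, -1500)}

best = min(doping, key=lambda d: abs(net_tcs(*doping[d])))
print(best, net_tcs(*doping[best]))
```

With these toy numbers the middle doping level leaves a residual TCS of only -100 ppm/°C, mirroring the "approximate cancellation" strategy described in the abstract.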

Keywords: piezo-resistive, pressure sensor, doping concentration, TCR, TCS

Procedia PDF Downloads 167
797 The Short-Term Stress Indicators in Home and Experimental Dogs

Authors: Madara Nikolajenko, Jevgenija Kondratjeva

Abstract:

Stress is the body's response to physical or psychological environmental stressors. The cortisol level in blood serum is the main indicator of stress, but blood collection, animal preparation and related handling can themselves cause unpleasant conditions and induce an increase in these hormones. Less invasive methods of determining stress hormone levels are therefore being sought, for example, measuring the cortisol level in saliva. The aim of the study is to determine the changes in stress hormones in the blood and saliva of home and experimental dogs under simulated short-term stress conditions. The study included clinically healthy experimental beagle dogs (n=6) and clinically healthy home American Staffordshire terriers (n=6). The animals were let into a fenced area to adapt. Loud drum sounds (in cooperation with 'Andžeja Grauda drum school') were used as the stressor. Blood serum samples were taken for sodium, potassium, glucose and cortisol determination, and saliva samples for cortisol determination only. Control parameters were taken immediately before the stressor was applied, the next samples immediately after the stress, and the last measurements two hours after the stress. Electrolyte levels in blood serum were determined using the direct ion-selective electrode method (ILab Aries analyzer), cortisol in blood serum and saliva using the electrochemical luminescence method (Roche Diagnostics), and blood glucose with a glucometer (ACCU-CHECK Active test strips). The cortisol level in the blood increased immediately after the stress in all home dogs (P < 0.05), but in only 33% (P < 0.05) of the experimental dogs. After two hours, the measurement had decreased in 83% (P < 0.05) of home dogs (in 50% returning to the control value) and in 83% (P < 0.05) of the experimental dogs. Cortisol in saliva increased immediately after the stress in 50% (P > 0.05) of home dogs and in 33% (P > 0.05) of the experimental dogs.
After two hours, the measurements had decreased in 83% (P > 0.05) of the home animals; in the experimental dogs they had decreased in only 17%, while in 49% the measurement was undetectable due to lack of material. Blood sodium, potassium and glucose measurements did not show any significant changes. The expected combination of short-term stress indicators, in which all indicators increase immediately after the stressor and decrease after two hours, was confirmed in none of the animals. The authors therefore conclude that each animal responds to a stressful situation with different physiological mechanisms and hormonal activity, and that cortisol is released into saliva and blood at different speeds and is not an objective indicator of acute stress.

Keywords: animal behavior, cortisol, short-term stress, stress indicators

Procedia PDF Downloads 251
796 An Advanced Numerical Tool for the Design of Through-Thickness Reinforced Composites for Electrical Applications

Authors: Bing Zhang, Jingyi Zhang, Mudan Chen

Abstract:

Fibre-reinforced polymer (FRP) composites have been extensively utilised in various industries due to their high specific strength, e.g., aerospace, renewable energy, automotive, and marine. However, they have much lower electrical conductivity than metals, especially in the out-of-plane direction. Conductive metal strips or meshes are typically employed to protect composites when designing lightweight structures that may be subjected to lightning strikes, such as composite wings. Unfortunately, this approach undermines the lightweight advantage of FRP composites, thereby limiting their potential applications. Extensive studies have been undertaken to improve the electrical conductivity of FRP composites. The authors are amongst the pioneers who use through-thickness reinforcement (TTR) to tailor the electrical conductivity of composites. Compared to conventional approaches using conductive fillers, the through-thickness reinforcement approach has been shown to offer a much larger improvement in the through-thickness conductivity of composites. In this study, an advanced high-fidelity numerical modelling strategy is presented to investigate the effects of through-thickness reinforcement on both the in-plane and out-of-plane electrical conductivities of FRP composites. The critical micro-structural features of through-thickness reinforced composites incorporated in the modelling framework are 1) the fibre waviness formed due to TTR insertion; 2) the resin-rich pockets formed due to resin flow in the curing process following TTR insertion; 3) the fibre crimp, i.e., fibre distortion in the thickness direction of composites caused by TTR insertion forces. In addition, each interlaminar interface is described separately. An IMA/M21 composite laminate with a quasi-isotropic stacking sequence is employed to calibrate and verify the modelling framework. 
The modelling results agree well with experimental measurements for both the in-plane and out-of-plane conductivities. It has been found that the presence of conductive TTR can increase the out-of-plane conductivity by around one order of magnitude, but there is less improvement in the in-plane conductivity, even at a TTR areal density of 0.1%. This numerical tool provides valuable references as a design tool for through-thickness reinforced composites when exploring their electrical applications. Parametric studies are undertaken using the numerical tool to investigate critical parameters that affect the electrical conductivities of composites, including TTR material, TTR areal density, stacking sequence, and interlaminar conductivity. Suggestions regarding the design of electrical through-thickness reinforced composites are derived from the numerical modelling campaign.
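A crude upper bound on the out-of-plane gain from conductive TTR can be obtained from a parallel rule-of-mixtures estimate, with the laminate and the pins treated as side-by-side conduction paths. All values below are illustrative assumptions, not the paper's model or measurements; the bound ignores contact and interfacial resistance, which is one reason measured gains (about one order of magnitude) are smaller.

```python
# Parallel rule-of-mixtures estimate of effective out-of-plane conductivity.
sigma_z_laminate = 1.0  # S/m, assumed out-of-plane CFRP conductivity
sigma_ttr = 5.0e4       # S/m, assumed conductivity of a carbon TTR pin
areal_fraction = 0.001  # 0.1% TTR areal density, as in the study

# Conductances of parallel paths add, weighted by area fraction.
sigma_eff = (1 - areal_fraction) * sigma_z_laminate + areal_fraction * sigma_ttr
print(f"effective conductivity: {sigma_eff:.1f} S/m "
      f"({sigma_eff / sigma_z_laminate:.0f}x the baseline)")
```

Even at 0.1% areal density the bound predicts a gain of well over an order of magnitude, which is why high-fidelity modelling of waviness, resin pockets, and interfaces is needed to reproduce the measured response.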

Keywords: composite structures, design, electrical conductivity, numerical modelling, through-thickness reinforcement

Procedia PDF Downloads 67
795 Value Generation of Construction and Demolition Waste Originating in Building Rehabilitation to Improve Energy Efficiency: From Waste to Resources

Authors: Mercedes Del Rio Merino, Jaime Santacruz Astorqui, Paola Villoria Saez, Carmen Viñas Arrebola

Abstract:

The lack of treatment of construction and demolition waste (CDW) is a problem that must be solved immediately. It is estimated that, worldwide, failing to reuse CDW increases the consumption of new materials by close to 20% of the total value of the materials used. The problem is even greater when these wastes are considered hazardous, because their final deposition may also generate significant contamination. Therefore, the possibility of including CDW in the manufacture of building materials represents an interesting alternative to ensure their use and to reduce their possible risk. In this context, much research has been carried out in recent years to analyze the viability of using CDW as a substitute for traditional raw materials of high environmental impact. Nevertheless, much remains to be done, because these works generally characterize materials rather than the specific applications that would give construction agents the guarantees required by projects. Therefore, the involvement of all the actors included in the life cycle of these new construction materials is necessary, as is promoting their use through, for example, the definition of standards, tax advantages, or market intervention. This paper presents the main findings reached in the "Waste to Resources (W2R)" project since it began in October 2014. The main goal of the project is to develop new materials, elements, and construction systems, manufactured from CDW, to be used in improving the energy efficiency of buildings. Other objectives of the project are: to quantify the CDW generated in energy rehabilitation works, specifically waste from the building envelope; and to study the traceability of the CDW generated and to promote its reuse and recycling in order to close the life cycle of buildings, generating zero waste and reducing the ecological footprint of the construction sector. 
This paper determines the most important aspects to consider during the design of new constructive solutions that improve the energy efficiency of buildings, and which materials made with CDW would be the most suitable for them. Also, a survey to select best practices for reducing waste to "close to zero" in refurbishment was carried out. Finally, several pilot rehabilitation works conforming to the parameters analyzed in the project were selected, in order to apply the results and thus compare theory with reality. Acknowledgements: This research was supported by the Spanish State Secretariat for Research, Development and Innovation of the Ministry of Economy and Competitiveness under the "Waste 2 Resources" project (BIA2013-43061-R).

Keywords: building waste, construction and demolition waste, recycling, resources

Procedia PDF Downloads 232
794 A Descriptive Study of Turkish Straits System on Dynamics of Environmental Factors Causing Maritime Accidents

Authors: Gizem Kodak, Alper Unal, Birsen Koldemir, Tayfun Acarer

Abstract:

The Turkish Straits System, which consists of the Istanbul Strait (Bosphorus), the Canakkale Strait (Dardanelles), and the Marmara Sea, has a strategic location in international maritime traffic as a unique waterway between the Mediterranean Sea, the Black Sea, and the Aegean Sea. Thus, this area has great importance, since it is the only waterway between the Black Sea countries and the rest of the world. The Turkish Straits System hosts more vessels every day as world trade develops, and its hazardous environmental factors expand accident risks day by day. Today, many precautions have been taken to ensure safe navigation, and international standards are followed to avoid maritime accidents. Despite this, the environmental factors that affect this area trigger maritime accidents and threaten vessels with new accident risks, with different hazards in different months. This descriptive study consists of temporal and spatial analyses of the environmental factors causing maritime accidents. It also aims at contributing to navigational safety by including the monthly and regional characteristics of the variables. In this context, two different data sets were created, consisting of environmental factors and accidents. The study examines the months and places of the accidents in the region between 2001 and 2017 together with the environmental factor variables. Environmental factor variables are categorized as dynamic and static. Dynamic factors comprise meteorological and oceanographic variables, while static factors are geological features that threaten safe navigation through geometric restrictions. The meteorological variables considered are wind direction, wind speed, wave height, and visibility. The circulation and properties of the water masses in the system are studied as oceanographic properties. 
At the end of the study, the meteorological and oceanographic parameters that are influential in the region are presented by month and by region. In this way, the monthly, seasonal, and regional distributions of the accidents were acquired. Through these analyses, the Turkish Straits System, which connects the Black Sea countries with the rest of the world and is one of the most important parts of world trade, is examined in its temporal and spatial dimensions with respect to the causes of accidents, and the environmental factor dynamics causing maritime accidents are presented.
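The monthly and regional tabulation described above amounts to counting accidents over categorical keys. A minimal sketch with invented accident records (not the study's 2001-2017 dataset) could look like this:

```python
# Tally hypothetical accident records by month, region, and dominant
# environmental factor, as in a descriptive temporal/spatial analysis.
from collections import Counter

accidents = [  # (month, region, dominant_factor) -- invented entries
    ("Jan", "Istanbul Strait", "visibility"),
    ("Jan", "Istanbul Strait", "current"),
    ("Feb", "Canakkale Strait", "wind"),
    ("Jul", "Marmara Sea", "wind"),
    ("Jan", "Canakkale Strait", "visibility"),
]

by_month = Counter(m for m, _, _ in accidents)
by_region = Counter(r for _, r, _ in accidents)
by_factor = Counter(f for _, _, f in accidents)

print("peak month:", by_month.most_common(1))
print("regions:", dict(by_region))
print("factors:", dict(by_factor))
```

The same grouping, applied to real records with wind, wave, visibility, and current attributes attached, yields the monthly and regional distributions the abstract reports.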

Keywords: descriptive study, environmental factors, maritime accidents, statistics

Procedia PDF Downloads 177
793 Tumour-Associated Tissue Eosinophilia as a Prognosticator in Oral Squamous Cell Carcinoma

Authors: Karen Boaz, C. R. Charan

Abstract:

Background: The infiltration of tumour stroma by eosinophils, Tumour-Associated Tissue Eosinophilia (TATE), is known to modulate the progression of Oral Squamous Cell Carcinoma (OSCC). Eosinophils have direct tumoricidal activity through the release of cytotoxic proteins; indirectly, they enhance permeability into tumour cells, enabling penetration of tumoricidal cytokines. Eosinophils may also promote tumour angiogenesis by producing several angiogenic factors. Identification of eosinophils in the inflammatory stroma has been proven to be an important prognosticator in cancers of the mouth, oesophagus, larynx, pharynx, breast, lung, and intestine. Therefore, the study aimed to correlate TATE with clinical and histopathological variables and blood eosinophil count, to assess the role of TATE as a prognosticator in Oral Squamous Cell Carcinoma (OSCC). Methods: Seventy-two biopsy-proven cases of OSCC formed the study cohort. Blood eosinophil counts and TNM stage were obtained from the medical records. Tissue sections (5 µm thick) were stained with Haematoxylin and Eosin. The eosinophils were quantified at the invasive tumour front (ITF) in 10 HPF (40x magnification) with an ocular grid. Bryne's grading of the ITF was also performed. A subset of thirty cases was also assessed for association of TATE with recurrence and with involvement of lymph nodes and surgical margins. Results: 1) No statistically significant correlation was found between TATE and TNM stage, blood eosinophil counts, or most parameters of Bryne's grading system. 2) An intense degree of TATE was significantly associated with the absence of distant metastasis, an increased lympho-plasmacytic response, and increased survival (disease-free and overall) of OSCC patients. 3) In the subset of 30 cases, tissue eosinophil counts were higher in cases with lymph node involvement, decreased survival, without margin involvement, and in cases that did not recur. 
Conclusion: While the role of eosinophils in mediating immune responses seems ambiguous, as eosinophils support cell-mediated tumour immunity in early stages while inhibiting it in advanced stages, TATE may be used as a surrogate marker for the determination of prognosis in oral squamous cell carcinoma.
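Associations of the kind reported above, between an intense degree of TATE and the absence of distant metastasis, are commonly tested with a chi-square test on a contingency table. The counts below are invented for demonstration and are not the study's data.

```python
# Chi-square test of association between TATE intensity and distant
# metastasis on a hypothetical 2x2 table of 72 cases.
from scipy.stats import chi2_contingency

#                no metastasis   metastasis
table = [[28, 4],    # intense TATE
         [18, 22]]   # mild/absent TATE

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

A p-value below 0.05 on such a table would support the reported link between intense TATE and the absence of distant metastasis; SciPy applies Yates' continuity correction by default for 2x2 tables.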

Keywords: tumour-associated tissue eosinophilia, oral squamous cell carcinoma, prognosticator, tumoral immunity

Procedia PDF Downloads 227
792 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical, and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI), and Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively, so FDD and SSI are not able to display time and frequency at the same time. Moreover, FDD and SSI have some difficulties in a noisy environment and in finding closely spaced modes. The CWT technique, which is currently being developed, works on the time-frequency plane and shows reasonable performance in such conditions. A further advantage of the wavelet transform over the other current techniques is that it can be applied to non-stationary signals as well. The aim of this paper is to compare the three most common modal identification techniques for finding the modal properties (natural frequency, mode shape, and damping ratio) of a three-story steel frame, built in the Concordia University lab, using ambient vibration. The frame is made of galvanized steel, with 60 cm length, 27 cm width, and 133 cm height, with no bracing along the long and short spans. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor, and a gateway received the data and sent it to a PC using Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated 5 times in each direction, with hand shaking and an impact hammer as excitation. CWT is able to detect the instantaneous frequency by means of a ridge detection method. 
In this paper, a partial-derivative ridge detection technique is applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The results extracted from all three methods were compared, demonstrating that CWT performs better in terms of accuracy in a noisy environment. The modal parameters, namely natural frequency, damping ratio, and mode shapes, were identified by all three methods.
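The FDD step can be sketched compactly: form the cross-power spectral density (CSD) matrix of the output channels at each frequency, take its singular value decomposition, and pick peaks of the first singular value as natural frequencies. The synthetic two-channel "ambient" record below, with one dominant mode at 5 Hz, is an illustration; the parameter values are not those of the Concordia frame test apart from the 512 Hz sampling rate and 20 s duration.

```python
# Minimal Frequency Domain Decomposition on synthetic two-channel data.
import numpy as np
from scipy.signal import csd

fs = 512                      # Hz, sampling rate
t = np.arange(0, 20, 1 / fs)  # 20 s record
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 5.0 * t)  # 5 Hz modal response (assumed)
x1 = 1.0 * mode + 0.1 * rng.standard_normal(t.size)
x2 = 0.6 * mode + 0.1 * rng.standard_normal(t.size)
signals = [x1, x2]

# CSD matrix G(f), then SVD at each frequency line; peaks of the first
# singular value locate the natural frequencies.
f, _ = csd(x1, x1, fs=fs, nperseg=1024)
G = np.empty((len(f), 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=1024)

s1 = np.linalg.svd(G, compute_uv=False)[:, 0]  # first singular value vs f
f_peak = f[np.argmax(s1)]
print(f"identified natural frequency: {f_peak:.2f} Hz")
```

The singular vector at the peak approximates the mode shape, which is how FDD recovers both frequency and shape from output-only data.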

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 274
791 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification

Authors: Chung-Ming Lo, Chung-Chien Lee

Abstract:

In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends 7 billion dollars per year on health issues related to shoulder pain. With respect to its origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, patients were placed in a standard sitting position and the regular routine was followed. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction; these are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-level co-occurrence matrices with four adjacency angles were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. Then, the quantitative features were combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions. 
Multinomial logistic regression is widely used for classification with more than two categories, such as the three lesion types in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen from the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case in turn was left out of the total and used to test the classifier trained on the remaining cases. Against the physician's assessment, the performance of the proposed CAD system was reported as accuracy; the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features interpreting echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. In the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, developed according to the assessment of a shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in further study to improve the prediction.
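The second-order statistics above come from a grey-level co-occurrence matrix (GLCM): a joint histogram of grey levels of adjacent pixel pairs, normalised to probabilities, from which metrics such as energy and entropy are computed. The tiny 4-level patch below is a hypothetical illustration; the actual lesions use 8-bit (0-255) images and four adjacency angles.

```python
# Pure-NumPy GLCM for horizontally adjacent pixels (angle 0, distance 1)
# and two of the texture features named above: energy and entropy.
import numpy as np

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
levels = 4

glcm = np.zeros((levels, levels))
for a, b in zip(patch[:, :-1].ravel(), patch[:, 1:].ravel()):
    glcm[a, b] += 1        # count each left/right neighbour pair
p = glcm / glcm.sum()      # normalise to joint probabilities

energy = np.sum(p ** 2)                              # uniformity of texture
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # randomness of texture
print(f"energy = {energy:.3f}, entropy = {entropy:.3f}")
```

Repeating this at four angles and averaging gives rotation-tolerant features; the mean and standard deviation over angles are the per-lesion inputs to the multinomial logistic regression classifier.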

Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis

Procedia PDF Downloads 267
790 Novel Low-cost Bubble CPAP as an Alternative Non-invasive Oxygen Therapy for Newborn Infants with Respiratory Distress Syndrome in a Tertiary Level Neonatal Intensive Care Unit in the Philippines: A Single Blind Randomized Controlled Trial

Authors: Navid P Roodaki, Rochelle Abila, Daisy Evangeline Garcia

Abstract:

Background and Objective: Respiratory Distress Syndrome (RDS) among premature infants is a major cause of neonatal death. The use of Continuous Positive Airway Pressure (CPAP) has become a standard of care for preterm newborns with RDS; hence, cost-effective innovations are needed. This study compared a novel low-cost bubble CPAP (bCPAP) device to ventilator-driven CPAP in the treatment of RDS. Methods: This is a single-blind, randomized controlled trial conducted from May 2022 to October 2022 in a Level III Neonatal Intensive Care Unit in the Philippines. Preterm newborns (<36 weeks) with RDS were randomized to receive the Vayu bCPAP device or ventilator-driven CPAP. Arterial blood gases, oxygen saturation, administration of surfactant, and CPAP failure rates were measured. Results: Seventy preterm newborns were included. No differences were observed between ventilator-driven CPAP and Vayu bCPAP in PaO2 (97.51 mmHg vs 97.37 mmHg), SO2 (97.08% vs 95.60%), or the amount of surfactant administered. There were no observed differences in CPAP failure rates between Vayu bCPAP (x̄ 3.23 days) and ventilator-driven CPAP (x̄ 2.98 days). However, a significant difference was noted in the CO2 level (40.32 mmHg vs 50.70 mmHg), which was higher among those on ventilator-driven CPAP (p = 0.004). Conclusion: This study has shown that the novel low-cost bubble CPAP (Vayu bCPAP) can be used as an efficacious alternative non-invasive oxygen therapy for preterm neonates with RDS; although CO2 levels were higher among those on ventilator-driven CPAP, the other outcome parameters measured showed that the two devices are comparable. Recommendation: A multi-center or national study to account for geographic region, which may alter the outcomes of patients connected to different ventilatory support, is recommended. A cost comparison between the devices is also suggested. 
A mixed-method study assessing the experiences of health care professionals in assembling and using the device is a second consideration.
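The between-group comparison of CO2 levels reported above is the kind of analysis a two-sample test handles. The values below are invented for illustration, not the trial's measurements; Welch's variant is used because it does not assume equal variances between arms.

```python
# Hypothetical CO2 levels (mmHg) for the two CPAP arms.
from scipy.stats import ttest_ind

co2_vayu = [39, 41, 40, 42, 38, 40, 41, 39]
co2_ventilator = [49, 52, 50, 51, 48, 53, 50, 52]

# Welch's two-sample t-test (unequal variances not assumed away).
t, p = ttest_ind(co2_vayu, co2_ventilator, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A negative t statistic with a small p-value here mirrors the direction of the reported finding: lower CO2 in the bCPAP arm than in the ventilator-driven arm.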

Keywords: bubble CPAP, ventilator-derived CPAP, infant, premature, respiratory distress syndrome

Procedia PDF Downloads 65
789 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of information in societal development and growth cannot be over-emphasized. Shaping the flow of information has remained an age-long strategy for building an egalitarian society, and the same flow has become a tool for throwing society into chaos and anarchy. Information has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information could be deployed as a weapon to wreak "Mass Destruction" or to promote "Mass Development". When used as a tool for destruction, its effect on society is like that of an atomic bomb which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat the menace, the more difficult and complex it proves to curb. In a region like Africa, with fragile democracies and the complexities of multiple religions, multiple cultures, inter-tribal relations, and ongoing issues that are yet to be resolved, it is important to pay critical attention to information disorder and find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, need to build the capacity and capability to separate the whiff of misinformation and disinformation from the grains of truthful data. It has been observed that efforts aimed at fighting information pollution have not considered the resilience of media organisations against this disorder. 
Apparently, the efforts, resources, and technologies deployed in the conception, production, and spread of information pollution are much more sophisticated than the approaches used to suppress or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of selected media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: What actions do the media take to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why not? Adopting quantitative and qualitative approaches and anchored on Dynamic Capability Theory, the study aims at digging up insights to further understand the complexities of information pollution and the media capabilities and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to gather data from journalists on their understanding of misinformation and disinformation and their capability to gate-keep. Case analysis of selected media and content analysis of their strategic resources for managing misinformation and disinformation are adopted in the study, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical in the fight against information pollution for a number of reasons. First, it is a novel attempt to document the level of media capability to fight the phenomenon of information disorder. Second, the study will give the region a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make it up. Recommendations emanating from the study could be used to initiate, intensify, or review existing approaches to combat the menace of information pollution in the region.

Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa

Procedia PDF Downloads 147