Search results for: comparing partitions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1997

587 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics

Authors: Julia Zimmerman, Gaurav Savant

Abstract:

This study evaluates the efficacy of the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) for modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications, but it has not been widely applied to the study of circulation in estuaries. This report details the development and validation of a combined 1D/2D unsteady-flow hydraulic model in HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Both models were validated by comparing simulated water surface elevations against gauge data from NOAA's Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave equations (DW), the Shallow Water equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water equations with the Eulerian Method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave results were comparable to those of the two Shallow Water equation sets while requiring less computational power. The combined 1D/2D approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. For the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² of 0.94 within the 2D mesh. The Galveston Bay DW model ran in 6 hours and 47 minutes and had an average R² of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer simulated time frame and the greater complexity of the estuarine system. The models did not accurately capture tidal effects within the 1D flow area.
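The coefficient of determination used to validate the models against gauge data can be computed directly from paired series. A minimal sketch follows; the elevation values are hypothetical, for illustration only, not the study's data:

```python
def r_squared(observed, simulated):
    """Coefficient of determination between gauge observations and model output."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical hourly water-surface elevations (m) at a single CO-OPS gauge
observed  = [0.10, 0.35, 0.52, 0.48, 0.22, -0.05, -0.30, -0.18]
simulated = [0.12, 0.30, 0.55, 0.45, 0.25, -0.02, -0.28, -0.20]
print(round(r_squared(observed, simulated), 3))
```

An R² near 1 indicates the simulated tide signal tracks the gauge closely, as reported for the Delaware Bay DW model (0.94).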

Keywords: Delaware bay, estuarine hydraulics, Galveston bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling

Procedia PDF Downloads 198
586 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection

Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young

Abstract:

At present, there are few examples of fully automated drones or of the intelligence capabilities they could support; in essence, the potential of the drone has not yet been fully realized. This paper presents feasible methods to build an intelligent drone with capabilities such as self-driving and obstacle avoidance, using advanced reinforcement learning techniques together with recent object detection algorithms that can train lightweight models quickly enough for real-time use. For the scope of this paper, after surveying and comparing the candidate algorithms, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement more advanced self-driving and object detection algorithms, as well as voice-based speech recognition for the entire drone operation, which would allow speech communication between users and the drone in unavoidable circumstances, making the drone an interactive, intelligent, voice-enabled robotic service assistant. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring of people's movements in public areas, and defense. Also discussed is drone communication over satellite broadband Internet, for faster computation and seamless, uninterrupted connectivity during disasters and remote-location operations. This paper explains the algorithms needed to achieve this goal and is intended as a reference for future researchers pursuing this path.
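DQN trains a neural network toward the Bellman target r + γ·maxₐ Q(s', a). That core update can be sketched in tabular form on a toy corridor environment; this is a minimal Q-learning illustration of the update rule, not the paper's AirSim implementation:

```python
import random
from collections import defaultdict

# Toy corridor: states 0..4; action 0 = left, 1 = right; reward 1 on reaching state 4.
def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

Q = defaultdict(float)          # tabular stand-in for the DQN network: Q[(state, action)]
alpha, gamma = 0.5, 0.9
random.seed(0)

for _ in range(2000):           # episodes under a uniform-random behaviour policy
    s, done = 0, False
    while not done:
        a = random.randint(0, 1)                        # Q-learning is off-policy
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += alpha * (target - Q[(s, a)])       # the Bellman backup DQN trains toward
        s = s2

print(round(Q[(0, 1)], 3))      # converges toward gamma**3 = 0.729
```

In DQN proper, the table is replaced by a network, transitions are replayed from a buffer, and the target uses a periodically frozen copy of the network.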

Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving

Procedia PDF Downloads 249
585 The Effect of Primary Treatment on Histopathological Patterns and Choice of Neck Dissection in Regional Failure of Nasopharyngeal Carcinoma Patients

Authors: Ralene Sim, Stefan Mueller, N. Gopalakrishna Iyer, Ngian Chye Tan, Khee Chee Soo, R. Shetty Mahalakshmi, Hiang Khoon Tan

Abstract:

Background: Regional failure in nasopharyngeal carcinoma (NPC) is managed by salvage treatment in the form of neck dissection. Radical neck dissection (RND) has been preferred over modified radical neck dissection (MRND) because it is traditionally believed to offer better long-term disease control. However, with more advanced imaging modalities such as high-resolution magnetic resonance imaging, computed tomography, and positron emission tomography-CT, earlier detection is achieved, and concurrent chemotherapy further reduces tumour burden. Hence, there may be less need for RND and a greater role for MRND. The primary aim of this retrospective study is to ascertain whether MRND achieves outcomes similar to RND, and hence whether there are grounds to offer a less aggressive procedure with lower patient morbidity. Methods: This is a retrospective study of 66 NPC patients treated at Singapore General Hospital between 1994 and 2016 for histologically proven regional recurrence, of whom 41 underwent RND and 25 underwent MRND, based on surgeon preference. The type of neck dissection performed, primary treatment mode, adjuvant treatment, and pattern of recurrence were reviewed. Overall survival (OS) was calculated using the Kaplan-Meier estimate and compared between groups. Results: Disease parameters such as nodal involvement and extranodal extension were comparable between the two groups. The median (IQR) OS was 1.76 (0.58 to 3.49) for MRND and 2.41 (0.78 to 4.11) for RND; the difference was not statistically significant (p = 0.5301). Conclusion: RND is more aggressive and has been associated with greater morbidity. Given the similar outcomes, MRND could be an alternative salvage procedure for regional failure in selected NPC patients, allowing similar salvage rates with lower mortality and morbidity.
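Overall survival in both arms was compared with the Kaplan-Meier estimate. The product-limit computation behind that estimate can be sketched in a few lines; the follow-up times below are hypothetical, for illustration only:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events[i] = 1 for observed death, 0 for censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, idx = 1.0, [], 0
    while idx < len(data):
        t = data[idx][0]
        tied = [e for tt, e in data if tt == t]      # all subjects leaving at time t
        deaths = sum(tied)
        if deaths:
            surv *= 1 - deaths / n_at_risk           # product-limit step
            curve.append((t, surv))
        n_at_risk -= len(tied)                       # deaths and censorings leave the risk set
        idx += len(tied)
    return curve

# Hypothetical follow-up times (years) after salvage neck dissection; 1 = died, 0 = censored
times  = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]
events = [1,   1,   0,   1,   0,   1,   0,   0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Censored patients drop out of the risk set without forcing a step down in the curve, which is what makes the estimate usable with incomplete follow-up.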

Keywords: nasopharyngeal carcinoma, neck dissection, modified neck dissection, radical neck dissection

Procedia PDF Downloads 169
584 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained; drag force calculation methods for high-speed supercavitating torpedoes are then investigated with numerical techniques and verified against empirical studies. To reach speeds as high as 200-300 knots underwater, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of speed, must be reduced. Conventional heavyweight torpedoes can reach roughly 50 knots using classic underwater hydrodynamic techniques; to exceed 50 knots and approach 200 knots, hydrodynamic viscous forces must be reduced or eliminated entirely. This requirement motivates applying the supercavitation phenomenon to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move through the water at very high speed inside a fully developed cavitation bubble. Torpedoes that travel in a cavitation envelope, generated by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, are termed supercavitating torpedoes. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Drag force calculations are performed for the disk-shaped cavitator with computational fluid dynamics and with several empirical methods, and the numerical method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U
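The quantities named in the abstract relate through standard textbook expressions: the cavitation number σ = (p∞ − p_c)/(½ρv²) and the drag D = ½ρv²·C_D·A. The sketch below uses the common empirical fit C_D ≈ C_D0(1 + σ) with C_D0 ≈ 0.82 for a disk cavitator; these constants and the operating point (depth, cavitator diameter) are illustrative assumptions, not the paper's calibrated model:

```python
import math

RHO = 1025.0                    # seawater density, kg/m^3 (assumed)

def cavitation_number(p_inf, p_cavity, v):
    """sigma = (p_inf - p_c) / (0.5 * rho * v^2); lower sigma -> more developed cavity."""
    return (p_inf - p_cavity) / (0.5 * RHO * v ** 2)

def disk_cavitator_drag(v, diameter, sigma, cd0=0.82):
    """Drag on a disk cavitator: D = 0.5*rho*v^2 * Cd * A, with Cd ~ Cd0*(1 + sigma)."""
    area = math.pi * (diameter / 2.0) ** 2
    return 0.5 * RHO * v ** 2 * cd0 * (1.0 + sigma) * area

v = 200 * 0.5144                                    # 200 knots in m/s
p_inf = 101_325.0 + RHO * 9.81 * 10.0               # ambient pressure at ~10 m depth
sigma = cavitation_number(p_inf, p_cavity=2_340.0, v=v)   # cavity at water vapour pressure
print(round(sigma, 4), round(disk_cavitator_drag(v, diameter=0.05, sigma=sigma), 1))
```

At 200 knots the cavitation number is very small (a few hundredths), which is the regime in which a supercavity fully envelops the hull.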

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 187
583 Preliminary Study of the Cost-Effectiveness of Green Walls: Analyzing Cases from the Perspective of Life Cycle

Authors: Jyun-Huei Huang, Ting-I Lee

Abstract:

The urban heat island effect derives from the reduction of vegetative cover by urban development. Because plants can improve air quality and the microclimate, green walls have been applied as a sustainable design approach to cool buildings. By greening vertical surfaces with plants, they decrease room temperature and, as a result, the energy used for air conditioning. Based on their structure, green walls fall into two categories: green façades and living walls. A green façade uses the climbing ability of the plant itself, while a living wall assembles planter modules. The latter is widely adopted in public space, as it is time-effective and less constrained. Although a living wall saves cooling energy, it is not necessarily cost-effective from a life cycle perspective: an Italian study found that the overall benefits of a living wall exceed its costs only 47 years after establishment. In Taiwan, urban greening policies encourage green walls by citing their energy-saving benefits while neglecting their poor cost-effectiveness. This research therefore aims to understand how suppliers and consumers perceive the cost-effectiveness of living wall products from a life cycle viewpoint. It adopts semi-structured interviews and field observations of product maintenance. By comparing the two sets of results, it generates insights for sustainable urban greening policies. The preliminary finding is that stakeholders lack a holistic sense of life cycle cost-effectiveness. Most importantly, a well-maintained living wall often entails high inputs, sustained by the availability of a maintenance budget, and is thus less sustainable. In conclusion, without a comprehensive sense of cost-effectiveness across a product's life cycle, it is very difficult for suppliers and consumers to maintain a living wall system while achieving sustainability.
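The 47-year figure cited above is a simple payback calculation: the year in which cumulative energy savings first cover the capital cost plus cumulative maintenance. A sketch with entirely hypothetical figures (chosen only to land near the reported horizon):

```python
def breakeven_year(capital, annual_saving, annual_maintenance, horizon=60):
    """First year in which cumulative net savings cover the up-front capital cost."""
    cumulative = -capital
    for year in range(1, horizon + 1):
        cumulative += annual_saving - annual_maintenance
        if cumulative >= 0:
            return year
    return None   # never pays back within the horizon

# Hypothetical figures: high up-front and recurring costs typical of living walls
print(breakeven_year(capital=39_900, annual_saving=1_250, annual_maintenance=400))  # prints 47
```

A fuller life cycle analysis would also discount future cash flows and include replacement of plants and irrigation components, which pushes the break-even point even later.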

Keywords: case study, maintenance, post-occupancy evaluation, vertical greening

Procedia PDF Downloads 265
582 Effect of Good Agriculture Management Practices and Constraints on Grape Farming: A Case Study in Mirbachakot, Kalakan and Shakardara Districts Kabul, Afghanistan

Authors: Mohammad Mirwais Yusufi

Abstract:

Skillful management is one of the most important success factors for today's farms: a well-managed farm can generate funds for its own sustainability. The grape is one of the most widely grown fruits in the world and one of Afghanistan's most important cash crops, with high production potential. Although several organizations are intervening to improve this cash crop, quality and quantity remain unsatisfactory for producers and external markets, and the situation has not changed over the years. Therefore, a questionnaire-based survey of 60 grape growers was conducted in 2017 in the Mirbachakot, Kalakan, and Shakardara districts of Kabul province. The purpose was to understand the current socio-demographic characteristics of farmers, their management methods and constraints, farm size, yield, and the contribution of grape farming to household income. Findings indicate that grape farming was predominantly male (83.3% male, 16.6% female) and that small-scale farmers were the main producers, with 60% cultivating less than 1 ha of grapes. Likewise, 50% had more than 10 years of experience in grape farming and 33.3% between 1 and 5 years. The high level of illiteracy and of disease had a significant effect on the growth, yield, and quality of grapes. The results showed that vineyard management operations to protect grapes from mechanical damage are very poor or completely absent. Compared with developed countries, where the table grape is among the fruits with the highest technology input, the cost of labor in developing countries is low but the purchase of equipment is very expensive given the financial situation. Hence, the low quality and quantity of grapes are influenced by poor management methods, such as the non-availability of experts and the lack of technical guidance in the study area. The study therefore suggests that improved agricultural extension services and managerial skills could help address these problems.

Keywords: constraints, effect, management, Kabul

Procedia PDF Downloads 109
581 Influence of Chelators, Zn Sulphate and Silicic Acid on Productivity and Meat Quality of Fattening Pigs

Authors: A. Raceviciute-Stupeliene, V. Sasyte, V. Viliene, V. Slausgalvis, J. Al-Saifi, R. Gruzauskas

Abstract:

The objective of this study was to investigate the influence of special additives, namely chelators, zinc sulphate and silicic acid, on the productivity parameters, carcass characteristics and meat quality of fattening pigs. The trial started with 40-day-old fattening pigs (mongrel (mother) and Yorkshire (father)) and lasted to 156 days of age. During the fattening period, 32 pigs were divided into two groups (control and experimental) with 4 replicates each (8 pens in total). For 16 weeks the pigs were fed ad libitum a standard wheat-barley-soybean meal compound (control group) supplemented with chelators, zinc sulphate and silicic acid at 2 kg/t of feed (experimental group). Meat traits in live pigs were measured with the Piglog 105 ultrasonic equipment. The results obtained throughout the experimental period suggest that supplementation with chelators, zinc sulphate and silicic acid tended to improve the average daily gain and feed conversion ratio of fattening pigs (p < 0.05). Evaluation with the Piglog 105 showed that fat thickness at the first and second measurement points was 4% and 3% higher, respectively, than in the control group (p < 0.05). Carcass weight, yield, length, and fat thickness showed no significant differences among the groups. The water-holding capacity of meat in the experimental group was 5.28% lower, and tenderness 12% lower, than in control pigs (p < 0.05). No statistically significant difference in the chemical composition of the meat was found between the experimental and control groups. Cholesterol concentration in the muscles of pigs fed the supplemented diet was 7.93 mg/100 g of muscle lower than in the control group. These results suggest that supplementing the feed of fattening pigs with chelators, zinc sulphate and silicic acid had a significant effect on growing performance and meat quality.

Keywords: silicic acid, chelators, meat quality, pigs, zinc sulphate

Procedia PDF Downloads 179
580 Voluntary Work Monetary Value and Cost-Benefit Analysis with 'Value Audit and Voluntary Investment' Technique: Case Study of Yazd Red Crescent Society Youth Members Voluntary Work in Health and Safety Plan for New Year's Passengers

Authors: Hamed Seddighi Khavidak

Abstract:

Voluntary work has many economic and social benefits for a country, but its economic value is often ignored because the work is unpaid. The aim of this study is to review methods for estimating the monetary value of voluntary work, comparing the opportunity cost and replacement cost methods in both theory and practice. Besides monetary valuation, we present a cost-benefit analysis of the New Year health and safety plan carried out by young volunteers of the Red Crescent Society of Iran. Method: We discuss eight methods for the monetary valuation of voluntary work: the Alternative-Employment Wage Approach, Leisure-Adjusted OCA, Volunteer Judgment OCA, Replacement Wage Approach, Volunteer Judgment RWA, Supervisor Judgment RWA, Cost of Counterpart Goods and Services, and Beneficiary Judgment. For the cost-benefit analysis, we draw on the 'Volunteer Investment and Value Audit' (VIVA) technique, which is widely used in voluntary organizations such as the International Federation of Red Cross and Red Crescent Societies. Findings: Using the replacement cost approach, the voluntary work of 1,034 young volunteers was valued at 938,000,000 Riyals; using the Replacement Wage Approach it was valued at 2,268,713,232 Riyals. The Yazd Red Crescent Society spent 212,800,000 Riyals on food and other costs for these volunteers. Discussion and conclusion: The VIVA rate showed that for every Riyal the Red Crescent Society invested in the health and safety of New Year's travelers through its volunteer project, four Riyals were returned; using the wage replacement approach, 11 Riyals were returned. The New Year's travelers' health and safety project was therefore successful and economically worthwhile for the Red Crescent Society, since the output far exceeded the input costs.
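The VIVA rate is simply the estimated value of the volunteer work divided by what the organization spent to support it. Plugging in the abstract's own figures reproduces the reported ratios:

```python
def viva_ratio(volunteer_value, organisational_cost):
    """Volunteer Investment and Value Audit: value generated per unit of money invested."""
    return volunteer_value / organisational_cost

cost = 212_800_000                                 # Riyals spent by the Red Crescent Society
print(round(viva_ratio(938_000_000, cost), 1))     # replacement-cost valuation: ~4.4
print(round(viva_ratio(2_268_713_232, cost), 1))   # replacement-wage valuation: ~10.7
```

These round to the "four Riyals" and "11 Riyals" returned per Riyal invested that the study reports.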

Keywords: voluntary work, monetary value, youth, red crescent society

Procedia PDF Downloads 215
579 A Case Study on Theme-Based Approach in Health Technology Engineering Education: Customer Oriented Software Applications

Authors: Mikael Soini, Kari Björn

Abstract:

Metropolia University of Applied Sciences (MUAS) Information and Communication Technology (ICT) Degree Programme provides full-time Bachelor-level undergraduate studies. The ICT Degree Programme has seven major options; this paper focuses on Health Technology. In Health Technology, a significant curriculum change in 2014 enabled the transition from a fragmented curriculum of dozens of courses to a new integrated curriculum built around three 30 ECTS themes. This paper focuses especially on the second theme, Customer Oriented Software Applications. From the students' point of view, the goal of this theme is to become familiar with existing health-related ICT solutions and systems, understand the business around health technology, recognize social and healthcare operating principles and services, and identify customers and users and their special needs and perspectives. This also serves as background for health-related web application development: the web application built is tested, developed, and evaluated with real users using versatile user-centred development methods. This paper presents experiences from the first implementation of the Customer Oriented Software Applications theme. Student feedback was gathered with two questionnaires, one in the middle of the theme and the other at its end, each with qualitative and quantitative parts. A similar questionnaire was used in the first theme; this paper evaluates how the theme-based integrated curriculum has progressed in the Health Technology major by comparing the results of themes 1 and 2. In general, students were satisfied with the implementation, the timing and synchronization of the courses, and the amount of work, though there is still room for development. Student feedback and teachers' observations have been, and will continue to be, used to develop the content and operating principles of the themes and the whole curriculum.

Keywords: engineering education, integrated curriculum, learning and teaching methods, learning experience

Procedia PDF Downloads 320
578 Some Characteristics Based on Literature, for an Ideal Disinfectant

Authors: Saimir Heta, Ilma Robo, Rialda Xhizdari, Kers Kapaj

Abstract:

The stability of an ideal disinfectant should remain constant regardless of changes in the atmospheric conditions of the environment where it is kept. If conditions such as temperature or humidity change, corresponding changes may be needed in the storage materials, such as plastic or glass bottles, to protect the disinfectant from, for example, excessive ambient light, which in effect raises the temperature of the disinfectant as a fluid. Material and Methods: This study attempted to find the most recently published data on the best possible combination of disinfectants indicated for use after dental procedures. This was done by comparing the basic literature studied by dental students with the most recent data published on the topic. Each disinfectant is represented by a number, its disinfectant constant, which various factors can raise or lower and whose value remains a statistic specific to that disinfectant. Results: Changes in the atmospheric conditions under which a disinfectant is deposited and stored are known to affect its stability as a fluid; this fact is known and even cited in the leaflets accompanying manufactured boxes of disinfectants. This care, given in the form of advice, concerns not only the preservation of the disinfectant but also its application, so that the desired clinical result is achieved. Aldehydes have the highest constant among the types of disinfectants, followed by acids. The lowest constant belongs to the class of glycols, preceded by the halogens, a class with several representatives used for disinfection. The classes of phenols and acids have almost the same ranges of constants. Conclusions: If the goal were to find the ideal disinfectant among the large variety of disinfectants produced, a good starting point would be a fixed, unchanging element on the basis of which the properties of different disinfectants can be compared. The results of this study highlight precisely the role of the constant specific to each disinfectant. Finding an ideal disinfectant, like finding an ideal medication or antibiotic, is an ongoing but ultimately unattainable goal.

Keywords: different disinfectants, ideal, specific constant, dental procedures

Procedia PDF Downloads 73
577 Clinical Comparative Study Comparing Efficacy of Intrathecal Fentanyl and Magnesium as an Adjuvant to Hyperbaric Bupivacaine in Mild Pre-Eclamptic Patients Undergoing Caesarean Section

Authors: Sanchita B. Sarma, M. P. Nath

Abstract:

Adequate analgesia following caesarean section decreases morbidity, hastens ambulation, improves patient outcomes, and facilitates care of the newborn. Intrathecal magnesium, an NMDA antagonist, has been shown to prolong analgesia without significant side effects in healthy parturients. The aim of this study was to evaluate the onset and duration of sensory and motor block, the haemodynamic effects, postoperative analgesia, and adverse effects of magnesium or fentanyl given intrathecally with hyperbaric 0.5% bupivacaine in patients with mild preeclampsia undergoing caesarean section. Sixty women with mild preeclampsia undergoing elective caesarean section were included in a prospective, double-blind, controlled trial. Patients were randomly assigned to receive spinal anaesthesia with 2 mL of 0.5% hyperbaric bupivacaine plus either 12.5 µg fentanyl (group F) or 0.1 mL of 50% magnesium sulphate (50 mg) with 0.15 mL of preservative-free distilled water (group M). Onset, duration, and recovery of sensory and motor block, time to maximum sensory block, duration of spinal anaesthesia, and postoperative analgesic requirements were studied. Statistical comparisons used the Chi-square or Fisher's exact test and the independent Student's t-test, as appropriate. The onset of both sensory and motor block was slower in the magnesium group. The durations of spinal anaesthesia (246 vs. 284) and motor block (186.3 vs. 210) were significantly longer in the magnesium group, and the total analgesic top-up requirement was lower in group M. Haemodynamic parameters were similar in both groups, and intrathecal magnesium caused minimal side effects. Since fentanyl and other opioid congeners are not easily available throughout the country, magnesium, with its ready availability and milder side-effect profile, can be a cost-effective alternative to fentanyl when given intrathecally with bupivacaine for caesarean section in patients with pregnancy-induced hypertension (PIH).

Keywords: analgesia, magnesium, pre eclampsia, spinal anaesthesia

Procedia PDF Downloads 320
576 Nano-Plasmonic Diagnostic Sensor Using Ultraflat Single-Crystalline Au Nanoplate and Cysteine-Tagged Protein G

Authors: Hwang Ahreum, Kang Taejoon, Kim Bongsoo

Abstract:

Nanosensors for the highly sensitive detection of disease have been widely studied to improve quality of life. Here, we present a robust nano-plasmonic diagnostic sensor using cysteine-tagged protein G (Cys3-protein G) and ultraflat, ultraclean, single-crystalline Au nanoplates. Protein G formed on an ultraflat Au surface provides an ideal background for dense and uniform immobilization of antibodies. Au is highly stable in diverse biochemical environments and readily immobilizes antibodies through Au-S bonding, and has therefore been widely used in biosensing applications. In particular, atomically smooth single-crystalline Au nanomaterials synthesized by the chemical vapor transport (CVT) method are well suited to fabricating reproducible, sensitive sensors. Because C-reactive protein (CRP) is a nonspecific biomarker of inflammation and infection, it can serve as a predictive or prognostic marker for various cardiovascular diseases. Cys3-protein G immobilized uniformly on the Au nanoplate orients the CRP antibody (anti-CRP) correctly, maximizing its binding capacity for CRP detection. Immobilization conditions for Cys3-protein G and anti-CRP on the Au nanoplate were optimized visually by AFM analysis. The Au nanoparticle-on-Au nanoplate (NP-on-Au nanoplate) assembly fabricated in a sandwich immunoassay for CRP greatly suppresses the zero signal caused by nonspecific binding, providing a distinct surface-enhanced Raman scattering (SERS) enhancement even at a CRP concentration of 10⁻¹⁸ M. Moreover, the NP-on-Au nanoplate sensor shows excellent selectivity against non-target proteins at high concentration. In addition, control experiments employing an Au film fabricated by e-beam-assisted deposition and a linker molecule clearly validate the contribution of the Au nanoplate to the attomolar detection of CRP. We expect that this platform, combining single-crystalline Au nanoplates and Cys3-protein G, can be applied to the detection of many other cancer biomarkers.

Keywords: Au nanoplate, biomarker, diagnostic sensor, protein G, SERS

Procedia PDF Downloads 257
575 Facilitating Factors for the Success of Mobile Service Providers in Bangkok Metropolitan

Authors: Yananda Siraphatthada

Abstract:

The objectives of this research were to study the levels of the influencing factors (leadership, supply chain management, innovation, and competitive advantage), business success, and the factors affecting the business success of mobile phone system service providers in Bangkok Metropolitan. The research combined quantitative and qualitative approaches. In the quantitative part, questionnaires were used to collect data from 331 mobile service shop managers franchised by AIS, Dtac, and TrueMove; the managers were randomly stratified and allocated proportionally to the number of providers in each network. In the qualitative part, in-depth interviews were conducted with six mobile service providers/managers of Telewiz, Dtac, and TrueMove shops, and agreement or disagreement was assessed with the content analysis method. Descriptive statistics (frequency, percentage, mean, and standard deviation) were employed, and the Structural Equation Model (SEM) was used as the tool for data analysis. The content analysis method identified key patterns emerging from the interview responses, and the two data sets were compared and contrasted, providing triangulation to enrich the interpretation of the results. The study revealed that the influencing factors (leadership, innovation management, supply chain management, and business competitiveness) had an impact at a great level, while innovation and the financial and non-financial business success of the mobile phone system service providers in Bangkok Metropolitan were at the highest level. Moreover, the factors influencing competitive advantage in the business of mobile system service providers (leadership, supply chain management, innovation management, business advantages, and business success) were statistically significant at .01, which corresponded to the data from the interviews.

Keywords: mobile service providers, facilitating factors, Bangkok Metropolitan, business success

Procedia PDF Downloads 346
574 Recurrence of Pterygium after Surgery and the Effect of Surgical Technique on the Recurrence of Pterygium in Patients with Pterygium

Authors: Luksanaporn Krungkraipetch

Abstract:

A pterygium is an eye surface lesion that begins in the limbal conjunctiva and progresses to the cornea. The lesion is more common in the nasal limbus than in the temporal, and it has a distinctive wing-like aspect. Indications for surgery, in decreasing order of significance, are growth over the corneal center, decreased vision due to corneal deformation, documented growth, sensations of discomfort, and aesthetic concerns. Recurrent pterygium results in lost time, the expense of therapy, and the potential for vision impairment. The objective of this study is to determine how often pterygium recurs after surgery, what effect the surgical technique has, and what causes recurrence in patients with pterygium. Materials and Methods: A retrospective observational case-control study was conducted on 164 patient samples. Descriptive statistics were used to summarize basic data on pterygium surgery and the risk of recurrent pterygium. For factor analysis, the inferential statistics odds ratio (OR) with 95% confidence interval (CI) and ANOVA were used. A p-value of 0.05 was deemed statistically significant. Results: The majority of patients were female (60.4%). Twenty-four of the 164 (14.6%) patients who underwent surgery exhibited recurrent pterygium. The average age was 55.33 years. Postoperative recurrence was reported in 19 cases (79.2%) of the bare sclera technique and five cases (20.8%) of the conjunctival autograft technique. The mean recurrence interval was 10.25 months, with the most common interval (54.17%) being 12 months. In 91.67% of cases, all follow-ups were successful. The most common recurrence grade was 1 (25%). The main surgical complication was subconjunctival hemorrhage (33.33%). Comparing the surgeries performed on patients with recurrent pterygium showed no significant difference (F = 1.13, p = 0.339). Age significantly affected the recurrence of pterygium (OR = 20.78; 95% CI, 6.79-63.56; p < 0.001). Conclusion: This study found a 14.6% rate of pterygium recurrence after pterygium surgery. Across all surgeries and patients, the recurrence rate was four times higher with the bare sclera method than with conjunctival autograft. The researchers advise selecting a more conventional surgical technique to avoid recurrence.
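
An odds ratio with a 95% confidence interval, as reported for age above, is computed from a 2x2 contingency table via the standard log-OR formula. A minimal sketch in Python; the cell counts below are hypothetical for illustration only, since the abstract does not report the underlying table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)   # lower 95% bound
    hi = math.exp(math.log(or_) + z * se)   # upper 95% bound
    return or_, lo, hi

# hypothetical counts, not the study's data
or_, lo, hi = odds_ratio_ci(18, 30, 6, 110)
```

A CI whose lower bound exceeds 1 corresponds to a significant association at the 5% level.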

Keywords: pterygium, recurrence pterygium, pterygium surgery, excision pterygium

Procedia PDF Downloads 87
573 Osteoprotegerin and Osteoprotegerin/TRAIL Ratio are Associated with Cardiovascular Dysfunction and Mortality among Patients with Renal Failure

Authors: Marek Kuźniewski, Magdalena B. Kaziuk, Danuta Fedak, Paulina Dumnicka, Ewa Stępień, Beata Kuśnierz-Cabala, Władysław Sułowicz

Abstract:

Background: The high prevalence of cardiovascular morbidity and mortality among patients with chronic kidney disease (CKD) is observed especially in those undergoing dialysis. Osteoprotegerin (OPG) and its ligands, receptor activator of nuclear factor kappa-B ligand (RANKL) and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), have been associated with cardiovascular complications. Our aim was to study their role as cardiovascular risk factors in stage 5 CKD patients. Methods: OPG, RANKL and TRAIL concentrations were measured in 69 hemodialyzed CKD patients and 35 healthy volunteers. In CKD patients, cardiovascular dysfunction was assessed with aortic pulse wave velocity (AoPWV), carotid artery intima-media thickness (CCA-IMT), coronary artery calcium score (CaSc) and N-terminal pro-B-type natriuretic peptide (NT-proBNP) serum concentration. Cardiovascular and overall mortality data were collected during a 7-year follow-up. Results: OPG plasma concentrations were higher in CKD patients compared to controls. Total soluble RANKL was lower and the OPG/RANKL ratio higher in patients. Soluble TRAIL concentrations did not differ between the groups, and the OPG/TRAIL ratio was higher in CKD patients. OPG and OPG/TRAIL positively predicted long-term mortality (all-cause and cardiovascular) in CKD patients. OPG positively correlated with AoPWV, CCA-IMT and NT-proBNP, whereas OPG/TRAIL correlated with AoPWV and NT-proBNP. The described relationships were independent of classical and non-classical cardiovascular risk factors, with the exception of age. Conclusions: Our study confirmed the role of OPG as a biomarker of cardiovascular dysfunction and a predictor of mortality in stage 5 CKD. The OPG/TRAIL ratio can be proposed as a predictor of cardiovascular dysfunction and mortality.

Keywords: osteoprotegerin, tumor necrosis factor-related apoptosis-inducing ligand, receptor activator of nuclear factor kappa-B ligand, hemodialysis, chronic kidney disease, cardiovascular disease

Procedia PDF Downloads 334
572 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of the magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier’s data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink fitting and winding. Moreover, in production, considerable attention is given to measurements of the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization, on the magnetic properties of different grades of non-oriented steel. Measured power loss for tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each applied measurement condition were analyzed. This research, together with the applied Brockhaus measurement methodologies, underscored the need for advanced characterization of magnetic materials used in electric drives and automotive applications.

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 139
571 Tobacco Taxation and the Heterogeneity of Smokers' Responses to Price Increases

Authors: Simone Tedeschi, Francesco Crespi, Paolo Liberati, Massimo Paradiso, Antonio Sciala

Abstract:

This paper aims to contribute to the understanding of smokers’ responses to cigarette price increases, with a focus on heterogeneity both across individuals and across price levels. To do this, a stated-preference quasi-experimental design grounded in a random utility framework is proposed to evaluate the effect on smokers’ utility of the price level and variation, along with social conditioning and health impact perception. The analysis is based on individual-level data drawn from a unique survey gathering very detailed information on Italian smokers’ habits. In particular, qualitative information on the individual reactions triggered by changes in prices of different magnitude and composition is exploited. The main findings stemming from the analysis are the following: the average price elasticity of cigarette consumption is comparable with previous estimates for advanced economies (-0.32). However, the decomposition of this result across five latent classes of smokers reveals extreme heterogeneity in price responsiveness, implying a potential price elasticity that ranges from 0.05 to almost 1. Such heterogeneity is in part explained by observable characteristics such as age, income, gender, and education, as well as (current and lagged) smoking intensity. Moreover, price responsiveness is far from independent of the size of the prospected price increase. Finally, by comparing even and uneven price variations, it is shown that uniform across-brand price increases are able to limit the scope for product substitution and downgrading. The estimated price-response heterogeneity has significant implications for tax policy. First, it provides evidence and a rationale for why the aggregate price elasticity is likely to follow a strictly increasing pattern as a function of the experienced price variation. This information is crucial for forecasting the effect of a given tax-driven price change on tax revenue. Second, it provides some guidance on how to design excise tax reforms that balance public health and revenue goals.
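
A point price elasticity of the kind reported above is simply the percentage change in consumption divided by the percentage change in price. A minimal sketch, with hypothetical quantities and prices chosen only to reproduce the paper's average value of -0.32:

```python
def point_elasticity(q0, q1, p0, p1):
    """Percentage change in quantity demanded per
    percentage change in price."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# hypothetical: a 10% price rise cuts consumption by 3.2%
e = point_elasticity(q0=100.0, q1=96.8, p0=5.0, p1=5.5)
```

The latent-class result then amounts to this quantity differing sharply across smoker subgroups rather than being a single population constant.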

Keywords: smoking behaviour, preference heterogeneity, price responsiveness, cigarette taxation, random utility models

Procedia PDF Downloads 162
570 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry

Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim

Abstract:

Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit Intended Nationally Determined Contributions (INDC) by 2020, and each country will be assessed for its performance in reducing GHG. After that, each country shall propose a reduction target higher than its previous target every five years. Therefore, an accurate method for calculating greenhouse gas emissions is essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6 and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) of non-CO2 gases is much higher than that of CO2, which means they have a greater effect on global warming per unit mass. Accordingly, GHG calculation methods for the electronics industry are provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they are being discussed at ISO/TC 146 meetings. As discussed earlier, precision and accuracy in calculating non-CO2 GHG emissions are becoming more important. Thus, this study aims to discuss the implications of the calculation methods by comparing those of the IPCC and the EPA. In conclusion, after analyzing the methods of the IPCC and EPA, the EPA method is more detailed and also provides a calculation for N2O. In the case of the default emission factors, the IPCC provides more conservative results than the EPA; the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S. and therefore reflects U.S. conditions. The semiconductor factory ‘A’ measured F-gas emissions according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE than the default factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factors (where possible) when reporting Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
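
The role of GWP weighting and of the DRE can be illustrated with a simplified CO2-equivalent calculation. The sketch below is not the IPCC or EPA Tier method itself, only its general shape: gas fed to the process, reduced by the fraction consumed in the chamber and the fraction destroyed by abatement, then weighted by GWP. The process data are hypothetical, and the 100-year GWP values shown are AR4-era figures; an actual inventory must use the values from the applicable IPCC assessment report.

```python
# illustrative 100-year GWP values (IPCC AR4 figures)
GWP = {"CF4": 7390, "NF3": 17200, "SF6": 22800, "N2O": 298}

def co2e_kg(gas, used_kg, utilized_frac, dre):
    """Simplified estimate: gas fed to the process, minus the fraction
    consumed in the chamber, minus the fraction destroyed by abatement
    (DRE), converted to kg CO2-equivalent via the gas's GWP."""
    emitted = used_kg * (1 - utilized_frac) * (1 - dre)
    return emitted * GWP[gas]

# hypothetical per-gas process data: (gas, kg used, utilization, DRE)
total = sum(co2e_kg(g, m, u, d) for g, m, u, d in [
    ("CF4", 120.0, 0.30, 0.90),
    ("SF6", 40.0, 0.20, 0.95),
])
```

Because the GWP factors are in the thousands, small differences in the assumed DRE (the quantity factory ‘A’ measured for itself) shift the CO2-equivalent total substantially.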

Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method

Procedia PDF Downloads 286
569 Designing Creative Events with Deconstructivism Approach

Authors: Maryam Memarian, Mahmood Naghizadeh

Abstract:

Deconstruction is an approach entirely incompatible with the traditional prevalent architecture. This approach attempts to set architecture in sharp contrast with its conventional opposites: it attends to the neglected and missing aspects of architecture and deconstructs its stable structures. It also proceeds boldly beyond existing frameworks and intends to create a different and more efficient prospect for space. The aim of deconstruction architecture is to satisfy both prospective and retrospective visions, taking into account all tastes of the present in order to transcend time. Likewise, it ventures to fragment the facts and symbols of the past and extract from within them new concepts that coincide with today’s circumstances. Since this approach is an attempt to surpass the limits of the prevalent architecture, it can be employed to design places in which creative events occur and imagination and ambition flourish. Thought-provoking artistic events can grow and mature in such places and be represented in the best way possible to all people. The concept of event proposed in the plan grows out of the interaction between space and creation. In addition to triggering surprise and strong impressions, it is also a bold journey into the suspended realms of the traditional conflicts in architecture, such as architecture-landscape, interior-exterior, center-margin, product-process, and stability-instability. In this project, recognition and organization first take place through the interpretive-historical research method, by examining the inputs and collecting data. After evaluation using deductive reasoning, the obtained data are interpreted. Given that the research topic is in its infancy, that there is no similar case in Iran, and that there are only a limited number of corresponding instances across the world, the selected topic helps to shed light on unrevealed and neglected aspects of architecture. Similarly, criticizing, investigating and comparing specific and highly prized cases in other countries with the project under study can serve as an introduction to this architectural style.

Keywords: anti-architecture, creativity, deconstruction, event

Procedia PDF Downloads 320
568 A Critical Discourse Analysis of Citizenship Education Textbook for Primary School Students in Singapore

Authors: Ren Boyuan

Abstract:

This study focuses on how the Character and Citizenship Education (CCE) textbook in Singapore primary schools delivers preferred and desired qualities to students and thereby reveals how discourse in textbooks can facilitate and perpetuate certain social practices. In this way, the study also serves to encourage critical thinking among textbook writers and school educators by unveiling the nuanced messages, conveyed through language use, that facilitate the perpetuation of social practices in a society. In Singapore, Character and Citizenship Education is a compulsory subject for primary school students. Under the framework of 21st Century Competencies, Character and Citizenship Education in Singapore aims to help students thrive in this fast-changing world. The Singapore government is involved in the development of the CCE curriculum in schools from primary school to pre-university. Inevitably, the CCE curriculum is not free from ideological influences. This qualitative study utilizes Fairclough’s three-dimensional theory and his framework of three assumptions to analyze the Character and Citizenship Education textbook for Primary 1 and to reveal the ideologies in this textbook. The data for analysis are the textual parts of the whole textbook for Primary 1 students, as this book is used at the beginning of citizenship education in primary schools. It is significant because it promotes messages about CCE in the foundation years of a child's education. The findings of this study show that the four revealed ideologies, namely pragmatism, communitarianism, nationalism, and multiculturalism, are not only rooted in the national history but also updated and explained by the current demands for Singapore’s sustainable thriving and prosperity. The study ends with a discussion of its implications. By pointing out the ideologies in this textbook and how they are embedded in the discourse, the study may help teachers and textbook writers recognize the possible political involvement in the book and thereby develop their awareness of the implicit influence of lexical choice on their teaching and writing. In addition, by exploring the ideologies in this book and comparing them with those in past textbooks, the study offers researchers in this area insight into how language influences readers and reflects certain social demands.

Keywords: citizenship education, critical discourse analysis, sociolinguistics, textbook analysis

Procedia PDF Downloads 60
567 Higher Education and the Economy in Western Canada: Is Institutional Autonomy at Risk?

Authors: James Barmby

Abstract:

Canada’s westernmost provinces of British Columbia and Alberta are similar in many respects, as both rely on volatile natural resources for major portions of their economies. The two provinces have banded together to develop mutually beneficial trade, investment and labour market mobility rules, but in developing their systems of higher education, the two provinces are attempting to align higher education programs with economic development objectives by quite different means. In British Columbia, the recently announced initiative, B.C.'s Skills for Jobs Blueprint, will “make sure education and training programs are aligned with the demands of the labor market.” Meanwhile in Alberta, the province’s institutions of higher education are enjoying the tenth year of their membership in the Campus Alberta Quality Council, which makes recommendations to government on issues related to post-secondary education, including the approval of new programs. In B.C., public institutions of higher education are encouraged to comply with government objectives and are rewarded with targeted funds for their efforts. In Alberta, the institutions as a system tell the government what programs they want to offer, and government can agree or decline to fund these programs through a ministerial approval process. In comparing the two higher education systems, the question emerges as to which one is more beneficial to the province: the one where change is directed primarily by financial incentives to achieve economic objectives, or the one where institutions make recommendations to the government for changes in programs to achieve institutional objectives? How is institutional autonomy affected in each strategy? Does institutional autonomy matter anymore? In recent years, much has been written in regard to academic freedom, but less about institutional autonomy, which is seen by many as essential to protecting academic freedom.
However, while institutional autonomy means freedom from government control, it does not necessarily mean self-government. In this study, a comparison of the two higher education systems is made using recent government policy initiatives in both provinces, and responses to those actions by the higher education institutions. The findings indicate that the economic needs in both provinces take precedence over issues of institutional autonomy.

Keywords: Alberta, British Columbia, institutional autonomy, funding

Procedia PDF Downloads 701
566 The Efficacy of Psychological Interventions for Psychosis: A Systematic Review and Network Meta-Analysis

Authors: Radu Soflau, Lia-Ecaterina Oltean

Abstract:

Background: Increasing evidence supports the efficacy of psychological interventions for psychosis. However, it is unclear which of these interventions is most likely to address negative psychotic symptoms and related outcomes. We aimed to determine the relative efficacy of psychological and psychosocial interventions for negative symptoms, overall psychotic symptoms, and related outcomes. Methods: To attain this goal, we conducted a systematic review and network meta-analysis. We searched for potentially eligible trials in the PubMed, EMBASE, PsycInfo, Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov databases up until February 08, 2022. We included randomized controlled trials that investigated the efficacy of psychological interventions for adults with psychosis. We excluded interventions for prodromal or “at risk” individuals, as well as patients with serious co-morbid medical or psychiatric conditions (other than depressive and/or anxiety disorders). Two researchers conducted study selection and performed data extraction independently. Analyses were run using the STATA network and mvmeta packages, applying a random-effects model under a frequentist framework in order to compute standardized mean differences or risk ratios. Findings: We identified 47844 records and screened 29466 records for eligibility. The majority of eligible interventions were delivered in addition to pharmacological treatment. Treatment as usual (TAU) was the most frequent common comparator. Theoretically driven psychological interventions generally outperformed TAU at post-test and follow-up, displaying small and small-to-medium effect sizes. A similar pattern of results emerged in sensitivity analyses focused on studies that employed an inclusion criterion for relevant negative symptom severity. Conclusion: While the efficacy of some psychological interventions is promising, there is a need for more high-quality studies, as well as more trials directly comparing psychological treatments for negative psychotic symptoms.
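
The standardized mean differences pooled in such an analysis are, in their simplest continuous-outcome form, Cohen's d: the difference in group means divided by the pooled standard deviation. A minimal sketch; the group statistics below are made up for illustration and are not from the review:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with pooled SD: puts
    symptom scales with different units on one scale."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical: intervention arm scores 2 points lower (better)
# than TAU on a negative-symptom scale with SD 4 in both arms
d = cohens_d(m1=10.0, s1=4.0, n1=50, m2=12.0, s2=4.0, n2=50)
```

A value of about 0.2 is conventionally read as a small effect and 0.5 as medium, matching the "small to small-to-medium" range reported above.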

Keywords: psychosis, network meta-analysis, psychological interventions, efficacy, negative symptoms

Procedia PDF Downloads 101
565 Protective Effect of Wheat Grass (Triticum durum) against Oxidative Damage Induced by Lead: Study of Some Biomarkers and Histology of Some Organs in Male Wistar Rats

Authors: Mansouri Ouarda, Abdennour Cherif, Saidi Malika

Abstract:

Since the industrial revolution, many anthropogenic activities have caused considerable environmental changes. Lead is a very dangerous disruptor of the body's functioning. In this context, the current study aims to evaluate a natural therapy using wheat grass (Triticum durum) against lead toxicity in male Wistar rats. The rats were divided into three groups: the control group, the group treated with 600 mg/kg food of lead only (Pb), and the group treated with the combination of 600 mg/kg of food and 9 g/rat/day of wheat grass (Pb-Bl). The duration of the treatment was 6 weeks. The biometric results for the organs (thyroid, kidney, testis and epididymis) show no significant difference between the three groups. Measurement of a few biochemical and hormonal parameters shows a decrease in the concentrations of the hormones T3 and TSH in the Pb-only group compared to the control and Pb-Bl groups. These results were confirmed by the study of histological sections: morphological changes, represented by a shrinking volume of the vesicles, appeared in the group treated with Pb alone, while a return of the follicle structure to the normal state was observed in the Pb-Bl group. The serum concentrations of testosterone, urea and creatinine were significantly increased in the group treated with Pb only in relation to the control and Pb-Bl groups, whereas the glucose level did not show any significant difference. The histological study of the kidney, testis and epididymis shows no modification in the Pb-Bl group compared to the control. The renal parenchyma shows a dilation of the distal and proximal tubules, causing a tubular nephropathy in the group treated with Pb only. The testes show destruction or absence of germ cells, and the lumina of some seminiferous tubules are almost empty. Conclusion: Supplementation with the plant Triticum durum caused a considerable improvement, ensuring the return of the investigated parameters to the normal state.

Keywords: creatinine, glucose, histological sections, T3, TSH, testosterone

Procedia PDF Downloads 378
564 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected by maximum information (or selection from the posterior) and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and then comparing it to traditional CAT models designed using IRT. The study uses Python as the base coding language, pymc for statistical modelling of the IRT and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity: the IRT model has to re-estimate the ability every time it receives a request, whereas the prediction from a neural network can be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network’s capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and can be used to learn functions that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
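
The ML ability estimation that the abstract describes as re-run on every request can be sketched as Newton-Raphson on the 2PL log-likelihood. This is a generic illustration under assumed item parameters, not the authors' pymc/scikit-learn pipeline; the response pattern and item parameters below are hypothetical:

```python
import math

def ability_mle(responses, a, b, iters=50):
    """Maximum-likelihood ability estimate under a 2PL IRT model,
    found by Newton-Raphson on the log-likelihood."""
    theta = 0.0
    for _ in range(iters):
        # P(correct | theta) for each item
        p = [1.0 / (1.0 + math.exp(-ai * (theta - bi)))
             for ai, bi in zip(a, b)]
        grad = sum(ai * (ri - pi) for ai, ri, pi in zip(a, responses, p))
        hess = -sum(ai**2 * pi * (1 - pi) for ai, pi in zip(a, p))
        theta -= grad / hess  # Newton step
    return theta

# hypothetical 5-item response pattern and item parameters
responses = [1, 0, 1, 1, 0]
a = [1.0] * 5                      # discriminations
b = [-1.0, -0.5, 0.0, 0.5, 1.0]    # difficulties
theta = ability_mle(responses, a, b)
```

This iterative solve is exactly the per-request cost that a trained neural network regressor could replace with a single forward pass.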

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 173
563 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models that identify those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to the Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential to facilitate the management of patients with a specific disease. Health interventions or changes in life style can therefore be guided by these models to improve the health of individuals at risk.
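
The evaluation measures listed above all follow from the four cells of a binary confusion matrix. A minimal sketch (the counts below are hypothetical, sized to a 350-record set like the study's, not the study's actual results):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard evaluation measures for a binary classifier,
    computed from the confusion-matrix cells."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# hypothetical confusion matrix for a 350-record test set
m = diagnostic_metrics(tp=40, fp=10, fn=5, tn=295)
```

Ranking models by sensitivity, as the study does, asks which classifier misses the fewest true infarction cases.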

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 429
562 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings

Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol

Abstract:

After decades of conventional shapes, irregular forms with complex geometries are becoming more popular for the form generation of tall buildings all over the world. This trend has recently brought out diverse building forms, such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities related to the form and structural system, lateral load effects become of greater importance for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through case study research, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed and the results compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors while keeping the same plan area and story height. The performance-based measures under investigation have been evaluated in conjunction with the architectural design. Aerodynamic effects have been analyzed by both wind tunnel tests and computational methods. High-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects on global and local scales. Comparisons of flat and real surface models were conducted to further evaluate the effects of the twisting form without the contribution of the façade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind, and particularly the across-wind, direction. Compared to its prismatic counterpart, the twisting model is superior in reducing the vortex-shedding dynamic response by disorganizing the wind vortices. Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they can still be feasible and viable, with their attractive images, in the realm of tall buildings.

Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation

Procedia PDF Downloads 232
561 An Inquiry on Imaging of Soft Tissues in Micro-Computed Tomography

Authors: Matej Patzelt, Jana Mrzilkova, Jan Dudak, Frantisek Krejci, Jan Zemlicka, Zdenek Wurst, Petr Zach, Vladimir Musil

Abstract:

Introduction: Micro-CT is widely used for the examination of bone structures and teeth. On the other hand, visualization of soft tissues is still limited. The goal of our study was to elaborate a methodology for imaging soft tissue samples in micro-CT. Methodology: We used organs of rats and mice. We either prepared the organs and fixed them in contrast solution, or cannulated the blood vessels and injected them for imaging of the vascular system. First, we scanned native specimens; then we created corrosive specimens using resins. In the next step, we injected the vascular system with either the AuroVist or the Exitron contrast agent. We then focused on increasing soft tissue contrast, scanning samples fixed in Lugol solution, in pure ethanol, and in formaldehyde solution. All the methods used were afterwards compared. Results: Native specimens did not provide sufficient tissue contrast in any of the organs. Corrosive samples of the bloodstream provided great contrast and detail; on the other hand, it was necessary to destroy the organ. A further possibility examined was injection of the AuroVist contrast, which leads to great bloodstream contrast. Injection of the Exitron contrast agent did not provide as great a contrast as AuroVist. The soft tissues (kidney, heart, lungs, brain, and liver) were best visualized after fixation in ethanol. This type of fixation showed the best results in all studied tissues. Lugol solution gave great results in muscle tissue. Fixation in formaldehyde solution showed a quality of tissue contrast similar to ethanol. Conclusion: Before imaging, we first need to determine which structures of the soft tissues we want to visualize. In the case of the bloodstream, AuroVist and corrosive specimens were best. Muscle tissue is best visualized with Lugol solution. In the case of organs containing cavities, such as the kidneys or brain, ethanol fixation was the best approach.

Keywords: experimental imaging, fixation, micro-CT, soft tissues

Procedia PDF Downloads 323
560 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not sufficiently separate noisy cells from normal cells. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices that belong only weakly to their cluster. This strategy is based on the observation that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.

Keywords: cancer research, graph theory, machine learning, single cell analysis

559 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches are investigated for supercavitating torpedoes. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots with classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. However, in order to reach such high speeds, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is now implemented in conventional torpedoes. Supercavitation is a type of cavitation, but one that is more stable and continuous than other cavitation types. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speeds through the water inside a fully developed cavity. Conventional torpedoes are termed supercavitating torpedoes when the torpedo moves in a cavity envelope generated by a cavitator in the nose section and a solid fuel rocket engine in the rear section. There are two types of supercavitation phase: natural and artificial. In this study, natural cavitation on disk cavitators is investigated using numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified against empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_D) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters: cavity length (L_c/d_n), cavity diameter (d_m/d_n), and cavity fineness ratio (L_c/d_m). This paper serves as a feasibility study, carrying out numerical solutions of the supercavitation phenomena and comparing them with empirical equations.
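A back-of-the-envelope sketch of the parameters named above can be made with standard semi-empirical relations for a disk cavitator: σ = (p∞ − p_c)/(½ρU²), C_D ≈ C_D0(1 + σ) with C_D0 ≈ 0.82, and the commonly quoted approximation d_m/d_n ≈ √(C_D/σ). The constants and operating conditions below (20 °C water vapor pressure, a 50 mm cavitator at ~200 knots) are illustrative assumptions, not values from the paper:

```python
import math

RHO = 1000.0  # water density, kg/m^3 (assumed)

def supercav_params(U, d_n, p_inf=101325.0, p_c=2340.0):
    """Rough supercavitation numbers for a disk cavitator.

    U     : vehicle speed, m/s
    d_n   : cavitator diameter, m
    p_inf : ambient pressure, Pa (sea level assumed)
    p_c   : cavity pressure ~ water vapor pressure at 20 C, Pa
    """
    q = 0.5 * RHO * U ** 2               # dynamic pressure
    sigma = (p_inf - p_c) / q            # cavitation number
    c_d = 0.82 * (1.0 + sigma)           # disk drag coefficient, C_D0 ~ 0.82
    area = math.pi * (d_n / 2.0) ** 2    # cavitator frontal area
    drag = q * area * c_d                # drag force D, N
    d_ratio = math.sqrt(c_d / sigma)     # d_m/d_n, max cavity / cavitator dia.
    return sigma, c_d, drag, d_ratio

# ~200 knots ~ 103 m/s, 50 mm disk cavitator
sigma, c_d, drag, d_ratio = supercav_params(103.0, 0.05)
print(f"sigma={sigma:.4f}  C_D={c_d:.3f}  D={drag:.0f} N  d_m/d_n={d_ratio:.2f}")
```

At these conditions σ is of order 0.02, i.e. deep in the supercavitating regime, and the predicted cavity is several cavitator diameters wide — the kind of numbers a CFD run would then be checked against.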

Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters

558 Modeling of Void Formation in 3D Woven Fabric During Resin Transfer Moulding

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Jan Kočí, Andy Long

Abstract:

Resin transfer molding (RTM) is increasingly used for manufacturing high-quality composite structures due to its advantages over prepregs, such as low-cost, out-of-autoclave processing. However, to retain these advantages, it is critical to reduce the void content during injection. Reinforcements commonly used in RTM, such as woven fabrics, have dual-scale porosity, with meso-scale pores between the yarns and micro-scale pores within the yarns. Due to the fabric geometry and the nature of the dual-scale flow, the flow front during injection develops a complicated fingering pattern, which leads to void formation. Analytical modeling of void formation has been widely studied for 2D woven fabrics. However, there is scope to extend it to 3D fabrics, in which the in-plane yarn layers are confined by additional through-thickness binder yarns. In the present study, the structural morphology of the tortuous pore spaces in the 3D fabric has been studied and implemented using the open-source software TexGen. An analytical model for void and fingering formation has been implemented based on an idealized unit cell model of the 3D fabric. Since the pore spaces between the yarns are free domains, this region is treated as flow through connected channels, whereas intra-yarn flow is modeled using Darcy’s law with an additional term to account for capillary pressure. The void fraction is then characterised using a void formation criterion that compares the fill times of the inter-yarn and intra-yarn flows. Moreover, the dual-scale two-phase flow of resin with air has been simulated in the CFD solvers OpenFOAM and ANSYS to predict the probable locations of voids and validate the analytical model. The use of an idealised unit cell model gives the insight needed to optimise the meso-scale geometry of the reinforcement and the injection parameters to minimise the void content during the LCM process.
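The void formation criterion — compare how long the resin takes to fill each flow scale over one unit cell — can be illustrated with a minimal sketch. It assumes plane-Poiseuille flow in the inter-yarn channels and Darcy flow with a capillary pressure term inside the yarns; all numerical values are illustrative assumptions, not parameters from the paper:

```python
MU = 0.1  # resin viscosity, Pa.s (assumed)

def fill_times(L, dp, gap, K, phi, p_cap):
    """Fill times of the two flow scales over a unit-cell length L.

    Inter-yarn (meso) : plane-Poiseuille slot of gap `gap`, driven by dp.
    Intra-yarn (micro): Darcy flow, permeability K, porosity phi, with
                        capillary pressure p_cap aiding the micro-flow.
    """
    t_channel = 12.0 * MU * L ** 2 / (gap ** 2 * dp)       # meso scale
    t_yarn = MU * phi * L ** 2 / (K * (dp + p_cap))        # micro scale
    return t_channel, t_yarn

def void_location(t_channel, t_yarn):
    # If the channels fill first, air is trapped inside the yarns
    # (micro-voids); if the yarns fill first, air is trapped between
    # them (meso-voids).
    if t_channel < t_yarn:
        return "micro-void (intra-yarn)"
    return "meso-void (inter-yarn)"

# illustrative unit cell: 2 mm long, 0.2 mm channel gap, K = 1e-12 m^2
t_ch, t_y = fill_times(L=2e-3, dp=1e5, gap=2e-4, K=1e-12, phi=0.6, p_cap=1e4)
print(f"t_channel={t_ch:.2e} s, t_yarn={t_y:.2e} s -> {void_location(t_ch, t_y)}")
```

With these assumed values the channel fills orders of magnitude faster than the yarn, so the criterion predicts intra-yarn micro-voids — the regime where a capillary-dominated parameter change (slower injection, higher p_cap) would shift the balance.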

Keywords: 3D fiber, void formation, RTM, process modelling
