Search results for: parallel flow
912 A Textile-Based Scaffold for Skin Replacements
Authors: Tim Bolle, Franziska Kreimendahl, Thomas Gries, Stefan Jockenhoevel
Abstract:
The therapeutic treatment of extensive, deep wounds is limited. Autologous split-skin grafts are used as the so-called ‘gold standard’. The most common deficits are defects at the donor site, the risk of scarring, and the limited availability and quality of the autologous grafts. The aim of this project is a tissue-engineered dermal-epidermal skin replacement to overcome the limitations of the gold standard. A key requirement for the development of such a three-dimensional implant is the formation of a functional capillary-like network inside the implant to ensure a sufficient nutrient and gas supply. Tailored three-dimensional warp-knitted spacer fabrics are used to reinforce the mechanically weak fibrin gel-based scaffold and, further, to create a directed in vitro pre-vascularization along the parallel-oriented pile yarns within a co-culture. In this study, various three-dimensional warp-knitted spacer fabrics were developed in a factorial design to analyze the influence of machine parameters such as the stitch density and the pattern of the fabric on scaffold performance, and further to determine suitable parameters for successful fibrin gel incorporation and a physiological performance of the scaffold. The fabrics were manufactured on a Karl Mayer double-bar raschel machine DR 16 EEC/EAC. A fine machine gauge of E30 was used to ensure a high pile yarn density for sufficient nutrient, gas and waste exchange. In order to ensure a high mechanical stability of the graft, the fabrics were made of biocompatible PVDF yarns. Key parameters such as the pore size, porosity and stress/strain behavior were investigated under standardized, controlled climate conditions. The influence of the input parameters on the mechanical and morphological properties as well as the ability of fibrin gel incorporation into the spacer fabric was analyzed.
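A common way to estimate the total porosity investigated above is from the fabric's areal mass, thickness, and fiber density; a minimal sketch (the formula is standard for spacer fabrics, and all numerical values are illustrative assumptions, not data from the study):

```python
# Estimate total porosity of a spacer fabric from its areal mass,
# thickness, and the density of the constituent PVDF fiber.
# All numerical values below are illustrative, not study data.

PVDF_DENSITY_G_CM3 = 1.78  # typical bulk density of PVDF

def spacer_fabric_porosity(areal_mass_g_m2: float,
                           thickness_mm: float,
                           fiber_density_g_cm3: float = PVDF_DENSITY_G_CM3) -> float:
    """Porosity = 1 - (volume occupied by fiber / total fabric volume)."""
    # Fiber volume per m^2 of fabric, in cm^3: mass / density.
    fiber_volume_cm3 = areal_mass_g_m2 / fiber_density_g_cm3
    # Total fabric volume per m^2, in cm^3: 10000 cm^2 x thickness in cm.
    total_volume_cm3 = 10_000 * (thickness_mm / 10)
    return 1 - fiber_volume_cm3 / total_volume_cm3

# Example: a hypothetical 5 mm thick spacer fabric weighing 600 g/m^2.
print(round(spacer_fabric_porosity(600, 5.0), 3))  # → 0.933
```
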
Subsequently, the pile yarns of the spacer fabrics were colonized with Human Umbilical Vein Endothelial Cells (HUVECs) to analyze the ability of the fabric to further function as a guiding structure for directed vascularization. The cells were stained with DAPI and investigated using fluorescence microscopy. The analysis revealed that the stitch density and the binding pattern have a strong influence on both the mechanical and morphological properties of the fabric. As expected, the incorporation of the fibrin gel was significantly improved with larger pore sizes and porosities, whereas the mechanical strength decreased. Furthermore, the colonization trials revealed a high cell distribution and density on the pile yarns of the spacer fabrics. For a tailored reinforcing structure, the minimum porosity and pore size that still ensure complete incorporation of the reinforcing structure into the fibrin gel matrix need to be evaluated. That will enable a mechanically stable dermal graft with a dense vascular network for a sufficient nutrient and oxygen supply of the cells. The results are promising for subsequent research in the field of reinforcing mechanically weak biological scaffolds and developing functional three-dimensional scaffolds with an oriented pre-vascularization.
Keywords: fibrin-gel, skin replacement, spacer fabric, pre-vascularization
Procedia PDF Downloads 256
911 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments
Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi
Abstract:
Various kinds of pegboards have been developed and used widely in rehabilitation research and clinics for the evaluation and training of patients' hand function. A common measure with these pegboards is the total execution time, assessed with a tester's stopwatch. The introduction of electrical and automatic measurement technology to the apparatus, on the other hand, has been delayed. The present work introduces the development of a pegboard with electric sensors to detect the moments of each peg's insertion and removal. The work also gives fundamental data obtained from a group of healthy young individuals who performed peg-transfer tasks using the pegboard developed. Through trial and error in pilot tests, two 10-hole pegboard boxes, with a small photo-reflector and a DC amplifier installed at the bottom of each hole, were designed and built by the present authors. The amplified analogue electric signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored on a PC. The boxes were set on a test table at different distances (25, 50, 75, and 125 mm) in parallel to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data gathered showed clear-cut light interruption/continuation moments caused by the pegs, allowing the pull-out and insertion times of each peg to be determined accurately (no tester's error involved) and precisely (on the order of milliseconds). This further permitted computation of individual peg movement duration (PMD: from peg lift-off to insertion) apart from hand reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long (> mean + 3 SD) PMD, which was readily detected from an examination of the data distribution.
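The outlier-exclusion rule just described (flagging PMDs beyond mean + 3 SD) and the preference for the median can be sketched as follows; the durations below are synthetic stand-ins, not the study's measurements:

```python
import statistics

def summarize_pmd(pmd_ms):
    """Flag peg-movement durations beyond mean + 3*SD (e.g. dropped pegs)
    and summarize the remainder with the median, which suits the
    right-skewed PMD distributions described in the abstract."""
    mean = statistics.mean(pmd_ms)
    sd = statistics.stdev(pmd_ms)
    cutoff = mean + 3 * sd
    kept = [x for x in pmd_ms if x <= cutoff]
    dropped = [x for x in pmd_ms if x > cutoff]
    return statistics.median(kept), dropped

# Synthetic right-skewed PMDs (ms) with one accidental peg drop at 2500 ms.
pmds = [310, 315, 320, 325, 330, 335, 340, 345, 350, 355,
        360, 365, 370, 375, 380, 385, 390, 400, 410, 2500]
median_pmd, outliers = summarize_pmd(pmds)
print(median_pmd, outliers)  # → 355 [2500]
```

Note that with very small samples a single extreme value inflates the SD enough to mask itself, so in practice the rule needs a reasonably large trial count (as in the 80-transfer blocks used here).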
The PMD data were commonly right-skewed, suggesting that the median is a better estimate of individual PMD than the mean. Repeated-measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed for an accurate evaluation of PMD. The gender effect was non-significant. Performance consistency was also evaluated using quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. Measurement reliability was further examined using the intraclass correlation obtained from 14 subjects who performed the 25 and 125 mm hole-distance tasks in two test sessions 7-10 days apart. Intraclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We concluded that the sensor pegboard developed in the present study can provide accurate (excluding tester's errors) and precise (millisecond-resolution) timing of peg movement separated from that of hand movement. It can also easily detect and automatically exclude erroneous execution data from a subject's standard data. These features should lead to a better evaluation of hand dexterity than the widely used conventional pegboards.
Keywords: hand, dexterity test, peg movement time, performance consistency
Procedia PDF Downloads 132
910 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of this work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates deriving the Poisson equation for pressure from the Navier-Stokes equations and using Chebyshev-finite difference techniques to resolve it effectively. For the mathematical analysis, we consider bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation
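As a sketch of the derivation referred to above (for the incompressible case, with constant density ρ and kinematic viscosity ν), taking the divergence of the momentum equation and using continuity eliminates the time-derivative and viscous terms:

```latex
% Incompressible Navier-Stokes momentum equation and continuity:
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0.
% Taking the divergence of the momentum equation: the terms
% div(u_t) = (div u)_t and nu * Lap(div u) vanish by continuity,
% leaving the Poisson equation for pressure:
\nabla^{2} p
  = -\rho\,\nabla\cdot\bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}\bigr]
  = -\rho\,\frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i}.
```

On the bounded, non-periodic domains considered here, this elliptic equation is closed with a Neumann boundary condition obtained by projecting the momentum equation onto the wall normal.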
Procedia PDF Downloads 92
909 Temperature Dependent Current-Voltage (I-V) Characteristics of CuO-ZnO Nanorods Based Heterojunction Solar Cells
Authors: Venkatesan Annadurai, Kannan Ethirajalu, Anu Roshini Ramakrishnan
Abstract:
Copper oxide (CuO) and zinc oxide (ZnO) based coaxial (CuO-ZnO nanorod) heterojunctions have been of interest to various research communities for solar cell, light emitting diode (LED) and photodetector applications. Copper oxide (CuO) is a p-type material with a band gap of 1.5 eV, and it is considered an attractive absorber material for solar cell applications due to its high absorption coefficient and long minority carrier diffusion length. Similarly, n-type ZnO nanorods possess many attractive advantages over thin films, such as light-trapping ability and photosensitivity owing to the presence of oxygen-related hole traps at the surface. Moreover, the abundant availability, non-toxicity, and inexpensiveness of these materials make them suitable for potentially cheap, large-area, and stable photovoltaic applications. However, the efficiency of CuO-ZnO nanorod heterojunction based devices is greatly affected by interface defects, which generally lead to poor performance. In spite of this potential, not much work has been carried out to understand the interface quality and the transport mechanism involved across the CuO-ZnO nanorod heterojunction. Therefore, a detailed investigation of the CuO-ZnO heterojunction is needed to understand the interface, which affects its photovoltaic performance. Herein, we have fabricated CuO-ZnO nanorod based heterojunctions by simple hydrothermal and electrodeposition techniques and investigated the interface quality by carrying out temperature-dependent (300-10 K) current-voltage (I-V) measurements in the dark and under illumination with visible light. Activation energies extracted from the temperature-dependent I-V characteristics reveal that recombination and tunneling mechanisms across the interfacial barrier play a significant role in the current flow.
Keywords: heterojunction, electrical transport, nanorods, solar cells
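One common way to extract activation energies from temperature-dependent I-V data of the kind described above is an Arrhenius analysis of the current; a minimal sketch with a synthetic, illustrative data set (the 0.25 eV barrier and prefactor are assumptions, not values from the study):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, currents_A):
    """Least-squares slope of ln(I) vs 1/T; in an Arrhenius picture
    I ~ exp(-Ea / (kB*T)), so Ea = -slope * kB."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(I) for I in currents_A]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * K_B

# Synthetic currents generated with a known 0.25 eV barrier.
temps = [150.0, 200.0, 250.0, 300.0]
currents = [1e-3 * math.exp(-0.25 / (K_B * T)) for T in temps]
print(round(activation_energy(temps, currents), 4))  # → 0.25
```

A small extracted Ea (well below half the barrier height expected from the band alignment) is what typically signals tunneling-assisted rather than purely thermionic transport.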
Procedia PDF Downloads 223
908 Numerical Study of Flapping-Wing Flight of Hummingbird Hawkmoth during Hovering: Longitudinal Dynamics
Authors: Yao Jie, Yeo Khoon Seng
Abstract:
In recent decades, flapping-wing aerodynamics has attracted great interest. Understanding the physics of biological flyers such as birds and insects can help improve the performance of micro air vehicles. The present research focuses on the aerodynamics of insect-like flapping-wing flight with the approach of numerical computation. An insect model of the hawkmoth is adopted in the numerical study, currently with a rigid-wing assumption. The numerical model integrates the computational fluid dynamics of the flow and active control of the wing kinematics to achieve stable flight. The computation grid is a hybrid consisting of background Cartesian nodes and clouds of mesh-free grids around immersed boundaries. The generalized finite difference method is used in conjunction with singular value decomposition (SVD-GFD) in the computational fluid dynamics solver to study the dynamics of a free-hovering hummingbird hawkmoth. The longitudinal dynamics of the hovering flight is governed by three control parameters, i.e., the wing plane angle, the mean positional angle and the wing beating frequency. In the present work, a PID controller works out the appropriate control parameters with the insect motion as input. The controller is tuned to achieve the desired maneuvering of the insect flight. The numerical scheme in the present study is shown to be accurate and stable for simulating the flight of the hummingbird hawkmoth, which has a relatively high Reynolds number. The PID controller is responsive in providing feedback to the wing kinematics during hovering flight. The simulated hovering flight agrees well with real insect flight. The present numerical study offers a promising route to investigate the free-flight aerodynamics of insects, which could overcome some of the limitations of experiments.
Keywords: aerodynamics, flight control, computational fluid dynamics (CFD), flapping-wing flight
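A discrete PID loop of the kind described, driving a toy stand-in for the flow solver, might look like the following; the plant model, gains, setpoint, and the choice of wing-beat frequency as the actuated parameter are all hypothetical illustrations, not the study's values:

```python
class PID:
    """Discrete PID controller; gains and setpoint are illustrative,
    not the values used in the study."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant standing in for the CFD solver: a wing-beat frequency offset
# drives vertical velocity (with damping), which integrates into altitude.
pid = PID(kp=4.0, ki=0.5, kd=1.0, dt=0.01)
altitude, velocity = 0.0, 0.0
for _ in range(5000):                       # 50 s of simulated hover
    freq_offset = pid.update(setpoint=1.0, measurement=altitude)
    velocity += (freq_offset - 0.2 * velocity) * 0.01   # damped response
    altitude += velocity * 0.01
print(round(altitude, 3))
```

In the actual coupled simulation the "plant" is the SVD-GFD flow solver, and the controller adjusts all three kinematic parameters rather than a single frequency offset.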
Procedia PDF Downloads 345
907 Effects of Using Clinical Practice Guidelines for Caring for Patients with Severe Sepsis or Septic Shock on Clinical Outcomes Based on the Sepsis Bundle Protocol at the ICU of Songkhla Hospital Thailand
Authors: Pornthip Seangsanga
Abstract:
Sepsis or septic shock needs urgent care because it causes a high mortality rate if patients do not receive timely treatment. Songkhla Hospital did not have a clear system or clinical practice guidelines for the treatment of patients with severe sepsis or septic shock, which contributed to the problem. The objective was to compare clinical outcomes based on the protocol, after using the clinical guidelines, between the Emergency Room, the Intensive Care Unit, and the Ward. This quasi-experimental study was conducted on 50 subjects who were diagnosed with severe sepsis or septic shock from December 2013 to May 2014. The data were collected using a nursing care and referral record form for patients with severe sepsis or septic shock at Songkhla Hospital. The record form had been tested for its validity by three experts, and the IOC was 1. The mortality rate in patients with severe sepsis or septic shock who were moved from the ER to the ICU was significantly lower than that of patients moved from the Ward to the ICU within 48 hours. This was because patients moved from the ER to the ICU received more fluid within the first six hours according to the protocol, which helped them achieve adequate tissue perfusion within the first six hours and improved blood flow to the kidneys; the urine output of these patients was higher, above 0.5 cc/kg/hr, compared with that of patients moved from the Ward to the ICU. This study shows that patients with severe sepsis or septic shock need to be treated immediately. Using the clinical practice guidelines, along with timely diagnosis and treatment based on the sepsis bundle, to give a sufficient and suitable amount of fluid to improve blood circulation and blood pressure can clearly prevent or reduce the severity of complications.
Keywords: clinical practice guidelines, caring, septic shock, sepsis bundle protocol
Procedia PDF Downloads 295
906 Digital Fashion: An Integrated Approach to Additive Manufacturing in Wearable Fashion
Abstract:
This paper presents a digital fashion production methodology and workflow based on fused deposition modeling additive manufacturing technology, as demonstrated through a 3D printed fashion show held at Southeast University in Nanjing, China. Unlike traditional fashion, 3D printed fashion allows for the creation of complex geometric shapes and unique structural designs, facilitating diverse reconfiguration and sustainable production of textile fabrics. The proposed methodology includes two components: morphogenesis and the 3D printing process. The morphogenesis part comprises digital design methods such as mesh deformation, structural reorganization, particle flow stretching, sheet partitioning, and spreading methods. The 3D printing process section includes three types of methods: sculptural objects, multi-material composite fabric, and self-forming composite fabrics. This paper focuses on multi-material composite fabrics and self-forming composite fabrics, both of which involve weaving fabrics with 3D-printed material sandwiches. Multi-material composite fabrics create specially tailored fabric from the original properties of the printing path and multiple materials, while self-forming fabrics apply pre-stress to the flat fabric and then print the sandwich, allowing the fabric's own elasticity to interact with the printed components and shape into a 3D state. The digital design method and workflow enable the integration of abstract sensual aesthetics and rational thinking, showcasing a digital aesthetic that challenges conventional handicraft workshops. Overall, this paper provides a comprehensive framework for the production of 3D-printed fashion, from concept to final product.
Keywords: digital fashion, composite fabric, self-forming structure, additive manufacturing, generating design
Procedia PDF Downloads 121
905 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis
Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath
Abstract:
The increasing global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are major sources of greenhouse gases. One of the ways to reduce the emission of CO₂ from flue gases is the post-combustion capture process, in which the gas is absorbed into suitable chemical solvents before being emitted into the atmosphere. Alkanolamines are promising solvents for this capture process. Vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by CO₂ loading and the partial pressure of CO₂ without considering the liquid phase. The liquid phase of this system is a complex one, comprising nine species. Online analysis of the process is important to monitor the concentrations of the liquid-phase reacting and product species. Liquid-phase analysis of CO₂-diethanolamine (DEA) solutions was performed by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to an online monitoring experiment. The partial least squares regression method was used for the analysis of the calibration spectra obtained. The models obtained were used for the prediction of DEA and CO₂ concentrations in the online monitoring experiment. The experiment was performed with a newly built recirculating experimental setup in the laboratory. The setup consists of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 40.0 °C. The results obtained indicated that FTIR spectroscopy combined with the partial least squares method is an effective tool for online monitoring of speciation.
Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression
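A minimal sketch of a partial least squares calibration of the kind described, using the NIPALS algorithm for a single response (concentration) on synthetic "spectra"; every numerical value here is an illustrative assumption, not the study's data:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 via the NIPALS algorithm: returns regression coefficients B
    such that y_hat = X @ B (X and y assumed mean-centered)."""
    Xk, yk = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # scores
        p = Xk.T @ t / (t @ t)          # X loadings
        q = (yk @ t) / (t @ t)          # y loading
        Xk = Xk - np.outer(t, p)        # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# Synthetic "spectra": 30 samples x 50 wavenumbers whose absorbances are
# a linear function of a known concentration profile, plus noise.
rng = np.random.default_rng(0)
conc = rng.uniform(0.5, 2.0, size=30)            # e.g. mol/L of a species
pure_spectrum = rng.random(50)
spectra = np.outer(conc, pure_spectrum) + 0.01 * rng.standard_normal((30, 50))

Xc = spectra - spectra.mean(axis=0)
yc = conc - conc.mean()
B = pls1_fit(Xc, yc, n_components=3)
pred = Xc @ B + conc.mean()
print(round(float(np.abs(pred - conc).max()), 3))
```

In practice a separate model of this form would be built for each monitored species (DEA, CO₂-bearing products), with the component count chosen by cross-validation.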
Procedia PDF Downloads 195
904 Design of Low-Emission Catalytically Stabilized Combustion Chamber Concept
Authors: Annapurna Basavaraju, Andreas Marn, Franz Heitmeir
Abstract:
The Advisory Council for Aeronautics Research in Europe (ACARE) calls, in its Vision 2020, for an overall reduction of NOx emissions by 80%. Moreover, small turbo engines have higher fuel-specific emissions than large engines due to their limited combustion chamber size. In order to fulfill these requirements, novel combustion concepts are essential. This motivates the present research on a catalytically stabilized combustion chamber using hydrogen in small jet engines, which is designed and investigated both numerically and experimentally during this project. Catalytic combustion concepts can also be adopted for low-calorific fuels and are therefore not constrained to hydrogen alone. However, hydrogen has a high heating value and has the major advantage of producing only nitrogen oxides as pollutants during combustion, thus eliminating interest in other emissions such as carbon monoxide. In the present work, the combustion chamber is designed based on the ‘rich catalytic, lean burn’ concept. The experiments are conducted for the characteristic operating range of an existing engine. This engine has been tested successfully at the Institute of Thermal Turbomachinery and Machine Dynamics (ITTM), Technical University Graz. Since efficient combustion is a result of proper mixing of the fuel-air mixture, considerable significance is given to the selection of an appropriate mixer. This led to the design of three diverse mixer configurations, which are investigated experimentally and numerically. Subsequently, the best mixer will be fitted to the main combustion chamber and used throughout the experimentation. Furthermore, temperatures and pressures will be recorded at various locations inside the combustion chamber, and the exhaust emissions will also be analyzed.
The instrumented combustion chamber will be examined at engine-relevant inlet conditions for nine different sets of catalysts at the Hot Flow Test Facility (HFTF) of the institute.
Keywords: catalytic combustion, gas turbine, hydrogen, mixer, NOx emissions
Procedia PDF Downloads 303
903 Quantitative Detection of the Conformational Transitions between Open and Closed Forms of Cytochrome P450 Oxidoreductase (CYPOR) at the Membrane Surface in Different Functional States
Authors: Sara Arafeh, Kovriguine Evguine
Abstract:
Cytochromes P450 are enzymes that require a supply of electrons to catalyze the synthesis of steroid hormones, fatty acids, and prostaglandins. Cytochrome P450 oxidoreductase (CYPOR), a membrane-bound enzyme, provides these electrons in its open conformation. CYPOR has two cytosolic domains (the FAD domain and the FMN domain) and an N-terminus in the membrane. In its open conformation, electrons flow from NADPH to FAD and finally to FMN, where cytochrome P450 picks up these electrons. In the closed conformation, cytochrome P450 does not bind to the FMN domain to take the electrons. It was found that when the cytosolic domains are isolated, CYPOR cannot bind to cytochrome P450. This suggested that the membrane environment is important for CYPOR function. This project takes the initiative to better understand the dynamics of full-length CYPOR. Here, we determine the distances between specific sites in the FAD and FMN binding domains of CYPOR by Förster resonance energy transfer (FRET) and ultrafast transient absorption (TA) spectroscopy, with and without NADPH. The approach to determining these distances relies on labeling these sites with red and infrared fluorophores. Membrane attachment is mimicked by inserting CYPOR into lipid nanodiscs. By determining the distances between the donor-acceptor sites in these domains, we can observe the open/closed conformations upon reducing CYPOR in the presence and absence of cytochrome P450. Such a study is important to better understand the CYPOR mechanism of action in various endosomal membranes, including hepatic CYPOR, which is vital in plasma cholesterol homeostasis. By investigating the conformational cycle of CYPOR, we can design drugs that would be more efficient in affecting steroid hormone levels and the metabolism of toxins catalyzed by cytochrome P450.
Keywords: conformational cycle of CYPOR, cytochrome P450, cytochrome P450 oxidoreductase, FAD domain, FMN domain, FRET, ultrafast TA spectroscopy
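The distance determination by FRET mentioned above rests on the standard relation between transfer efficiency E and donor-acceptor distance r, with R₀ the Förster radius of the chosen dye pair:

```latex
% FRET efficiency as a function of donor-acceptor distance r:
E = \frac{1}{1 + (r/R_0)^{6}}
% Inverting for the distance from a measured efficiency:
\quad\Longrightarrow\quad
r = R_0 \left(\frac{1}{E} - 1\right)^{1/6}.
```

The steep sixth-power dependence is what makes FRET sensitive to the nanometer-scale domain motions distinguishing the open and closed conformations.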
Procedia PDF Downloads 278
902 Stress and Rhythm in the Educated Nigerian Accent of English
Authors: Nkereke M. Essien
Abstract:
The intention of this paper is to examine stress in the Educated Nigerian Accent of English (ENAE) with the aim of analyzing the stress and rhythmic patterns of Nigerian English. Our aim is also to isolate differences and similarities in the stress patterns studied, to identify what forms the accent of Educated Nigerian English (ENE) speakers and marks them off from other groups or Englishes of the world, to ascertain and characterize it, and to provide documented evidence for its existence. Nigerian stress and rhythmic patterns are significantly different from British English stress and rhythmic patterns; consequently, Educated Nigerian English (ENE) features more stressed syllables than the native speakers' varieties. The excess of stressed syllables causes contiguous stressed syllables (‘Ss’) in the rhythmic flow of ENE, and this brings about a ‘jerky’ rhythm which distorts communication. To ascertain this claim, ten (10) Nigerian speakers who are educated in the English language were selected by a stratified random sampling technique from two federal universities in Nigeria. These speakers belong to the educated class, or standard variety. Their performance was compared to that of a Briton (the control). The metrical system of analysis was used. The respondents were made to read some words and utterances, which were recorded and analyzed perceptually, statistically and acoustically using one-way analysis of variance (ANOVA). The Tukey-Kramer post hoc test, the Wilcoxon matched-pairs signed-ranks test, and the Praat analysis software were used in the analysis. Our findings revealed that the Educated Nigerian English speakers feature more stressed syllables in their productions, spending more time pronouncing stressed syllables and sometimes less time pronouncing the unstressed syllables. Their overall tempo was faster.
The ENE speakers used tone to mark prominence, while the native speaker used stress to mark prominence, as typified by the control. We concluded that the stress pattern of the ENE speakers was significantly different from the native-speaker variety represented by the control's performance.
Keywords: accent, Nigerian English, rhythm, stress
Procedia PDF Downloads 238
901 ABET Accreditation Process for Engineering and Technology Programs: Detailed Process Flow from Criteria 1 to Criteria 8
Authors: Amit Kumar, Rajdeep Chakrabarty, Ganesh Gupta
Abstract:
This paper illustrates the detailed accreditation process of the Accreditation Board for Engineering and Technology (ABET) for accrediting engineering and technology programs. ABET is a non-governmental agency that accredits engineering and technology, applied and natural sciences, and computing sciences programs. ABET was founded on 10th May 1932 by the Institute of Electrical and Electronics Engineers. International industries accept ABET-accredited institutes as having the highest standards in their academic programs. In this accreditation there are eight criteria in general: criterion 1 describes student outcome evaluations; criterion 2 measures the program's educational objectives; criterion 3 covers the student outcomes calculated from the marks obtained by students; criterion 4 establishes continuous improvement; criterion 5 focuses on the curriculum of the institute; criterion 6 is about the faculty of the institute; criterion 7 measures the facilities provided by the institute; and finally, criterion 8 focuses on institutional support for the staff of the institute. In this paper, we focus on the calculation part of each criterion with equations and suitable examples, the files and documentation required for each criterion, and the total workflow of the process. The references and the values used to illustrate the calculations are all taken from the samples provided on ABET's official website. In the final section, we also discuss the criterion-wise score weightage, followed by evaluation with timeframes and deadlines.
Keywords: Engineering Accreditation Committee, Computing Accreditation Committee, performance indicator, Program Educational Objective, ABET Criterion 1 to 7, IEEE, National Board of Accreditation, MOOCS, Board of Studies, stakeholders, course objective, program outcome, articulation, attainment, CO-PO mapping, CO-PO-SO mapping, PDCA cycle, degree certificates, course files, course catalogue
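As a hedged illustration of the kind of attainment arithmetic discussed above, the following sketches course-outcome (CO) attainment from marks, mapped to program outcomes (POs) through a CO-PO matrix; the threshold, correlation levels, and marks are invented for the sketch, not taken from ABET's samples:

```python
# Hypothetical CO/PO attainment arithmetic; all data below are invented.

def co_attainment(marks, max_mark, threshold=0.6):
    """Fraction of students scoring at least `threshold` of the max mark."""
    passed = sum(1 for m in marks if m >= threshold * max_mark)
    return passed / len(marks)

def po_attainment(co_levels, co_po_matrix):
    """Weighted average of CO attainments per PO, with weights taken from
    the CO-PO map (0-3 correlation levels, 0 meaning 'not mapped')."""
    n_pos = len(co_po_matrix[0])
    result = []
    for j in range(n_pos):
        weights = [row[j] for row in co_po_matrix]
        total = sum(weights)
        result.append(sum(c * w for c, w in zip(co_levels, weights)) / total
                      if total else 0.0)
    return result

# Two COs assessed out of 20 marks; a CO-PO map covering three POs.
co1 = co_attainment([14, 9, 18, 12, 16], max_mark=20)   # 4 of 5 pass -> 0.8
co2 = co_attainment([11, 13, 19, 8, 15], max_mark=20)   # 3 of 5 pass -> 0.6
print([round(a, 2) for a in po_attainment([co1, co2], [[3, 0, 1], [2, 3, 1]])])
# → [0.72, 0.6, 0.7]
```
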
Procedia PDF Downloads 57
900 Economics of Milled Rice Marketing in Gombe Metropolis, Gombe State, Nigeria
Authors: Suleh Yusufu Godi, Ado Makama Adamu
Abstract:
Marketing involves all the legal, physical, and economic services which are necessary for moving products from producers to consumers. The more efficiently the marketing functions are performed, the better the marketing system serves farmers, marketing agents, and society at large. Rice marketing ensures the flow of the product from producers to consumers in the form, time and place of need. Therefore, this study examined the profitability of milled rice marketing in the Gombe metropolis, Gombe State. Data were collected using structured questionnaires from ninety randomly selected rice marketers in the Gombe metropolis. The data were analyzed using descriptive statistics, the farm budget technique and regression analysis. The study revealed the total rice marketing cost incurred by rice marketers to be N6,610,214.70. This gives an average of N73,446.83 per marketer and N37.30 per kilogram of rice. The gross income for rice marketers in the Gombe metropolis was N15,064,600.00. This value gives an average of N167,384.44 per rice marketer, or N85.00 per kilogram of rice. The study also revealed the net income for all rice marketers to be N8,454,385.30. This gives an average of N93,937.61 per rice marketer, or N47.70 per kilogram of rice. The study further revealed a marketing margin, marketing efficiency and return per naira invested in rice marketing of 39.30%, 150.16% and N0.56, respectively. The result of the regression analysis shows that age, sex and cost of transportation positively and significantly affect the marketing margin of rice marketers in the Gombe metropolis. However, the main constraints to rice marketing in the Gombe metropolis include inadequate electricity, capital, high transportation costs, price instability and low patronage, among others.
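The farm-budget figures reported above can be cross-checked with a few lines of arithmetic (amounts copied from the abstract; only the subtraction and the per-marketer and per-kilogram averages are reproduced here):

```python
# Farm-budget arithmetic from the reported figures (amounts in naira).
total_cost = 6_610_214.70       # total rice marketing cost
gross_income = 15_064_600.00    # total gross income
n_marketers = 90                # ninety sampled marketers

net_income = gross_income - total_cost
print(round(net_income, 2))                  # total net income → 8454385.3
print(round(net_income / n_marketers, 2))    # per marketer → 93937.61
print(round(85.00 - 37.30, 2))               # net per kg of rice → 47.7
```
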
The study recommends the provision of an adequate electrical power supply in the State, especially the State capital, and also encouraging rice marketers in the Gombe metropolis to form cooperative societies so as to have easy access to credit facilities, especially from formal sources.
Keywords: rice marketers, milled rice, cost and return, marketing margin, efficiency, profitability
Procedia PDF Downloads 77
899 Depollution of the Pinheiros River in the City of São Paulo: Mapping the Dynamics of Conflicts and Coalitions between Actors in Two Recent Depollution Projects
Authors: Adalberto Gregorio Back
Abstract:
Historically, the Pinheiros River, which crosses the urban area of the largest South American metropolis, the city of São Paulo, has been the subject of several interventions involving different interests and multiple demands, including the implementation of road axes and industrial occupation along its floodplains; the dilution of sewage; the generation of electricity, with the reversal of its waters to the Billings Dam; and urban drainage. These processes, together with exclusionary, peripheral urban sprawl with high population density in the peripheries, result in difficulties for the collection and treatment of household sewage, which flows into the tributaries and the Pinheiros River itself. In the last 20 years, two separate projects have been undertaken to clean up its waters. The first, between 2001 and 2011, was the flotation system, aimed at cleaning the river in its own channel with equipment installed near the Billings Dam; the second, more recently, from 2019 to 2022, was the proposal to connect about 74 thousand dwellings to the sewage collection and treatment system, as well as to install treatment plants in the tributaries of the Pinheiros where connection to the system is impracticable, given the irregular occupations. The purpose of this paper is to make a comparative analysis of the dynamics of conflicts, interests and opportunities for coalitions between the actors involved in the two depollution projects for the Pinheiros River. For this, we analyze documents produced by the state government, as well as documents related to the legal disputes that occurred in the first decontamination attempt, involving the sanitation company; the Billings Dam management company, interested in power generation; the city hall; and regular and irregular dwellings not linked to the sanitation system.
Keywords: depollution of the Pinheiros River, interest groups, São Paulo, water energy nexus
Procedia PDF Downloads 105
898 p210 BCR-ABL1 CML with CMML Clones: A Rare Presentation
Authors: Mona Vijayaran, Gurleen Oberoi, Sanjay Mishra
Abstract:
Introduction: p190 BCR-ABL1 CML is often associated with monocytosis. In the case described here, monocytosis is associated with coexisting p210 BCR-ABL1 and CMML clones. Mutation analysis using next-generation sequencing (NGS) in our case showed TET2 and SRSF2 mutations. Aims & Objectives: A 75-year-old male was evaluated for monocytosis and thrombocytopenia. CBC showed Hb 11.8 g/dl, TLC 12,060/cmm, monocytes 35%, platelets 39,000/cmm. Materials & Methods: Bone marrow examination showed a hypercellular marrow, with the myeloid series showing sequential maturation up to neutrophils and 30% monocytes. Immunophenotyping by flow cytometry from bone marrow showed 3% blasts, making chronic myelomonocytic leukemia the likely diagnosis. NGS with a myeloid mutation panel showed TET2 (48.9%) and SRSF2 (32.5%) mutations, further supporting the diagnosis of CMML. To fulfil the WHO diagnostic criteria for CMML, BCR-ABL1 testing by RQ-PCR was sent; the report came back positive for the p210 (B3A2, B2A2) major transcript (M-BCR), with an IS% of 38.418. Result: The patient was counselled regarding the unique presentation of two coexisting clones, p210 CML and CMML. After discussion with an international faculty member with vast experience in CMML, it was decided to start this elderly gentleman on Imatinib 200 mg rather than azacytidine: as ASXL1 was not present, his chances of progressing to AML would be lower, whereas if the CML were left untreated, progression to blast phase would always remain a possibility. After 3 months on Imatinib, his platelet count improved to 80,000-90,000/cmm, but his monocytosis persisted. His 3rd-month BCR-ABL1 IS% was 0.004%. Conclusion: On searching the literature, we found no case reports of coexisting p210 CML with CMML; this might be the first such case report. p190 BCR-ABL1 is often associated with monocytosis, and there are a few case reports of p210 BCR-ABL1 positivity in patients with monocytosis, but none with coexisting CMML.
This case highlights the need to extensively evaluate patients with monocytosis using a next-generation sequencing myeloid mutation panel and BCR-ABL1 by RT-PCR to diagnose and treat them correctly.
Keywords: CMML, NGS, p190 CML, Imatinib
Procedia PDF Downloads 76
897 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels
Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe
Abstract:
The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for 1830 familial breast cancer patients since 1997. From this cohort, 540 were comprehensively screened using high-resolution melting analysis or next generation sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variation but, with the introduction of next generation sequencing in 2017, was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, after which the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters such as DNA quality, quantity, and signal intensity or read depth were verified using positive and negative patients previously tested internationally.
DNA quality and quantity proved to be the critical factors during the verification of both assays. The quantity influenced the relative copy number frequency directly, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when a variant present within the ligation site inhibited ligation. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics because of population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven. Verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the country's genetically diverse ethnic groups, to reduce the false-positive copy number variation rate and increase performance efficiency.
Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa
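The relative copy number logic behind both assays can be sketched as a simple dosage-quotient calculation: probe signals (MLPA peak areas or NGS read counts) are normalized against reference probes and then compared with a normal control sample. This is a generic illustration only; the probe names, thresholds, and data below are hypothetical and do not reflect the laboratory's actual analysis pipeline:

```python
# Illustrative dosage-quotient (relative copy number) calculation.
# A quotient near 1.0 suggests a normal copy number, near 0.5 a
# heterozygous deletion, and near 1.5 a duplication.

def dosage_quotients(sample, control, reference_probes):
    """Per-probe dosage quotients for all non-reference probes."""
    s_ref = sum(sample[p] for p in reference_probes)
    c_ref = sum(control[p] for p in reference_probes)
    return {p: (sample[p] / s_ref) / (control[p] / c_ref)
            for p in sample if p not in reference_probes}

def call_cnv(dq, low=0.7, high=1.3):
    """Call a copy number state from a dosage quotient (thresholds assumed)."""
    if dq < low:
        return "deletion"
    if dq > high:
        return "duplication"
    return "normal"
```

For example, a target probe whose normalized signal is half that of the control yields a quotient of 0.5 and would be called a deletion; a variant under a ligation site or primer binding site can mimic exactly this signature, which is the false-positive mode the abstract describes.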
Procedia PDF Downloads 230
896 The Inherent Flaw in the NBA Playoff Structure
Authors: Larry Turkish
Abstract:
Introduction: The NBA is an example of mediocrity, as will be evident in the following paper. The study examines and evaluates the characteristics of NBA champions. As divisions and playoff teams increase, there is an increase in the probability that the champion originates from the mediocre category. Since its inception in 1947, the league has been mediocre and continues to be so to this day. Why does a professional league allow any team with a less than 50% winning percentage into the playoffs? As long as the finances flow into the league, owners will not change the current algorithm. The objective of this paper is to determine whether the regular season has meaning in finding an NBA champion. Statistical Analysis: The data originate from the NBA website. The following variables are part of the statistical analysis: Rank, the rank of a team relative to other teams in the league based on the regular-season win-loss record; Winning Percentage of a team based on the regular season; Divisions, the number of divisions within the league; and Playoff Teams, the number of playoff teams relative to a particular season. The following statistical applications are applied to the data: Pearson product-moment correlation, analysis of variance, and factor and regression analysis. Conclusion: The results indicate that the divisional structure and the number of playoff teams have a negative effect on the winning percentage of playoff teams. They also prevent teams with higher winning percentages from accessing the playoffs. Recommendations: 1. Teams whose regular-season winning percentage is more than 1 standard deviation above the mean will have access to the playoffs (eliminates mediocre teams). 2. Eliminate divisions (eliminates weaker teams' access to the playoffs). 3. Eliminate conferences (eliminates weaker teams' access to the playoffs). 4. Adopt a balanced regular-season schedule (reduces the number of regular-season games, creates equilibrium, and reduces bias), which will also reduce the need for load management.
Keywords: alignment, mediocrity, regression, z-score
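Recommendation 1 above is easy to make concrete: admit only teams whose winning percentage exceeds the league mean by more than one standard deviation. A minimal sketch, using invented team records rather than actual NBA data:

```python
# Filter playoff-eligible teams by the one-standard-deviation criterion.
import statistics

def playoff_eligible(win_pcts):
    """Return the teams whose winning percentage exceeds mean + 1 SD.

    win_pcts: dict mapping team name -> regular-season winning percentage.
    Uses the population standard deviation over the whole league.
    """
    mean = statistics.mean(win_pcts.values())
    sd = statistics.pstdev(win_pcts.values())
    cutoff = mean + sd
    return {team: wp for team, wp in win_pcts.items() if wp > cutoff}
```

In a six-team toy league with records 0.70, 0.65, 0.50, 0.45, 0.40, 0.30, the cutoff works out to roughly 0.64, so only the two strongest teams qualify and every sub-.500 team is excluded, which is exactly the intent of the recommendation.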
Procedia PDF Downloads 129
895 Blood Analysis of Diarrheal Calves Using Portable Blood Analyzer: Analysis of Calves by Age
Authors: Kwangman Park, Jinhee Kang, Suhee Kim, Dohyeon Yu, Kyoungseong Choi, Jinho Park
Abstract:
Statement of the Problem: Diarrhea is a major cause of death in young calves and causes great economic damage to the livestock industry. Diarrhea leads to dehydration, decreased blood flow, lowered pH, and degraded enzyme function. In the past, serum screening was not possible in the field; however, with the spread of portable serum testing devices, tests can now be conducted directly in the field. Thus, accurate serological changes can be identified and used in large-animal practice. Methodology and Theoretical Orientation: The test groups were calves from 1 to 44 days old. The state of the feces was divided into four grades to determine the severity of diarrhea (grades 0, 1, 2, 3). Grades 0 and 1 were considered diarrhea-negative; grades 2 and 3 were considered diarrhea-positive, and one or more viruses were detected in this group. The diarrhea-negative group consisted of 57 calves (Asan=30, Samrye=27), and the diarrhea-positive group consisted of 34 calves (Kimje=27, Geochang=7). The feces of all calves were analyzed by PCR. Blood samples were measured using an automatic blood analyzer (i-STAT, Abbott Inc., Illinois, US). Calves were divided into 3 groups according to age: group 1, 1 to 14 days old; group 2, 15 to 28 days old; group 3, more than 28 days old. Findings: Diarrhea caused an increase in HCT due to dehydration. The difference from normal was highest at 15 to 28 days old (p < 0.01). At all ages, bicarbonate decreased compared with normal, and therefore pH decreased. Similar to HCT, the largest difference was observed between 15 and 28 days (p < 0.01). pCO₂ decreases to compensate for the decrease in pH. Conclusion and Significance: At all ages, HCT increases, and bicarbonate, pH, and pCO₂ decrease in diarrheal calves. Calves from 15 to 28 days old show the greatest difference from normal.
Over 28 days of age, as weight gain and homeostatic capacity increase, even when diarrhea is seen in the stool there are fewer hematologic changes than in the groups below 28 days of age.
Keywords: calves, diarrhea, hematological changes, i-STAT
Procedia PDF Downloads 160
894 Alpha Lipoic Acid: An Antioxidant for Infertility
Authors: Chiara Di Tucci, Giulia Galati, Giulia Mattei, Valentina Bonanni, Oriana Capri, Renzo D'Amelio, Ludovico Muzii, Pierluigi Benedetti Panici
Abstract:
Objective: Infertility is an increasingly frequent health condition, which may depend on female or male factors. Oxidative stress (OS), resulting from a disrupted balance between reactive oxygen species (ROS) and protective antioxidants, affects the reproductive lifespan of men and women. In this review, we examine whether alpha lipoic acid (ALA), among the oral supplements currently in use, has an evidence-based beneficial role in the context of female and male infertility. Methods: We searched the English literature in the PubMed database with the following keywords: 'female infertility', 'male infertility', 'semen', 'sperm', 'sub-fertile man', 'alpha-lipoic acid', 'alpha lipoic acid', 'lipoic acid', 'endometriosis', 'chronic pelvic pain', 'follicular fluid', and 'oocytes'. We included clinical trials, multicentric studies, and reviews. The total number of references found after automatically and manually excluding duplicates was 180. After primary and secondary screening, 28 articles were selected. Results: The available literature demonstrates the positive effects of ALA in multiple processes, from oocyte maturation (0.87 ± 0.9% of oocytes in MII vs 0.81 ± 3.9%; p < .05) to fertilization, embryo development (57.7% vs 75.7% grade 1 embryos; p < .05), and reproductive outcomes. Its regular administration in both sub-fertile women and men has been shown to reduce pelvic pain in endometriosis (p < .05), regularize menstrual flow and metabolic disorders (p < .01), and improve sperm quality (p < .001). Conclusions: ALA represents a promising new molecule in the field of couple infertility. More clinical studies are needed in order to support its use in clinical practice.
Keywords: alpha lipoic acid, endometriosis, infertility, male factor, polycystic ovary syndrome
Procedia PDF Downloads 85
893 Characteristics of Pyroclastic and Igneous Rock Mineralogy of Lahat Regency, South Sumatra
Authors: Ridho Widyantama Putra, Endang Wiwik Dyah Hastuti
Abstract:
The study area is located in Lahat Regency, South Sumatra, and is part of a 500 m - 2000 m elevated Barisan Hills (Perbukitan Barisan) zone controlled by the main fault of Sumatra (the Semangko Fault). It is administratively located at S4.08197 - E103.01403 and S4.16786 - E103.07700. The Semangko Fault has produced normal faults trending north-southeast, and the lithology is composed of pyroclastic rock, volcanic rock, and plutonic rock intrusions. On the Manna and Enggano sheets, Quaternary volcanic products are located along the Barisan Hills zone. The petrological types of pyroclastic rocks encountered take the form of welded tuff, lapilli tuff, agglomerate, pyroclastic sandstone, pyroclastic claystone, and lava. Some pyroclastic material contains sulfide minerals (pyrite) and belongs to the flow type of sedimentation, with grain sizes ranging from ash to lapilli. Lapilli tuff covers almost 50% of the total research area; through petrographic observation, minerals were encountered in the form of glass, quartz, plagioclase, and biotite. Lava in this area has been altered, as characterized by the presence of minerals such as chlorite and secondary biotite. This alteration is caused by the structures that develop in the hilly zone and is evidenced by the presence of secondary structures in the form of stocky and normal faults, as well as the primary structure of columnar jointing, from medial facies to distal facies. The division of facies is based on geomorphological observations and the dominant lithological types.
Keywords: lapilli tuff, pyroclastic, mineral, petrography, volcanic, lava
Procedia PDF Downloads 159
892 Quality Analysis of Vegetables Through Image Processing
Authors: Abdul Khalique Baloch, Ali Okatan
Abstract:
The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we have reviewed the literature, identified gaps in it, proposed a better approach, designed the algorithm, and developed software to measure quality from images, where the accuracy of the image analysis shows better results; we compare these results with previous work done so far. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. This research focuses on sorting food and vegetables from images: the application sorts and grades produce after processing the images, creating fewer errors than manual human-based grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This document is about the quality detection of fruits and vegetables using images. Many customers suffer from unhealthy fruits and vegetables supplied to them, and there is often no proper quality measurement followed by hotel managements. We have developed software to measure the quality of fruits and vegetables from images; it reports whether the produce is fresh or rotten. Some algorithms reviewed in this thesis include digital image processing, ResNet, VGG16, CNN, and transfer-learning-based feature extraction for grading.
Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruit quality criteria, vegetable quality criteria
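The fresh/rotten decision the software makes can be illustrated, in a drastically simplified form, by a colour-based heuristic: rotten regions tend to be darker (browned or blackened) than healthy tissue. This sketch is a stand-in for the CNN/transfer-learning models the paper actually reviews, and the threshold values are arbitrary assumptions:

```python
# Toy freshness grader: flag an image as rotten when too many pixels
# are darker than a brightness threshold. A real system would use a
# trained CNN (e.g. ResNet/VGG16 via transfer learning) instead.
import numpy as np

def rot_fraction(rgb_image, brightness_thresh=60):
    """Fraction of pixels darker than the threshold.

    rgb_image: H x W x 3 uint8 array; brightness is the per-pixel
    mean over the three colour channels.
    """
    brightness = rgb_image.mean(axis=2)
    return float((brightness < brightness_thresh).mean())

def grade(rgb_image, max_rot=0.10):
    """Label the image 'fresh' unless the dark fraction exceeds max_rot."""
    return "fresh" if rot_fraction(rgb_image) <= max_rot else "rotten"
```

Such a heuristic fails for naturally dark produce, which is precisely why the learned feature extractors reviewed in the paper outperform hand-tuned rules.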
Procedia PDF Downloads 68
891 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine
Authors: D. Madhushanka, Y. Liu, H. C. Fernando
Abstract:
Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated from the Normalized Burn Ratio (NBR) index. This is performed manually by comparing two images obtained before and after the fire. The dNBR is then calculated as the bitemporal difference of the preprocessed satellite images. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using the classification levels proposed by the USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt area severity mapping using a medium-spatial-resolution sensor (15 m). The tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed; in WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which includes a Graphical User Interface (GUI) to make it user-friendly.
The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest affected by the Australian fire season between 2019 and 2020 is used to describe the workflow of the WWSAT. At this site, more than 7809 km² was detected using Sentinel-2 data, giving an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km² had burnt out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area, since satellite images are free and the cost of field surveys is avoided.
Keywords: burnt area, burnt severity, fires, Google Earth Engine (GEE), Sentinel-2
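The NBR/dNBR arithmetic described above can be sketched as follows. The severity breakpoints shown are the commonly cited USGS dNBR thresholds; WWSAT's exact boundaries and its full seven-class scheme (which also distinguishes post-fire regrowth) may differ:

```python
# NBR, bitemporal dNBR, and a dNBR -> severity class mapping.
# In practice the same arithmetic is applied per pixel to cloud-free
# pre-fire and post-fire image composites.

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance values."""
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Bitemporal difference: pre-fire NBR minus post-fire NBR."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

def severity(d):
    """Map a dNBR value to a burn severity class (assumed USGS breakpoints)."""
    if d < 0.1:
        return "unburnt"
    if d < 0.27:
        return "low severity"
    if d < 0.44:
        return "moderate-low severity"
    if d < 0.66:
        return "moderate-high severity"
    return "high severity"
```

A healthy pixel (high NIR, low SWIR) that becomes charred (low NIR, high SWIR) yields a large positive dNBR and is classed as high severity, while an unchanged pixel yields dNBR near zero and stays unburnt.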
Procedia PDF Downloads 233
890 Rapid Degradation of High-Concentration Methylene Blue in the Combined System of Plasma-Enhanced Photocatalysis Using TiO₂-Carbon
Authors: Teguh Endah Saraswati, Kusumandari Kusumandari, Candra Purnawan, Annisa Dinan Ghaisani, Aufara Mahayum
Abstract:
The present study aims to investigate the degradation of methylene blue (MB) using a TiO₂-carbon (TiO₂-C) photocatalyst combined with dielectric barrier discharge (DBD) plasma. The carbon materials used in the photocatalyst were activated carbon and graphite. The thin layer of TiO₂-C photocatalyst was prepared by the ball milling method and then deposited on a plastic sheet. The characteristics of the TiO₂-C thin layer were analyzed using X-ray diffraction (XRD), scanning electron microscopy (SEM) with energy dispersive X-ray (EDX) spectroscopy, and a UV-Vis diffuse reflectance spectrophotometer. The XRD diffractogram patterns of the TiO₂-G thin layer in various weight compositions of 50:1, 50:3, and 50:5 show 2θ peaks around 25° and 27°, the main characteristics of TiO₂ and carbon, respectively. SEM analysis shows a spherical and regular morphology of the photocatalyst. Analysis using UV-Vis diffuse reflectance shows that TiO₂-C has a narrower band gap energy. The DBD plasma reactor was generated using two electrodes of Cu tape connected with stainless steel mesh and Fe wire, separated by a glass dielectric insulator, supplied at a high voltage of 5 kV with an air flow rate of 1 L/min. The optimization of the weight composition of the TiO₂-C thin layer was studied based on the highest reduction of MB concentration achieved, examined by UV-Vis spectrophotometry. Changes in pH values and the color of MB indicated the success of MB degradation. Moreover, the degradation efficiency of MB was also studied at higher concentrations of 50, 100, 200, and 300 ppm, treated for 0, 2, 4, 6, 8, and 10 min. The degradation efficiency of MB treated in the combined system of photocatalysis and DBD plasma reached more than 99% in 6 min; the greater the concentration of methylene blue dye, the lower the degradation rate achieved.
Keywords: activated carbon, DBD plasma, graphite, methylene blue, photocatalysis
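The reported degradation efficiency follows from the standard removal formula, and an apparent pseudo-first-order rate constant can be derived from the same data. The sketch below uses illustrative numbers consistent with the abstract's headline figure (>99% in 6 min at 50 ppm), not the paper's actual measured kinetics:

```python
# Dye removal efficiency and apparent pseudo-first-order rate constant.
import math

def degradation_efficiency(c0, ct):
    """Percent of dye removed: (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100.0

def first_order_rate(c0, ct, t_min):
    """Apparent rate constant k (1/min), assuming ln(C0/Ct) = k * t."""
    return math.log(c0 / ct) / t_min
```

For example, if a 50 ppm solution falls to 0.5 ppm after 6 min, the efficiency is 99% and the apparent rate constant is ln(100)/6 ≈ 0.77 min⁻¹; at higher initial concentrations the same treatment removes a smaller fraction, matching the trend the abstract reports.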
Procedia PDF Downloads 122
889 Collapsed World Heritage Site: Supply Chain Effect: Case Study of Monument in Kathmandu Valley after the Devastating Earthquake in Nepal
Authors: Rajaram Mahat, Roshan Khadka
Abstract:
Nepal has remained a land of diverse people and cultures, consisting of more than a hundred ethnic and caste groups with 92 different languages, each ethnic and caste group having its own culture. Kathmandu, the capital city of Nepal, is one of the multi-ethnic, multilingual, and multicultural ancient places. Dozens of monuments with histories of more than a thousand years are located in the Kathmandu Valley, and more or less all of the heritage sites were affected by the devastating earthquakes of April and May 2015. This study shows that the most popular tourist and pilgrim destinations, such as Kathmandu Darbar Square, Bhaktapur Darbar Square, Patan Darbar Square, the Swayambhunath temple complex, Dharahara Tower, and the Pasupatinath Hindu religious complex, have been massively destroyed. This paper analyses the socio-economic consequences for the community people of the world heritage sites after the devastating earthquake in the Kathmandu Valley. Initial findings indicate that current domestic and international tourist flows have decreased by 41%; on average, 23% of local craft shops, curio shops, hotels, restaurants, grocery stores, and footpath shops, including tourist-guide employment, have been closed down; and travel and tour business has decreased by 12%. The supply chain effect is noticeably visible in the particular collapsed world heritage sites, and a negative impact on the national economy has also been observed. This study recommends that the government of Nepal and other donors reconstruct the collapsed world heritage sites and preserve the other existing world heritage sites with earthquake-resistant structural treatment as soon as possible.
Keywords: world heritage, community, earthquake, supply chain effect
Procedia PDF Downloads 254
888 Temporal Changes of Heterogeneous Subpopulations of Human Adipose-Derived Stromal/Stem Cells in vitro
Authors: Qiuyue Peng, Vladimir Zachar
Abstract:
The application of adipose-derived stromal/stem cells (ASCs) in regenerative medicine is gaining more awareness due to their advanced translational potential and abundant source preparations. However, ASC-based translation has been confounded by high subpopulation heterogeneity, causing ambiguity about their precise therapeutic value. Some phenotypes, defined by unique combinations of positive and negative surface markers, have been found beneficial to the required roles. Therefore, the immunophenotypic repertoires of cultured ASCs and the temporal changes of distinct subsets were investigated in this study. ASCs from three donors undergoing cosmetic liposuction were cultured using standard methods, and the co-expression patterns based on combinations of selected markers at passages 1, 4, and 8 were analyzed by multi-chromatic flow cytometry. The results showed that the level of heterogeneity of ASC subpopulations became lower with in vitro expansion. After a few passages, most of the CD166⁺/CD274⁺/CD271⁺-based subpopulations converged to CD166 single-positive cells. Meanwhile, the CD29⁺CD201⁺ double-positive cells, with or without CD36/Stro-1 co-expression, featured only the major epitopes and remained prevalent throughout the whole process. This study suggested that, upon in vitro expansion, the phenotype repertoire of ASCs redistributes and stabilizes in a way that cells co-expressing exclusively the strong markers remain dominant. These preliminary findings provide a general overview of the distribution of heterogeneous subsets residing within human ASCs during expansion in vitro. This is a critical step toward fully characterizing ASCs before clinical application, although the biological effects of the heterogeneous subpopulations still need to be clarified.
Keywords: adipose-derived stromal/stem cells, heterogeneity, immunophenotype, subpopulations
Procedia PDF Downloads 108
887 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics
Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo
Abstract:
The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical technique that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, to use it as a complement to physical model testing capabilities, and to support the research need for performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved modeling a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study results in a thorough comparison between the model-scale measurements and the prediction outcomes from the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were primarily within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depicted realistic interactions of the wave, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing
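At the heart of any SPH solver is kernel-weighted interpolation over neighbouring particles. The sketch below shows the widely used Monaghan cubic spline kernel and a field estimate in one dimension; it illustrates the general SPH idea only, not the internals of the specific open-source tool validated in this study:

```python
# One-dimensional SPH: cubic spline kernel and field interpolation.
import math

def cubic_spline_kernel(r, h):
    """Monaghan cubic spline smoothing kernel in 1D (support radius 2h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization so the kernel integrates to 1
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_interpolate(x, particles, h):
    """Estimate a field at x: A(x) ~= sum_j (m_j / rho_j) A_j W(|x - x_j|, h).

    particles: iterable of (position, mass, density, field_value) tuples.
    """
    return sum(m / rho * a * cubic_spline_kernel(abs(x - xj), h)
               for xj, m, rho, a in particles)
```

Because the same summation gives densities, pressure gradients, and forces once the kernel is differentiated, the accuracy of an SPH prediction hinges on the particle spacing relative to the smoothing length h, which is one reason the computational requirements noted above grow quickly for 6-DOF ship-and-ice simulations.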
Procedia PDF Downloads 130
886 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example
Authors: Hongyun Li, Zhibin Jiang
Abstract:
The travel of passengers in an urban rail transit network is influenced by changes in network structure and operational status, and individual travel preferences respond to these changes in different ways. Firstly, the influence of the suspension of an urban rail transit line section on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences are constructed to achieve frequent passenger clustering. Then, a Graph Convolutional Network (GCN) is used to model and identify the changes in the travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results showed that after the section shutdown, most passengers transferred to the nearest station, Anting, for boarding, while some passengers transferred to other stations or cancelled their travel entirely. Among the passengers who transferred to Anting station, most maintained their original normalized travel mode, a small number waited a few days before transferring to Anting station, and only a few stopped traveling at Anting station or transferred to other stations after a few days of boarding at Anting station. The results can provide a basis for understanding urban rail transit passenger travel patterns and improving the accuracy of passenger flow prediction in abnormal operation scenarios.
Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern
Procedia PDF Downloads 84
885 A Review on Applications of Evolutionary Algorithms to Reservoir Operation for Hydropower Production
Authors: Nkechi Neboh, Josiah Adeyemo, Abimbola Enitan, Oludayo Olugbara
Abstract:
Evolutionary algorithms are techniques extensively used in the planning and management of water resources and systems. It is useful in finding optimal solutions to water resources problems considering the complexities involved in the analysis. River basin management is an essential area that involves the management of upstream, river inflow and outflow including downstream aspects of a reservoir. Water as a scarce resource is needed by human and the environment for survival and its management involve a lot of complexities. Management of this scarce resource is necessary for proper distribution to competing users in a river basin. This presents a lot of complexities involving many constraints and conflicting objectives. Evolutionary algorithms are very useful in solving this kind of complex problems with ease. Evolutionary algorithms are easy to use, fast and robust with many other advantages. Many applications of evolutionary algorithms, which are population based search algorithm, are discussed. Different methodologies involved in the modeling and simulation of water management problems in river basins are explained. It was found from this work that different evolutionary algorithms are suitable for different problems. Therefore, appropriate algorithms are suggested for different methodologies and applications based on results of previous studies reviewed. It is concluded that evolutionary algorithms, with wide applications in water resources management, are viable and easy algorithms for most of the applications. The results suggested that evolutionary algorithms, applied in the right application areas, can suggest superior solutions for river basin management especially in reservoir operations, irrigation planning and management, stream flow forecasting and real-time applications. The future directions in this work are suggested. 
This study will assist decision makers and stakeholders in selecting the most suitable evolutionary algorithm for varied optimization problems in water resources management.
Keywords: evolutionary algorithm, multi-objective, reservoir operation, river basin management
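The reservoir-operation problems surveyed above are typically posed as penalized optimization tasks solved by a population-based search. As a minimal sketch of the idea, the following uses a simple genetic algorithm (one member of the evolutionary-algorithm family discussed in the abstract) to choose monthly releases that maximize a hydropower proxy (release times storage, standing in for release times head) subject to storage bounds. All inflows, bounds and penalty weights are invented placeholders, not data from any study reviewed.

```python
import random

random.seed(42)

# Hypothetical reservoir data: 12 monthly inflows and storage bounds (volume units).
INFLOW = [80, 95, 120, 150, 140, 110, 70, 50, 40, 45, 60, 75]
S_MIN, S_MAX, S0 = 100.0, 500.0, 300.0   # dead storage, capacity, initial storage
R_MAX = 160.0                             # maximum monthly release

def simulate(releases):
    """Mass-balance simulation; returns (energy proxy, constraint penalty)."""
    storage, energy, penalty = S0, 0.0, 0.0
    for inflow, release in zip(INFLOW, releases):
        storage += inflow - release
        if storage < S_MIN:               # quadratic penalty for bound violations
            penalty += (S_MIN - storage) ** 2
            storage = S_MIN
        elif storage > S_MAX:
            penalty += (storage - S_MAX) ** 2
            storage = S_MAX
        energy += release * storage       # proxy: head assumed proportional to storage
    return energy, penalty

def fitness(releases):
    energy, penalty = simulate(releases)
    return energy - 1000.0 * penalty      # penalized objective to maximize

def evolve(pop_size=40, generations=200, mutation=0.2):
    pop = [[random.uniform(0, R_MAX) for _ in INFLOW] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(INFLOW))  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):             # Gaussian mutation, clipped
                if random.random() < mutation:
                    child[i] = min(R_MAX, max(0.0, child[i] + random.gauss(0, 10)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(r, 1) for r in best])
```

Multi-objective variants (e.g. trading hydropower against irrigation supply) replace the scalar fitness with Pareto-based selection, but the population-crossover-mutation loop is the same.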
Procedia PDF Downloads 490
884 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often present at very low concentrations. It is therefore necessary to pre-concentrate colloids in order to obtain enough material for analysis while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but as yet there is neither a reference method nor an estimation of the impact of these techniques on colloid structure, nor of the bias introduced by the separation method. In the present work, we tested and compared several methods of colloidal phase extraction/pre-concentration, and their impact on colloid properties, particularly size distribution and elemental composition. Ultrafiltration methods (frontal, tangential and centrifugal) were considered, since they are widely used for the extraction of colloids from natural waters. To compare these methods, a ‘synthetic groundwater’ was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Total Organic Carbon analysis (TOC)) were chosen as comparison factors. In this way, the impact of pre-concentration on the preservation of the colloidal phase can be estimated. It appears that some of these methods preserve the composition of the colloidal phase more efficiently, while others are easier and faster to use.
The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. Looking ahead, the use of these methods should improve the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
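Comparing ultrafiltration modes on the same reference water comes down to simple mass-balance metrics: how much the sample volume is reduced, and what fraction of each element's initial mass survives into the retentate. The sketch below computes these two figures for invented placeholder concentrations (none of the numbers come from the study; units cancel in the recovery ratio).

```python
def preconcentration_factor(v_initial_ml, v_retentate_ml):
    """Volumetric pre-concentration factor of an ultrafiltration step."""
    return v_initial_ml / v_retentate_ml

def recovery_percent(c_initial, v_initial_ml, c_retentate, v_retentate_ml):
    """Percentage of an element's initial mass retained in the concentrate."""
    mass_in = c_initial * v_initial_ml      # concentration x volume; units cancel below
    mass_out = c_retentate * v_retentate_ml
    return 100.0 * mass_out / mass_in

# Hypothetical comparison: three ultrafiltration modes on the same synthetic water,
# each concentrating 1 L down to 50 mL (c0/cr: element concentrations before/after).
methods = {
    "frontal":     {"c0": 5.0, "v0": 1000.0, "cr": 95.0, "vr": 50.0},
    "tangential":  {"c0": 5.0, "v0": 1000.0, "cr": 88.0, "vr": 50.0},
    "centrifugal": {"c0": 5.0, "v0": 1000.0, "cr": 70.0, "vr": 50.0},
}
for name, m in methods.items():
    pf = preconcentration_factor(m["v0"], m["vr"])
    rec = recovery_percent(m["c0"], m["v0"], m["cr"], m["vr"])
    print(f"{name}: x{pf:.0f} pre-concentration, {rec:.0f}% recovery")
```

A recovery well below 100% at a given pre-concentration factor signals losses (e.g. membrane sorption) or alteration of the colloidal phase, which is exactly the kind of bias the FFF and ICP-MS/TOC comparison in the abstract is designed to reveal.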
Procedia PDF Downloads 214
883 Online Language Tandem: Focusing on Intercultural Communication Competence and Non-Verbal Cues
Authors: Amira Benabdelkader
Abstract:
Communication is the channel by which humans create and maintain relationships with others, express themselves, exchange information, learn and teach, etc. The context of communication plays a distinctive role in determining the language to be used. The term context mainly refers to the interlocutors, their cultures, languages and relationship, the physical surroundings (i.e., the communication setting), the type of information to be transmitted, the topic, etc. Cultures, on the one hand, impose on people certain behaviours, attitudes, gestures and beliefs. On the other hand, the focus on language is inevitable, as language, with its verbal and non-verbal components, is a key tool in and for communication. Moreover, each language has its own particularities in how people voice, address and express their thoughts, feelings and beliefs. Being in the same setting with people from different cultures and languages, and having conversations with them, calls upon intercultural communicative competence. This competence promotes the success of such conversations. Additionally, it can manifest in several ways during interaction, to the extent that no one can predict when and how the interlocutors will use it. The only thing that can probably be confirmed is that the setting and culture will, in one way or another, intervene and often shape the flow of the communication, if not the whole of it. Therefore, this paper looks at the intercultural communicative competence of language learners as they introduce their cultures to each other in an online language tandem (henceforth OLT), using their second and/or foreign language with L1 speakers of that language. The participants of this study are Algerian (L2: French, FL: English) and British (L1: English, L2/FL: French).
In other words, this paper provides a qualitative analysis of the OLT experiment, emphasising how language learners can overcome cultural differences in an intercultural setting while communicating online via Skype (video conversations) with people from different countries, cultures and L1s. Non-verbal cues receive the lion's share of the analysis, with a focus on how they are used to maintain this intercultural communication or to hinder it through the misinterpretation of gestures, head movements, grimaces, etc.
Keywords: intercultural communicative competence, non-verbal cues, online language tandem, Skype
Procedia PDF Downloads 280