Search results for: numerical range
1624 Boussinesq Model for Dam-Break Flow Analysis
Authors: Najibullah M, Soumendra Nath Kuiry
Abstract:
Dams and reservoirs are valued for their substantial contributions to irrigation, water supply, flood control, electricity generation, etc., which improve the prosperity and wealth of societies across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and properties. Failures of large dams remain, fortunately, very rare events. Nevertheless, a number of occurrences have been recorded in the world, corresponding on average to one to two failures worldwide every year. Some of those accidents have caused catastrophic consequences, so it is crucial to predict the dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study mainly deals with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the Shallow Water Equations (SWEs), the BEs take into account wave dispersion and the non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model, fourth-order accurate in both space and time, for dam-break flow analysis using a combined finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations.
The flux term was solved using a finite-volume discretization, whereas the bed source and dispersion terms were discretized using a centered finite-difference scheme. Time integration was achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. The 1D Boussinesq model was implemented in Python 2.7.5. The performance of the developed model was evaluated by comparison with the volume of fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 that took place in Pune, India. Moreover, this model can predict wave overtopping more accurately than shallow water models, which is useful for designing coastal protection structures.
Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model
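The two-stage time integration named above (third-order Adams-Bashforth predictor, fourth-order Adams-Moulton corrector) can be sketched for a scalar ODE. This is not the authors' Boussinesq solver; the test equation y' = -y stands in for the discretized flux/dispersion terms, and the first two steps are bootstrapped with RK4, an assumed choice since the abstract does not state the start-up scheme.

```python
import math

def f(t, y):
    return -y  # stand-in right-hand side: y' = -y

def rk4_step(t, y, h):
    # classical Runge-Kutta step, used only to bootstrap the multistep scheme
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2)
    k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

def ab3_am4(y0, t0, t1, n):
    h = (t1 - t0) / n
    ts = [t0 + i*h for i in range(n + 1)]
    ys = [y0]
    for i in range(2):                       # two RK4 start-up steps
        ys.append(rk4_step(ts[i], ys[i], h))
    fs = [f(ts[i], ys[i]) for i in range(3)]
    for i in range(2, n):
        # Adams-Bashforth 3 predictor
        yp = ys[i] + h/12 * (23*fs[i] - 16*fs[i-1] + 5*fs[i-2])
        fp = f(ts[i+1], yp)
        # Adams-Moulton 4 corrector (evaluated with the predicted slope)
        yc = ys[i] + h/24 * (9*fp + 19*fs[i] - 5*fs[i-1] + fs[i-2])
        ys.append(yc)
        fs.append(f(ts[i+1], yc))
    return ys[-1]
```

With 100 steps over [0, 1] the result agrees with exp(-1) to roughly fourth-order accuracy.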
Procedia PDF Downloads 232
1623 Interference of Mild Drought Stress on Estimation of Nitrogen Status in Winter Wheat by Some Vegetation Indices
Authors: H. Tavakoli, S. S. Mohtasebi, R. Alimardani, R. Gebbers
Abstract:
Nitrogen (N) is one of the most important agricultural inputs affecting crop growth, yield, and quality in rain-fed cereal production. The N demand of crops varies spatially across fields due to spatial differences in soil conditions. In addition, the response of a crop to fertilizer applications is heavily reliant on plant-available water. Matching the N supply to water availability is thus essential to achieve an optimal crop response. The objective of this study was to determine the effect of drought stress on the estimation of the nitrogen status of winter wheat by some vegetation indices. During the 2012 growing season, a field experiment was conducted at the Bundessortenamt (German Plant Variety Office) Marquardt experimental station, which is located in the village of Marquardt about 5 km northwest of Potsdam, Germany (52°27' N, 12°57' E). The experiment was designed as a randomized split block design with two replications. Treatments consisted of four N fertilization rates (0, 60, 120 and 240 kg N ha-1, in total) and two water regimes (irrigated (Irr) and non-irrigated (NIrr)), in a total of 16 plots with dimensions of 4.5 × 9.0 m. The indices were calculated using readings of a spectroradiometer made of tec5 components. The main parts were two "Zeiss MMS1 nir enh" diode-array sensors with a nominal range of 300 to 1150 nm, less than 10 nm resolution, and an effective range of 400 to 1000 nm. The following vegetation indices were calculated: NDVI, GNDVI, SR, MSR, NDRE, RDVI, REIP, SAVI, OSAVI, MSAVI, and PRI. All the experiments were conducted during the growing season in different plant growth stages, including stem elongation (BBCH 32-41), booting (BBCH 43), inflorescence emergence and heading (BBCH 56-58), flowering (BBCH 65-69), and development of fruit (BBCH 71). According to the results obtained, among the indices, NDRE and REIP were less affected by drought stress and can provide reliable wheat nitrogen status information, regardless of the water status of the plant.
They also showed strong relations with the nitrogen status of winter wheat.
Keywords: nitrogen status, drought stress, vegetation indices, precision agriculture
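The indices named above follow standard published definitions; a minimal sketch of three of them is shown below. These are the textbook formulas, not tied to the authors' spectroradiometer processing, and the REIP formulation (the Guyot-Baret linear interpolation) is an assumption, since the abstract does not state which variant was used.

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    # Normalized Difference Red Edge index, less prone to saturation than NDVI
    return (nir - red_edge) / (nir + red_edge)

def reip(r670, r700, r740, r780):
    # Red Edge Inflection Point (nm) via linear interpolation (Guyot-Baret form)
    return 700 + 40 * ((r670 + r780) / 2 - r700) / (r740 - r700)
```

Inputs are reflectances in the respective bands; NDVI and NDRE are bounded in [-1, 1].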
Procedia PDF Downloads 319
1622 Evaluation of Microbiological Quality and Safety of Two Types of Salads Prepared at Libyan Airline Catering Center in Tripoli
Authors: Elham A. Kwildi, Yahia S. Abugnah, Nuri S. Madi
Abstract:
This study was designed to evaluate the microbiological quality and safety of two types of salads prepared at a catering center affiliated with Libyan Airlines in Tripoli, Libya. Two hundred and twenty-one (221) samples (132 economy-class and 89 first-class) were used in this project, which lasted for ten months. Biweekly microbiological tests were performed, which included total plate count (TPC) and total coliforms (TCF), in addition to the enumeration and/or detection of some pathogenic bacteria, mainly Escherichia coli, Staphylococcus aureus, Bacillus cereus, Salmonella sp., Listeria sp., and Vibrio parahaemolyticus, using conventional as well as compact dry methods. Results indicated that the TPC of type 1 salad ranged between (<10 to 62 × 10³ cfu/g) and (<10 to 36 × 10³ cfu/g), while the TCF were (<10 to 41 × 10³ cfu/g) and (<10 to 66 × 10² cfu/g), using both methods of detection respectively. On the other hand, the TPC of type 2 salad were in the ranges of (1 × 10 to 52 × 10³ cfu/g) and (1 × 10 to 45 × 10³ cfu/g), and the TCF counts were between (<10 to 55 × 10³ cfu/g) and (<10 to 34 × 10³ cfu/g), using the first and the second methods of detection respectively. Also, the pathogens mentioned above were detected in both types of salads, but their levels varied according to the type of salad and the method of detection. The level of Staphylococcus aureus, for instance, was 17.4% using the conventional method versus 14.4% using the compact dry method. Similarly, E. coli was 7.6% and 9.8%, while Salmonella sp. recorded the lowest percentage, i.e., 3% and 3.8%, with the two mentioned methods respectively. First-class salads were also found to contain the same pathogens, but the level of E. coli was relatively higher in this case (14.6% and 16.9%) using conventional and compact dry methods respectively. Staphylococcus aureus ranked second (13.5% and 11.2%), followed by Salmonella (6.74% and 6.70%).
The lowest percentage was for Vibrio parahaemolyticus (4.9%), which was detected in the first-class salads only. The other two pathogens, Bacillus cereus and Listeria sp., were not detected in either of the salads. Finally, it is worth mentioning that there was a significant decline in TPC and TCF counts, in addition to the disappearance of pathogenic bacteria, after the 6th-7th month of the study, which coincided with the first trial of the HACCP system at the center. The ups and downs in the counts during the early stages of the study reveal a need for important corrective measures, including more emphasis on training the personnel to apply the HACCP system effectively.
Keywords: air travel, vegetable salads, foodborne outbreaks, Libya
Procedia PDF Downloads 326
1621 A Framework Based Blockchain for the Development of a Social Economy Platform
Authors: Hasna Elalaoui Elabdallaoui, Abdelaziz Elfazziki, Mohamed Sadgal
Abstract:
Outlines: The social economy is a moral approach to solidarity applied to the development of projects. To reconcile economic activity and social equity, crowdfunding is an alternative means of financing social projects. Several collaborative blockchain platforms exist. Blockchain eliminates the need for a central authority or an unscrupulous middleman. Also, the costs of a successful crowdfunding campaign are reduced, since there is no commission to be paid to an intermediary. It improves the transparency of record keeping and removes the delegation of authority to central bodies that may be prone to corruption. Objectives: The objectives are: to define a software infrastructure for the participatory financing of projects within a social and solidarity economy, allowing transparent, secure, and fair management, and to provide a financial mechanism that improves financial inclusion. Methodology: The proposed methodology is: a literature review of crowdfunding platforms, a literature review of financing mechanisms, requirements analysis and project definition, a business plan, the platform development process and implementation technology, and testing of an MVP. Contributions: The solution consists of proposing a new approach to crowdfunding based on the principle of Musharaka, inspired by Islamic financing, which presents a financial innovation that integrates ethics and the social dimension into contemporary banking practices. Conclusion: Crowdfunding platforms need to secure projects and admit only quality projects, but also to offer a wide range of options to funders. Thus, a framework based on blockchain technology and Islamic financing is proposed to manage this arbitration between the quality and quantity of options. The proposed financing system, "Musharaka", is a mode of financing that prohibits interest and uncertainty.
The implementation is offered on the secure Ethereum platform: investors sign and initiate transactions for contributions using their digital signature wallets, managed by a cryptographic algorithm and smart contracts. Our proposal is illustrated by a crop irrigation project in the Marrakech region.
Keywords: social economy, Musharaka, blockchain, smart contract, crowdfunding
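The Musharaka rule the framework builds on can be sketched as a small distribution function. This is a hypothetical illustration, not code from the proposed platform or its smart contracts, and it assumes the common formulation in which losses are borne strictly in proportion to capital while profit may follow an agreed ratio.

```python
def musharaka_distribution(contributions, result, profit_ratios=None):
    """Split a venture's result among partners under Musharaka.
    contributions: dict partner -> capital contributed
    result: positive profit or negative loss
    profit_ratios: optional agreed profit shares (summing to 1);
    losses are always shared pro-rata to capital."""
    total = sum(contributions.values())
    capital_share = {p: c / total for p, c in contributions.items()}
    if result < 0 or profit_ratios is None:
        ratios = capital_share          # losses (and default profits): pro-rata
    else:
        ratios = profit_ratios          # profits: agreed ratio
    return {p: result * ratios[p] for p in contributions}
```

For example, with contributions of 60 and 40 units and a profit of 10, the pro-rata split returns 6 and 4.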
Procedia PDF Downloads 77
1620 Performance Improvement of a Single-Flash Geothermal Power Plant Design in Iran: Combining with Gas Turbines and CHP Systems
Authors: Morteza Sharifhasan, Davoud Hosseini, Mohammad. R. Salimpour
Abstract:
Geothermal energy has been considered a worldwide important renewable energy in recent years due to rising environmental pollution concerns. Low- and medium-grade geothermal heat (< 200 ºC) is commonly employed for space heating and in domestic hot water supply. However, there is also much interest in converting the abundant low- and medium-grade geothermal heat into electrical power. The Iranian Ministry of Power - through the Iran Renewable Energy Organization (SUNA) - is going to build the first Geothermal Power Plant (GPP) in Iran in the Sabalan area in the northwest of Iran. This project is a 5.5 MWe single-flash steam condensing power plant. The efficiency of GPPs is low due to the relatively low pressure and temperature of the saturated steam. In addition to GPPs, Gas Turbines (GTs) are also known for their relatively low efficiency. The Iranian Ministry of Power is trying to increase the efficiency of these GTs by adding bottoming steam cycles to the GTs to form what is known as a combined gas/steam cycle. One of the most effective methods for increasing efficiency is combined heat and power (CHP). This paper investigates the feasibility of superheating the saturated steam that enters the steam turbine of the Sabalan GPP (SGPP-1) to improve the energy efficiency and power output of the GPP. This purpose is achieved by combining the GPP with two 3.5 MWe GTs. In this method, the hot gases leaving the GTs are utilized through a superheater similar to that used in the heat recovery steam generator of a combined gas/steam cycle. Moreover, the brine separated in the separator and the hot gases leaving the GTs and the superheater are used for the supply of domestic hot water (in this paper, the cycle combining the GTs and CHP systems is named the modified SGPP-1). In this research, based on the heat balance presented in the basic design documents of the SGPP-1, a mathematical/numerical model of the power plant is developed together with the mentioned GTs and CHP systems.
Based on the required hot water, the amount of hot gas that passes directly through the CHP section can be adjusted. For example, during summer, when less hot water is required, the hot gases leaving both GTs pass through the superheater and the CHP systems respectively. On the contrary, in order to supply the required hot water during the winter, the hot gases of one of the GTs enter the CHP section directly, without passing through the superheater section. The results show an increase in thermal efficiency of up to 40% with the modified SGPP-1. Since the gross efficiency of SGPP-1 is 9.6%, the achieved increase in thermal efficiency is significant. The power output of SGPP-1 is increased by up to 40% in summer (from 5.5 MW to 7.7 MW), while the GTs' power output remains almost unchanged. Meanwhile, the combined-cycle power output increases from the 12.5 MW [5.5 + (2 × 3.5)] of the two separate plants to 14.7 MW [7.7 + (2 × 3.5)]. This output is more than 17% above the output of the two separate plants. The modified SGPP-1 is capable of producing 215 t/h of hot water (90 ºC) for domestic use in the winter months.
Keywords: combined cycle, CHP, efficiency, gas turbine, geothermal power plant, power output
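The power figures quoted above can be checked with simple arithmetic; the sketch below only restates the plant outputs given in the abstract.

```python
gpp_alone = 5.5          # MW, single-flash GPP output on its own
gt_each = 3.5            # MW, each of the two gas turbines
separate_total = gpp_alone + 2 * gt_each       # the two separate plants

gpp_modified = 7.7       # MW, GPP output with superheated steam (summer)
combined_total = gpp_modified + 2 * gt_each    # modified SGPP-1

gain = (combined_total - separate_total) / separate_total
print(separate_total, combined_total, gain)
```

The relative gain works out to 2.2/12.5, i.e. just over 17%, matching the abstract.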
Procedia PDF Downloads 322
1619 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT.
A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to achieve high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
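At its core, speckle tracking locates a reference speckle patch inside a distorted sample image. A minimal 1D normalized cross-correlation search is sketched below; this is an illustration of the principle, not the project's actual algorithm, and numpy is an assumed dependency.

```python
import numpy as np

def find_shift(reference, window):
    """Return the offset at which `window` best matches `reference`,
    using zero-mean normalized cross-correlation."""
    n = len(window)
    w = window - window.mean()
    best_score, best_offset = -np.inf, 0
    for off in range(len(reference) - n + 1):
        seg = reference[off:off + n]
        s = seg - seg.mean()
        denom = np.linalg.norm(s) * np.linalg.norm(w)
        score = float(s @ w) / denom if denom > 0 else 0.0
        if score > best_score:
            best_score, best_offset = score, off
    return best_offset
```

In 2D speckle imaging the same search runs per local window; the resulting displacement map encodes the differential phase, and its precision sets the system resolution as noted above.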
Procedia PDF Downloads 258
1618 Data Analysis for Taxonomy Prediction and Annotation of 16S rRNA Gene Sequences from Metagenome Data
Authors: Suchithra V., Shreedhanya, Kavya Menon, Vidya Niranjan
Abstract:
Skin metagenomics has a wide range of applications with direct relevance to the health of the organism. It gives us insight into the diverse community of microorganisms (the microbiome) harbored on the skin. In recent years, it has become increasingly apparent that the interaction between the skin microbiome and the human body plays a prominent role in immune system development, cancer development, disease pathology, and many other biological implications. Next Generation Sequencing has led to a faster and better understanding of environmental organisms and their mutual interactions. This project studies the human skin microbiome of different individuals with varied skin conditions. Bacterial 16S rRNA data of the skin microbiome were downloaded via the SRA toolkit provided by NCBI to perform the metagenomics analysis. Twelve samples were selected with two controls and three different categories, i.e., sex (male/female), skin type (moist/intermittently moist/sebaceous), and occlusion (occluded/intermittently occluded/exposed). The quality of the data was improved using Cutadapt, and its analysis was done using FastQC. USEARCH, a tool used to analyze NGS data, provides a suitable platform to obtain the taxonomy classification and abundance of bacteria from the metagenome data. The statistical tool used for analyzing the USEARCH results was METAGENassist. The results revealed that the top three most abundant organisms were Prevotella, Corynebacterium, and Anaerococcus. Prevotella is known to be an infectious bacterium found in wounds, tooth cavities, etc. Corynebacterium and Anaerococcus are opportunistic bacteria responsible for skin odor. This result infers that Prevotella thrives easily in sebaceous skin conditions. Therefore, it is better to undergo intermittently occluded treatment, such as applying ointments or creams, to treat wounds for the sebaceous skin type. Exposing the wound should be avoided, as this leads to an increase in Prevotella abundance.
Individuals with moist skin type can opt for occluded or intermittently occluded treatment, as these have been shown to decrease the abundance of bacteria during treatment.
Keywords: bacterial 16S rRNA, next generation sequencing, skin metagenomics, skin microbiome, taxonomy
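The abundance ranking reported above reduces to normalizing taxon counts. A small sketch follows, with hypothetical read counts for illustration only; the real counts come from the USEARCH/METAGENassist output, which the abstract does not list.

```python
def relative_abundance(counts):
    """counts: dict genus -> read count; returns genus -> fraction,
    sorted by decreasing abundance."""
    total = sum(counts.values())
    fractions = {g: c / total for g, c in counts.items()}
    return dict(sorted(fractions.items(), key=lambda kv: -kv[1]))

# hypothetical read counts, for illustration only
sample = {"Prevotella": 420, "Corynebacterium": 310,
          "Anaerococcus": 150, "Staphylococcus": 120}
ranked = relative_abundance(sample)
```

With these made-up counts, Prevotella comes out on top, mirroring the ordering the study reports.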
Procedia PDF Downloads 172
1617 Application and Utility of the RALE Score for Assessment of Clinical Severity in COVID-19 Patients
Authors: Naridchaya Aberdour, Joanna Kao, Anne Miller, Timothy Shore, Richard Maher, Zhixin Liu
Abstract:
Background: COVID-19 has been and continues to be a strain on healthcare globally, with the number of patients requiring hospitalization exceeding the level of medical support available in many countries. As chest X-rays are the primary respiratory radiological investigation, the Radiographic Assessment of Lung Edema (RALE) score was used to quantify the extent of pulmonary infection on baseline imaging. The reproducibility of the RALE score and its associations with clinical outcome parameters were then evaluated to determine the implications for patient management and prognosis. Methods: A retrospective study was performed including patients who tested positive for COVID-19 on nasopharyngeal swab within a single Local Health District in Sydney, Australia, with baseline X-ray imaging acquired between January and June 2020. Two independent radiologists viewed the studies and calculated the RALE scores. Clinical outcome parameters were collected, and statistical analysis was performed to assess the reproducibility of the RALE score and possible associations with clinical outcomes. Results: A total of 78 patients met the inclusion criteria, with an age range of 4 to 91 years. RALE score concordance between the two independent radiologists was excellent (intraclass correlation coefficient = 0.93, 95% CI = 0.88-0.95, p<0.005). Binomial logistic regression identified a positive correlation with hospital admission (OR 1.87, 95% CI = 1.3-2.6, p<0.005), oxygen requirement (OR 1.48, 95% CI = 1.2-1.8, p<0.005), and invasive ventilation (OR 1.2, 95% CI = 1.0-1.3, p<0.005) for each 1-point increase in RALE score. For each one-year increase in age, there was a negative correlation with recovery (OR 0.95, 95% CI = 0.92-1.0, p<0.01). RALE scores above three were positively associated with hospitalization (Youden index 0.61, sensitivity 0.73, specificity 0.89), and scores above six were positively associated with ICU admission (Youden index 0.67, sensitivity 0.91, specificity 0.78).
Conclusion: The RALE score can be used as a surrogate to quantify the extent of COVID-19 infection and has excellent inter-observer agreement. The RALE score could be used to prognosticate and to identify patients at high risk of deterioration. Threshold values may also be applied to predict the likelihood of hospital and ICU admission.
Keywords: chest radiography, coronavirus, COVID-19, RALE score
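An odds ratio of 1.87 per RALE point translates into predicted admission probabilities through a logistic model. The sketch below uses a hypothetical intercept (BETA0), since the abstract reports only the odds ratios; it illustrates the interpretation, not the study's fitted model.

```python
import math

OR_PER_POINT = 1.87   # from the reported regression
BETA0 = -2.0          # hypothetical intercept, NOT from the study

def admission_probability(rale_score):
    # logistic model: log-odds rise by log(OR) per RALE point
    logit = BETA0 + math.log(OR_PER_POINT) * rale_score
    return 1.0 / (1.0 + math.exp(-logit))
```

Whatever the intercept, each extra point multiplies the admission odds by 1.87, which is exactly what the reported OR means.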
Procedia PDF Downloads 178
1616 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
The wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., WSA section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology is made by integrating wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake through modelling of the entire lake (capturing the real lake geometry) instead of a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence of the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
Keywords: wave modelling, wind-wave, extreme value analysis, marina
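For contrast, the "simplified fetch approach" that the TT framework improves upon is typically an empirical fetch-limited relation. A sketch of a deep-water SMB-type formula is shown below; the coefficients are as commonly quoted in textbooks and should be treated as assumptions here, not as part of the TT approach.

```python
import math

def fetch_limited_hs(wind_speed, fetch, g=9.81):
    """Approximate significant wave height (m) for deep water,
    empirical SMB-type relation. wind_speed: m/s, fetch: m."""
    f_star = g * fetch / wind_speed**2                     # dimensionless fetch
    return 0.283 * wind_speed**2 / g * math.tanh(0.0125 * f_star**0.42)
```

Note what such a formula cannot capture, per the list above: real lake geometry, random seas, wave-structure interaction, and lakebed effects.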
Procedia PDF Downloads 84
1615 Still Hepatocellular Carcinoma Risk Despite Proper Treatment of Chronic Viral Hepatitis
Authors: Sila Akhan, Muge Toygar, Murat Sayan, Simge Fidan
Abstract:
Chronic viral hepatitis B, C, and D can cause hepatocellular carcinoma (HCC), cirrhosis, and death. Proper treatment reduces the risk of development of HCC considerably, but not to zero. Materials and Methods: We retrospectively analysed the chronic viral hepatitis B, C, and D patients who attended our Infectious Diseases polyclinic between 2004 and 2018. Of 589 biopsy-proven chronic hepatitis patients, 3 developed hepatocellular carcinoma during our follow-up. The first case is a 74-year-old patient. His HCV infection was diagnosed 8 years ago. The first treatment was pegylated interferon plus ribavirin for only 28 weeks, because of an HCV RNA breakthrough under treatment. In 2013, he was retreated with telaprevir plus pegylated interferon plus ribavirin for 24 weeks, but at the end of the therapy, his HCV RNA was found to be 1,290,000 IU/mL. He had abdominal ultrasonography (US) examinations and alpha-fetoprotein (AFP) measurements at 6-month intervals. All seemed normal until 2015, when he underwent abdominal magnetic resonance imaging (MRI) and HCC was found by chance. His treatment began in the Oncology Clinic after the HCC was verified by biopsy, and sofosbuvir/ledipasvir was then given to him for HCV for 24 weeks. A sustained virologic response (SVR) was obtained. He is cured of his HCV infection and remains under the control of Oncology for HCC. The second patient is a 36-year-old man. He has known of his HBV infection since 2008: HBsAg and HBeAg positive, HDV RNA negative. Liver biopsy revealed grade 4, stage 3-4 according to the modified Knodell scoring system. In 2010, tenofovir treatment was begun. His abdominal US and AFP were normal. His controls took place at 6-month intervals, and his HBV DNA remained negative and his US and AFP normal continuously until 2016, when his AFP was found to be 37, above the normal range, and HCC was then found on MRI. The third patient is a 57-year-old man. When his hepatitis B infection was first diagnosed, he already had cirrhosis, and tenofovir was begun as treatment. Within a short time, he developed HCC despite normal AFP values.
Conclusion: In Mediterranean countries, including Turkey, naturally occurring pre-S/S variants account for more than 75% of all chronic hepatitis B patients. These variants may contribute to the development of progressive liver damage and hepatocarcinogenesis. HCV-induced development of HCC is a gradual process and is affected by the duration of disease and the viral genotype. All chronic viral hepatitis patients should be followed up at 6-month intervals, and not only with US and AFP for HCC. Even with proper treatment, there is always a risk of development of HCC. Chronic hepatitis patients cannot be dropped from follow-up even when treated well.
Procedia PDF Downloads 138
1614 Carbonaceous Monolithic Multi-Channel Denuders as a Gas-Particle Partitioning Tool for the Occupational Sampling of Aerosols from Semi-Volatile Organic Compounds
Authors: Vesta Kohlmeier, George C. Dragan, Juergen Orasche, Juergen Schnelle-Kreis, Dietmar Breuer, Ralf Zimmermann
Abstract:
Aerosols from hazardous semi-volatile organic compounds (SVOC) may occur in workplace air and can be found simultaneously in the particle and gas phases. For health risk assessment, it is necessary to collect particles and gases separately. This can be achieved by using a denuder for gas phase collection, combined with a filter and an adsorber for particle collection. The study focused on the suitability of carbonaceous monolithic multi-channel denuders, so-called NovaCarb™ denuders (MastCarbon International Ltd., Guildford, UK), to achieve gas-particle separation. Particle transmission efficiency experiments were performed with polystyrene latex (PSL) particles (size range 0.51-3 µm), while the time-dependent gas phase collection efficiency was analysed for polar and nonpolar SVOC (mass concentrations 7-10 mg/m³) over 2 h at 5 or 10 l/min. The experimental gas phase collection efficiency was also compared with theoretical predictions. For n-hexadecane (C16), the gas phase collection efficiency was at most 91% for one denuder and at most 98% for two denuders, while for diethylene glycol (DEG), a maximal gas phase collection efficiency of 93% for one denuder and 97% for two denuders was observed. At 5 l/min, higher gas phase collection efficiencies were achieved than at 10 l/min. The deviations between the theoretical and experimental gas phase collection efficiencies were up to 5% for C16 and 23% for DEG. Since the theoretical efficiency depends on the geometric shape and length of the denuder, the flow rate, and the diffusion coefficients of the tested substances, the obtained values define an upper limit which could be reached. Regarding particle transmission through the denuders, the use of one denuder showed transmission efficiencies around 98% for 1-3 µm particle diameters. The use of three denuders resulted in transmission efficiencies of 93-97% for the same particle sizes.
In summary, NovaCarb™ denuders are well suited for sampling aerosols of polar/nonpolar substances with particle diameters ≤ 3 µm and flow rates of 5 l/min or lower. These properties and their compact size make them suitable for use in personal aerosol samplers. This work is supported by the German Social Accident Insurance (DGUV), research contract FP371.
Keywords: gas phase collection efficiency, particle transmission, personal aerosol sampler, SVOC
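Theoretical predictions for diffusional gas collection in a channel of the kind compared against above are commonly computed with the Gormley-Kennedy penetration series for laminar tube flow. The sketch below is a generic version; the channel geometry, diffusion coefficient, and flow split are assumed values, not the NovaCarb™ specifics.

```python
import math

def penetration(D, L, Q):
    """Gormley-Kennedy penetration for fully developed laminar flow in a
    cylindrical channel (series form, adequate for mu not too small).
    D: diffusion coefficient (m^2/s), L: length (m), Q: flow per channel (m^3/s)."""
    mu = math.pi * D * L / Q
    return (0.81905 * math.exp(-3.6568 * mu)
            + 0.09753 * math.exp(-22.305 * mu)
            + 0.0325 * math.exp(-56.961 * mu)
            + 0.01544 * math.exp(-107.62 * mu))

def collection_efficiency(D, L, Q, n_channels=1):
    # a multi-channel monolith splits the total flow Q across its channels
    return 1.0 - penetration(D, L, Q / n_channels)
```

The model reproduces the trends reported above: efficiency rises with denuder length (two denuders beat one) and falls at higher flow rates.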
Procedia PDF Downloads 176
1613 Exploration of Copper Fabric in Non-Asbestos Organic Brake-Pads for Thermal Conductivity Enhancement
Authors: Vishal Mahale, Jayashree Bijwe, Sujeet K. Sinha
Abstract:
The range of thermal conductivity (TC) of friction materials (FMs) is a critical issue, since a lower TC leads to the accumulation of frictional heat on the working surface, which results in excessive fade, while a higher TC leads to excessive heat flow towards the back-plate, resulting in boiling of the brake fluid and leading to 'spongy brakes'. This phenomenon impairs the braking action, which is most undesirable. Therefore, the TC of FMs across the brake pads should not be high, while along the brake pad it should be high. To enhance TC, metals in the form of powders and fibers are used in FMs. Apart from TC improvement, metals provide strength and structural integrity to the composites. Due to its higher TC, copper (Cu) powder/fiber is the most preferred metallic ingredient in the FM industry. However, Cu powders/fibers are responsible for the generation of metallic wear debris, which has harmful effects on aquatic organisms. Hence, to avoid the problem of metallic wear debris generation while keeping the positive effect of TC improvement, the incorporation of Cu fabric in NAO brake-pads can be an innovative solution. Keeping this in view, two realistic multi-ingredient FM composites with identical formulations were developed in the form of brake-pads. One composite series consisted of a single layer of Cu fabric in the body of the brake-pad and was designated C1, while a double layer of Cu fabric was incorporated in the other brake-pad series, designated C2. The distance of the Cu fabric layer from the back-plate was kept constant for C1 and C2. One more composite (C0) was developed without Cu fabric for the sake of comparison. The developed composites were characterized for physical properties. Tribological performance was evaluated on a full-scale inertia dynamometer following the JASO C 406 testing standard. It was concluded that the Cu fabric successfully improved fade resistance by increasing the conductivity of the composite and also showed a slight improvement in wear resistance.
Worn surfaces of the pads and disc were analyzed by SEM and EDAX to study the wear mechanisms.
Keywords: brake inertia dynamometer, copper fabric, non-asbestos organic (NAO) friction materials, thermal conductivity enhancement
Procedia PDF Downloads 132
1612 Design of a Backlight Hyperspectral Imaging System for Enhancing Image Quality in Artificial Vision Food Packaging Online Inspections
Authors: Ferran Paulí Pla, Pere Palacín Farré, Albert Fornells Herrera, Pol Toldrà Fernández
Abstract:
Poor image acquisition is limiting the promising growth of industrial vision in food control. In recent years, the food industry has witnessed a significant increase in the implementation of automated quality control through artificial vision, a trend that continues to grow. During the packaging process, defects may appear that compromise the proper sealing of the products and diminish their shelf life, sanitary condition and overall properties. While failure to detect a defective product leads to major losses, food producers also aim to minimize over-rejection to avoid unnecessary waste. Thus, accuracy in the evaluation of the products is crucial, and, given the large production volumes, even small improvements have a significant impact. Recent efforts have focused on maximizing the performance of classification neural networks; nevertheless, their performance is limited by the quality of the input data. Monochrome linear backlight systems are most commonly used for online inspection of food packaging thermo-sealing zones. These simple acquisition systems fit the high cadence of production lines imposed by market demand. Nevertheless, they provide a limited amount of data, which negatively impacts classification algorithm training. A desired situation would be one where data quality is maximized, in terms of capturing the key information needed to detect defects, while maintaining a fast working pace. This work presents a backlight hyperspectral imaging system designed and implemented to replicate an industrial environment, in order to better understand the relationship between visual data quality and spectral illumination range for a variety of packed food products. Furthermore, the results led to the identification of advantageous spectral bands that significantly enhance image quality, providing clearer detection of defects.
Keywords: artificial vision, food packaging, hyperspectral imaging, image acquisition, quality control
Procedia PDF Downloads 23
1611 A Diurnal Light Based CO₂ Elevation Strategy for Up-Scaling Chlorella sp. Production by Minimizing Oxygen Accumulation
Authors: Venkateswara R. Naira, Debasish Das, Soumen K. Maiti
Abstract:
Achieving high cell densities of microalgae under the obligatory light-limiting and high-light conditions of diurnal sunlight (low-high-low variations of daylight intensity) is further limited by CO₂ supply and dissolved oxygen (DO) accumulation in large-scale photobioreactors. High DO levels cause low growth due to photoinhibition and/or photorespiration. Hence, scalable elevated CO₂ levels (% in air) and their effect on DO accumulation in a 10 L cylindrical membrane photobioreactor (a vertical tubular type) are studied here. Three CO₂ elevation strategies, biomass-based, pH-control based (types II and I) and diurnal light based, were explored to study the growth of Chlorella sp. FC2 IITG under single-sided LED lighting in the laboratory, mimicking diurnal sunlight. All experiments were conducted in fed-batch mode by maintaining the N and P sources at no less than 50% of their initial concentrations in the optimized BG-11 medium. The biomass-based strategy (2% on day 1, 2.5% on day 2 and 3% thereafter) and the well-known pH-control based type-I strategy (pH 5.8 throughout) proved lethal to FC2 growth. In both strategies, a peak DO accumulation of 150% air saturation resulted from the high photosynthetic activity caused by the higher CO₂ levels. In the pH-control based type-I strategy, the CO₂ levels automatically supplied for pH control rose beyond the inhibitory threshold of 5%. However, the pH-control based type-II strategy (pH 5.8 for 2 days, 6.3 for 3 days, 6.7 thereafter) gave a final biomass titer of up to 4.45 ± 0.05 g L⁻¹ with a peak DO of 122% air saturation, although CO₂ levels beyond 5% (in air) were again recorded towards the end. Thus, it proved sustainable for obtaining high biomass. Finally, a diurnal light based strategy (2% at low light, 2.5% at medium light and 3% at high light) was applied, on the basis that photosynthesis increases or decreases as the diurnal light intensity rises or falls.
This strategy resulted in the maximum final biomass titer of 5.33 ± 0.12 g L⁻¹, with a total biomass productivity of 0.59 ± 0.01 g L⁻¹ day⁻¹. These values are remarkably higher than those obtained at a constant 2% CO₂ level (final biomass titer: 4.26 ± 0.09 g L⁻¹; biomass productivity: 0.27 ± 0.005 g L⁻¹ day⁻¹). However, a peak DO of 135% air saturation was still observed; the diurnal light based elevation should therefore be further improved by using CO₂-enriched N₂ instead of air. To the best of our knowledge, a light-based CO₂ elevation strategy has not been reported elsewhere.
Keywords: Chlorella sp., CO₂ elevation strategy, dissolved oxygen accumulation, diurnal light based CO₂ elevation, high cell density, microalgae, scale-up
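As a sanity check on the figures above, total biomass productivity is simply the biomass gained divided by the cultivation time, which implies a run of roughly nine days. A minimal sketch; the nine-day duration and the zero-inoculum simplification are our assumptions, not values stated in the abstract:

```python
def biomass_productivity(final_titer, initial_titer, days):
    """Total biomass productivity in g L^-1 day^-1.

    Assumes productivity = (final - initial) titer / cultivation time;
    the abstract does not state the inoculum density, so 0 is a placeholder.
    """
    return (final_titer - initial_titer) / days

# Reported: 5.33 g/L final titer and 0.59 g/L/day productivity
# together imply a cultivation time of about 5.33 / 0.59 ≈ 9 days.
implied_days = 5.33 / 0.59
print(round(implied_days))                           # 9
print(round(biomass_productivity(5.33, 0.0, 9), 2))  # 0.59
```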
Procedia PDF Downloads 125
1610 Cyclic Stress and Masing Behaviour of Modified 9Cr-1Mo at RT and 300 °C
Authors: Preeti Verma, P. Chellapandi, N.C. Santhi Srinivas, Vakil Singh
Abstract:
Modified 9Cr-1Mo steel is widely used for structural components such as heat exchangers, pressure vessels and steam generators in nuclear reactors. It is also a candidate material for future metallic-fuel sodium-cooled fast breeder reactors because of its high thermal conductivity, lower thermal expansion coefficient, microstructural stability, high resistance to irradiation void swelling and higher resistance to stress corrosion cracking in water-steam systems compared to austenitic stainless steels. The components of steam generators that operate at elevated temperatures are often subjected to repeated thermal stresses as a result of temperature gradients which occur on heating and cooling during start-ups and shutdowns or during variations in the operating conditions of a reactor. These transient thermal stresses give rise to low cycle fatigue (LCF) damage. In the present investigation, strain-controlled low cycle fatigue tests were conducted at room temperature and 300 °C in the normalized and tempered condition, using total strain amplitudes in the range ±0.25% to ±0.5% at a strain rate of 10⁻² s⁻¹. The cyclic stress response at high strain amplitudes (±0.31% to ±0.5%) showed initial softening followed by hardening up to a few cycles and subsequent softening till failure. The extent of softening increased with increasing strain amplitude and temperature. Depending on the strain amplitude of the test, the stress-strain hysteresis loops displayed Masing behaviour at higher strain amplitudes and non-Masing behaviour at lower strain amplitudes at both temperatures. This is quite opposite to the usual Masing and non-Masing behaviour reported earlier for different materials. Low cycle fatigue damage was evaluated in terms of the plastic strain and plastic strain energy approaches at room temperature and 300 °C.
The plastic strain energy approach was found to match the experimental fatigue lives more closely, particularly at 300 °C, where dynamic strain aging was observed.
Keywords: modified 9Cr-1Mo steel, low cycle fatigue, Masing behavior, cyclic softening
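The two damage approaches compared above are commonly written as power laws in the number of reversals to failure. A hedged sketch of the standard textbook forms (the symbols and fitted constants below are generic, not values reported in the abstract):

```latex
% Plastic strain (Coffin-Manson) approach:
\frac{\Delta\varepsilon_p}{2} = \varepsilon'_f \,(2N_f)^{c}

% Plastic strain energy approach, with the energy per cycle
% estimated from the hysteresis loop for a Masing material:
\Delta W_p = W'_f \,(2N_f)^{\beta},
\qquad
\Delta W_p \approx \frac{1-n'}{1+n'}\,\Delta\sigma\,\Delta\varepsilon_p
```

Here $\Delta\varepsilon_p$ is the plastic strain range, $\Delta\sigma$ the stress range, $N_f$ the cycles to failure, $n'$ the cyclic strain-hardening exponent, and $\varepsilon'_f$, $c$, $W'_f$, $\beta$ are fitted constants; for non-Masing loops, as reported here at high strain amplitudes, the energy estimate must be corrected, e.g. via a master-curve construction.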
Procedia PDF Downloads 443
1609 The EU Omnipotence Paradox: Inclusive Cultural Policies and Effects of Exclusion
Authors: Emmanuel Pedler, Elena Raevskikh, Maxime Jaffré
Abstract:
Can the cultural geography of European cities be durably managed by European policies? To answer this question, two hypotheses can be proposed: (1) European cultural policies are able to erase cultural inequalities between territories through the creation of new areas of cultural attractiveness in each beneficiary neighborhood, city or country; or (2) each European region, historically rooted in a number of endogenous socio-historical, political or demographic factors, is not receptive to exogenous political influences, so that the cultural attractiveness of a territory is difficult to measure, and to influence through top-down policies, in the long term. How do these two logics, European and local, interact and contribute to the emergence of a valued, popular sense of a common European cultural identity? Does this constant interaction between historical backgrounds and new political concepts encourage a positive identification with the European project? European cultural policy programs, such as the ECC (European Capital of Culture), seek to develop new forms of civic cohesion through inclusive and participative cultural events. The cultural assets of a city elected ‘ECC’ are mobilized to attract a wide range of new audiences, including populations poorly integrated into local cultural life and consequently distant from pre-existing cultural offers. In the current context of increasingly heterogeneous individual perceptions of Europe, the ECC program aims to promote cultural forms and institutions that should accelerate both territorial and cross-border European cohesion. The new cultural consumption pattern is conceived to stimulate integration and mobility, but also to create a legitimate and transnational ideal European citizen type.
Our comparative research confronts contrasting cases of European Capitals of Culture from the south and from the north of Europe: cities recently designated under the ECC mechanism and cities elected ECC in the past, multi-centered cultural models versus highly centralized cultural models. We aim to explore the impact of European policies on urban cultural geography, but also to understand the current obstacles to their efficient implementation.
Keywords: urbanism, cultural policies, cultural institutions, European cultural capitals, heritage industries, exclusion effects
Procedia PDF Downloads 261
1608 PhD Students’ Challenges with Impact-Factor in Kazakhstan
Authors: Duishon Shamatov
Abstract:
This presentation is about Kazakh PhD students’ experiences with the impact-factor publication requirement. Since the break-up of the USSR, Kazakhstan has been attempting to improve its higher education system at the undergraduate and graduate levels. In March 2010, Kazakhstan joined the Bologna Process and entered the European space of higher education. To align with the European system, a three-level preparation of specialists (undergraduate, master and PhD) was adopted to replace the Soviet system. The changes were aimed at promoting high-quality higher education that meets the demands of the labor market and the growing needs of the industrial-innovative development of the country, and at meeting international standards. The shift to the European system has brought many benefits, but there are also some serious challenges. One of those challenges relates to the requirement for PhD candidates to publish in national and international journals: a PhD candidate must have 7 publications in total, of which one has to be in an international impact-factor journal. A qualitative study was conducted to explore PhD students’ views of their experiences with impact-factor publications. With the help of purposeful sampling, 30 PhD students from seven universities across Kazakhstan were selected for individual and focus group interviews. The key findings are as follows. While the Kazakh PhD students have no difficulty publishing in local journals, they face great challenges in attempting to publish in impact-factor journals for a range of reasons. These include, but are not limited to, a lack of research and publication skills, poor command of academic English, unfamiliarity with peer-review publication processes and expectations, and the very short time available to get published under their PhD programme requirements.
This situation is pushing some of these young scholars to explore alternative ways to get published in impact-factor journals, seeking publication by any means and often at any cost (even paying large sums of money for a publication). This, in turn, creates a myth in scholarly circles in Kazakhstan that, to get published in impact-factor journals, one must necessarily pay a great deal of money. This paper offers some policy recommendations on how to improve the preparation of future PhD candidates in Kazakhstan.
Keywords: Bologna process, impact-factor publications, post-graduate education, Kazakhstan
Procedia PDF Downloads 379
1607 Application and Evaluation of Teaching-Learning Guides Based on Swebok for the Requirements Engineering Area
Authors: Mauro Callejas-Cuervo, Andrea Catherine Alarcon-Aldana, Lorena Paola Castillo-Guerra
Abstract:
The software industry requires highly trained professionals capable of fulfilling the roles involved in the software development cycle. A large part of this task is the responsibility of higher education institutions, often through a curriculum established to orient the academic development of the students. Nowadays there are different models that support proposals for improving curricula in the area of Software Engineering, such as ACM, IEEE, ABET and Swebok, of which the last stands out, given that it manages and organises the knowledge of Software Engineering and offers a vision of both theoretical and practical aspects. Moreover, it has been applied by different universities in the pursuit of achieving coverage of the different topics and increasing the professional quality of future graduates. This research presents the structure of teaching-learning guides built from training objectives and methodological strategies mapped to the learning levels of Bloom’s taxonomy, with which it is intended to improve the delivery of the topics in the area of Requirements Engineering. Said guides were implemented and validated in a Requirements Engineering course of the Systems and Computer Engineering programme at the Universidad Pedagógica y Tecnológica de Colombia (Pedagogical and Technological University of Colombia), using a four-stage methodology: definition of the evaluation model, implementation of the guides, guide evaluation, and analysis of the results. After the collection and analysis of the data, the results show that in six out of the seven topics proposed in the Swebok guide, the percentage of students who obtained total marks within the ‘High grade’ level, that is, between 4.0 and 4.6 (on a scale of 0.0 to 5.0), was higher than the percentage of students who obtained marks within the ‘Acceptable’ range of 3.0 to 3.9.
In 86% of the topics and strategies proposed, the teaching-learning guides facilitated the students’ comprehension, analysis, and articulation of concepts and processes. In addition, the results mainly indicate that the guides strengthened the argumentative and interpretative competencies, while the remaining 14% denotes the need to reinforce the strategies regarding the propositive competence, which presented the lowest average.
Keywords: pedagogic guide, pedagogic strategies, requirements engineering, Swebok, teaching-learning process
Procedia PDF Downloads 286
1606 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models derived from the hydrodynamic equations. The overall spatial characteristics of rivers, i.e. their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical and transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation to various scenarios. The Magdalena River is a large river basin draining Colombia from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water-level fluctuations and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons, and as the meander evolves at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region, in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including an extensive topographical and channel survey using a Topcon GR5 DGPS and a River Surveyor ADCP. In addition, to characterize the erosion process occurring through the meander, extensive suspended-sediment and river-bed samples were retrieved, and soil borings were made along the banks. Based on the DEM from the digital ground mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method on an unstructured mesh. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict the overall flow processes in the region, its spatial characteristics and limitations related to the hydrostatic pressure assumption did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings; in particular, a significant helical flow was observed through the meander. Furthermore, the rapidly changing channel cross section, a consequence of severe erosion, has hindered the model’s ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
Procedia PDF Downloads 319
1605 Effect of Curing Temperature on the Textural and Rheological Properties of Gelatine-SDS Hydrogels
Authors: Virginia Martin Torrejon, Binjie Wu
Abstract:
Gelatine is a protein biopolymer obtained from the partial hydrolysis of animal tissues that contain collagen, the primary structural component of connective tissue. Gelatine hydrogels have attracted considerable research attention in recent years as an alternative to synthetic materials due to their outstanding gelling properties, biocompatibility and compostability. Surfactants, such as sodium dodecyl sulfate (SDS), are often used in hydrogel solutions as surface modifiers or solubility enhancers, and their incorporation can influence the hydrogel’s viscoelastic properties and, in turn, its processing and applications. The literature usually focuses on the impact of formulation parameters (e.g., gelatine content, gelatine strength, additives) on gelatine hydrogel properties, but processing parameters, such as curing temperature, are commonly overlooked. For example, some authors have reported a decrease in gel strength at lower curing temperatures, but there is a lack of systematic viscoelastic characterisation of high-strength gelatine and gelatine-SDS systems over a wide range of curing temperatures. This knowledge is essential to meet and adjust the technological requirements of different applications (e.g., viscosity, setting time, gel strength or melting/gelling temperature). This work investigated the effect of curing temperature (10, 15, 20, 23, 25 and 30 °C) on the elastic modulus (G’) and melting temperature of high-strength gelatine-SDS hydrogels, at 10 wt% and 20 wt% gelatine content, by small-amplitude oscillatory shear rheology coupled with Fourier transform infrared spectroscopy. It also correlates the gel strength obtained by rheological measurements with the gel strength measured by texture analysis.
The rheological behaviour of the gelatine and gelatine-SDS hydrogels strongly depended on the curing temperature, so their gel strength and melting temperature can be modified slightly to suit given processing and application needs. Lower curing temperatures led to gelatine and gelatine-SDS hydrogels with considerably higher storage moduli; however, their melting temperatures were lower than those of the weaker gels cured at higher temperatures. This effect was more pronounced at longer timescales. The behaviour is attributed to the development of thermally resistant structures in the lower-strength gels cured at higher temperatures.
Keywords: gelatine gelation kinetics, gelatine-SDS interactions, gelatine-surfactant hydrogels, melting and gelling temperature of gelatine gels, rheology of gelatine hydrogels
Procedia PDF Downloads 101
1604 Management of Acute Biliary Pathology at Gozo General Hospital
Authors: Kristian Bugeja, Upeshala A. Jayawardena, Clarissa Fenech, Mark Zammit Vincenti
Abstract:
Introduction: Biliary colic, acute cholecystitis, and gallstone pancreatitis are some of the most common surgical presentations at Gozo General Hospital (GGH). National Institute for Health and Care Excellence (NICE) guidelines advise that suitable patients with acute biliary problems should be offered a laparoscopic cholecystectomy within one week of diagnosis. There has traditionally been difficulty in achieving this, mainly due to the reluctance of some surgeons to operate in the acute setting, limited timely access to MRCP and ERCP, and organizational issues. Methodology: A retrospective study was performed involving all biliary pathology-related admissions to GGH during the two-year period of 2019 and 2020. Patients’ files and the electronic case summary (ECS) were used for data collection, which included demographic data, primary diagnosis, co-morbidities, management, waiting time to surgery, length of stay, readmissions, and reasons for readmission. NICE clinical guideline 188 (Gallstone disease) was used as the standard. Results: 51 patients were included in the study. The mean age was 58 years, and 35 (68.6%) were female. The main diagnoses on admission were biliary colic in 31 (60.8%) and acute cholecystitis in 10 (19.6%); others included gallstone pancreatitis in 3 (5.89%), chronic cholecystitis in 2 (3.92%), gall bladder malignancy in 4 (7.84%), and ascending cholangitis in 1 (1.97%). Management comprised laparoscopic cholecystectomy in 34 (66.7%), conservative treatment in 8 (15.7%) and ERCP in 6 (11.7%). The mean waiting time for laparoscopic cholecystectomy in patients with acute cholecystitis was 74 days, the range being between 3 and 146 days from the date of diagnosis. Only one patient diagnosed with acute cholecystitis and managed with laparoscopic cholecystectomy underwent surgery within the 7-day time frame. Hospital readmissions were reported in 5 patients (9.8%), due to vomiting (1), ascending cholangitis (1), and gallstone pancreatitis (3).
Discussion: The guidelines were not met for patients presenting to Gozo General Hospital with acute biliary pathology. This resulted in 5 patients being readmitted to hospital while waiting for definitive surgery. The local issues causing the delay to surgery need to be identified and steps taken to facilitate the provision of urgent cholecystectomy for suitable patients.
Keywords: biliary colic, acute cholecystitis, laparoscopic cholecystectomy, conservative management
Procedia PDF Downloads 161
1603 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE
Authors: Parimalah Velo, Ahmad Zakaria
Abstract:
Introduction: Monte Carlo simulations of preclinical imaging systems open opportunities for new research, ranging from hardware design to the discovery of new imaging applications. A simulation system that accurately models an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for a Siemens e.cam single-head gamma camera. After validating the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising 2 lobes of 80 mm diameter, 1 hot spot, and 3 cold spots; this geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and the actual experimental setup. After image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image and the dimensions of the hot spot. The algorithm for each quantification is described in detail. Results: The results show that the difference between the contrast levels of the simulated and experimental images is within 2%. The difference in total counts between the simulation and the actual study is 0.4%. The activity estimation results show that the relative difference between estimated and actual activity is 4.62% for the experimental setup and 3.03% for the simulation.
The deviation in the estimated diameter of the hot spot is similar for the simulation and the experimental study, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and the experimental data.
Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging
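The quantitative comparisons above (contrast, total counts, activity) reduce to a few simple image statistics. A minimal sketch of how they might be computed; the contrast definition (hot-spot mean vs. background mean) and the toy image are our assumptions, since the abstract only states that each quantification algorithm is described in the full paper:

```python
def mean_in_roi(image, roi):
    """Mean pixel value over a set of (row, col) coordinates."""
    return sum(image[r][c] for r, c in roi) / len(roi)

def contrast(image, hot_roi, bg_roi):
    # Percent contrast of a hot spot relative to background;
    # the (ROI - background) / background convention is our assumption.
    hot, bg = mean_in_roi(image, hot_roi), mean_in_roi(image, bg_roi)
    return 100.0 * (hot - bg) / bg

def relative_difference(estimated, actual):
    # Relative difference (%) between image-estimated and true activity.
    return 100.0 * abs(estimated - actual) / actual

# Toy 128x128 planar "image": uniform background with one hot spot
image = [[1.0] * 128 for _ in range(128)]
hot_roi = [(r, c) for r in range(60, 68) for c in range(60, 68)]
bg_roi = [(r, c) for r in range(10, 30) for c in range(10, 30)]
for r, c in hot_roi:
    image[r][c] = 4.0

total_counts = sum(map(sum, image))      # total-count comparison metric
print(contrast(image, hot_roi, bg_roi))  # 300.0 for this toy image
```

A simulated-vs-experimental comparison would apply the same functions to both images and report the difference, e.g. `relative_difference(estimated_activity, dispensed_activity)`.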
Procedia PDF Downloads 271
1602 Association between a Serotonin Re-Uptake Transporter Gene Polymorphism and Mucosal Serotonin Level in Women Patients with Irritable Bowel Syndrome and Healthy Control: A Pilot Study from Northern India
Authors: Sunil Kumar, Uday C. Ghoshal
Abstract:
Background and aims: Serotonin (5-hydroxytryptamine, 5-HT) is an important factor in gut function, playing key roles in intestinal peristalsis and secretion, and in sensory signaling in the brain-gut axis. Its removal from its sites of action is mediated by a specific protein called the serotonin reuptake transporter (SERT). Polymorphisms in the promoter region of the SERT gene affect transcriptional activity, resulting in altered 5-HT reuptake efficiency. Such functional polymorphisms may underlie the disturbances in gut function seen in disorders such as irritable bowel syndrome (IBS). The aim of this study was to assess the potential association between SERT polymorphisms and the diarrhea-predominant IBS (D-IBS) phenotype. Subjects: A total of 36 northern Indian female patients and 55 northern Indian female healthy controls (HC) were genotyped. Methods: Leucocyte DNA of all subjects was analyzed by polymerase chain reaction-based methods for SERT polymorphisms, specifically the insertion/deletion polymorphism in the promoter (SERT-P). Statistical analysis was performed to assess the association of SERT-P alleles with the D-IBS phenotype. Results: The distribution of SERT-P genotypes was comparable between female patients with IBS and HC (p = 0.086). However, the frequency of the SERT-P deletion/deletion genotype was significantly higher in female patients with D-IBS than with C-IBS or A-IBS [17/19 (89.5%) vs. 4/12 (33.3%) vs. 1/5 (20%), p = 0.001, respectively]. The mucosal level of serotonin was higher in D-IBS than in C-IBS and A-IBS [median, range: 159.26, 98.78–212.1 vs. 110.4, 67.87–143.53 vs. 92.34, 78.8–166.3 pmol/mL, p = 0.001, respectively]. The mucosal level of serotonin was also higher in female IBS patients with the SERT-P deletion/deletion genotype than with the deletion/insertion and insertion/insertion genotypes [157.65, 67.87–212.1 vs. 110.4, 78.1–143.32 vs. 100.5, 69.1–132.03 pmol/mL, p = 0.001, respectively].
Patients with D-IBS with the deletion/deletion genotype more often reported symptoms of abdominal pain and discomfort (p=0.025) and bloating (p=0.039). Symptom development following lactose ingestion was strongly associated with D-IBS and the SERT-P deletion/deletion genotype (p=0.004). Conclusions: A significant association was observed between D-IBS and the SERT-P deletion/deletion genotype, suggesting that the serotonin transporter is a potential candidate gene for D-IBS in women.
Keywords: serotonin, SERT, irritable bowel syndrome, genetic polymorphism
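The genotype-by-subtype association above can be checked with a Pearson chi-square test of independence on the reported counts (17/19, 4/12 and 1/5 deletion/deletion carriers). A pure-Python sketch; note that the abstract does not state which test the authors used, and two cells have small expected counts, so the chi-square approximation is rough:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# deletion/deletion carriers vs. non-carriers, per IBS subtype (from the abstract)
table = [
    [17, 2],  # D-IBS: 17 of 19
    [4, 8],   # C-IBS: 4 of 12
    [1, 4],   # A-IBS: 1 of 5
]
chi2 = chi_square(table)
# df = (3-1)*(2-1) = 2; chi2 ≈ 13.88 exceeds the p = 0.001 critical
# value of 13.82, consistent with the reported p = 0.001.
print(round(chi2, 2))  # 13.88
```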
Procedia PDF Downloads 333
1601 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity
Authors: Ladislav Écsi, Roland Jančo
Abstract:
Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled to give an overall stress-free intermediate configuration, since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, the inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be called truly continuum-based. This paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in a real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case, or the compatible plastic deformation gradient in the latter case, in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules.
The unstressed intermediate configuration, i.e., the unloaded configuration after plastic flow from which the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent, with both elastic and plastic components. The residual strains and stresses originate from the difference between the gradient of the compatible plastic/permanent displacement field and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients, and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements in which the intermediate configuration is calculated, and its analysis results are compared with those of the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility
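For readers unfamiliar with the notation, the objects discussed above are, in standard multiplicative plasticity (a textbook sketch, not the authors' reformulation):

```latex
% Multiplicative split of the deformation gradient:
\mathbf{F} = \mathbf{F}^e \mathbf{F}^p

% Plastic velocity gradient and an associative flow rule,
% with plastic multiplier \dot{\lambda} and yield function f:
\mathbf{L}^p = \dot{\mathbf{F}}^p (\mathbf{F}^p)^{-1},
\qquad
\mathbf{L}^p = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}}
```

The product $\dot{\lambda}\,\partial f/\partial \boldsymbol{\sigma}$ is the "plastic multiplier times yield surface normal" referred to in the abstract; the abstract's claim is that $\mathbf{F}^p$ obtained this way is not, in general, the gradient of any plastic displacement field, which is the incompatibility the proposed reformulation addresses.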
Procedia PDF Downloads 123
1600 The Literary Works of Sir Sayeed Ahmed Khan and Its Impact on Indian Muslims
Authors: Mohammad Arifur Rahman
Abstract:
The research study aims to bring to light the contribution of Sir Sayeed Ahmed in the realm of education and literature. Sir Sayeed Ahmed Khan (1817–1898), commonly known as Sir Sayeed, was an Indian Muslim leader, Islamic modernist, philosopher and social reformer of the nineteenth century. He earned a reputation as a distinguished scholar while working as a jurist for British India. During the Indian Rebellion of 1857, he remained loyal to the British Empire and was noted for his actions in saving European lives. Believing that the future of Muslims was threatened by the rigidity of their orthodox outlook, Sir Sayeed began promoting Western-style scientific education by founding modern schools and journals and organizing Muslim entrepreneurs. He was one of the founders of the Aligarh Movement and Aligarh Muslim University. From his early life he focused on writing on various subjects, mainly educational issues, and launched his attempts to revive the spirit of progress within the Muslim community of India. Modern education therefore became the pivot of his movement for the regeneration of the Indian Muslims. Sayeed Ahmed Khan also found time for literary and scholarly pursuits. The range of his literary and scholarly interests was very wide, comprising all the major areas: education, law, philosophy, history, politics, archeology, journalism, Muslim modernism, literature, science and culture, largely grounded in his comprehensive religious ideas, which must be carefully considered in order to understand him and his contribution in context. The books written by him, and the books composed about him by some of the great writers such as Altaf Hussein Hali, Hafeez Malick, Nasim Rashid, and Christian W. Troll, were studied to understand him and his contribution. The readers of this paper would benefit from dispelling the hazy ideas about this great man of India who made an immense contribution.
Further research should be undertaken to explore the different sides of his thought and personality. Qualitative and historical methods are adopted for the accomplishment of this work.
Keywords: thinker, reformer, educator and philosopher, modernist
Procedia PDF Downloads 101
1599 Impact of Urbanization Growth on Disease Spread and Outbreak Response: Exploring Strategies for Enhancing Resilience
Authors: Raquel Vianna Duarte Cardoso, Eduarda Lobato Faria, José Jorge Boueri
Abstract:
Rapid urbanization has transformed the global landscape, presenting significant challenges to public health. This article delves into the impact of urbanization on the spread of infectious diseases in cities and identifies crucial strategies to enhance urban community resilience. Massive urbanization over recent decades has created conducive environments for the rapid spread of diseases due to population density, mobility, and unequal living conditions. Urbanization has been observed to increase exposure to pathogens and foster conditions conducive to disease outbreaks, including seasonal flu, vector-borne diseases, and respiratory infections. To tackle these issues, a range of cross-disciplinary approaches are suggested. These encompass the enhancement of urban healthcare infrastructure, emphasizing the need for robust investments in hospitals, clinics, and healthcare systems to keep pace with the burgeoning healthcare requirements in urban environments. Moreover, the establishment of disease monitoring and surveillance mechanisms is indispensable, as it allows for the timely detection of outbreaks and enables swift responses. Additionally, community engagement and education play a pivotal role in advocating for personal hygiene, vaccination, and preventive measures, thereby diminishing disease transmission. Lastly, the promotion of sustainable urban planning, which includes the creation of cities with green spaces, access to clean water, and proper sanitation, can significantly mitigate the risks associated with waterborne and vector-borne diseases. The article is based on a review of scientific literature, and it offers a comprehensive insight into the complexities of the relationship between urbanization and health.
It places a strong emphasis on the urgent need for integrated approaches to improve urban resilience in the face of health challenges.
Keywords: infectious diseases dissemination, public health, urbanization impacts, urban resilience
Procedia PDF Downloads 77
1598 Research on Configuration of Large-Scale Linear Array Feeder Truss Parabolic Cylindrical Antenna of Satellite
Authors: Chen Chuanzhi, Guo Yunyun
Abstract:
The large linear array feeding parabolic cylindrical antenna of the satellite has the ability of large-area line focusing, forming multi-directional beam clusters simultaneously in a given azimuth plane and elevation plane, responding quickly to different orientations and directions over a wide frequency range, dual aiming in frequency and direction, and combining space power. The large-diameter parabolic cylindrical antenna has therefore become one of the new development directions for spaceborne antennas. Limited by the size of the rocket fairing, a large-diameter spaceborne antenna must have low mass and a deployment function: after reaching orbit, the antenna is deployed by expansion and then stabilized. However, few existing structural types can be used to construct large cylindrical shell structures, which greatly limits the development and application of such antennas. Aiming at high structural efficiency, the geometrical characteristics of parabolic cylinders and the topological mapping law from the geometry to the expandable truss are studied, and the basic configuration of a deployable truss with a cylindrical shell is established. A modular truss parabolic cylindrical antenna is then designed in this paper. The antenna has the characteristics of a stable structure, high precision of reflecting-surface formation, a controllable motion process, a high storage rate, and low weight. On the basis of the overall configuration synthesis theory and optimization method, the structural stiffness of the modular truss parabolic cylindrical antenna is improved, and the bearing density and impact resistance of the support structure are improved based on the optimal distribution method for the internal tension forming the reflector.
Finally, a truss-type cylindrical deployable support structure with a high stowed-to-deployed ratio, high stiffness, controllable deployment, and low mass is successfully developed, laying the foundation for the application of large-diameter parabolic cylindrical antennas in satellite antennas.
Keywords: linear array feed antenna, truss type, parabolic cylindrical antenna, spaceborne antenna
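The parabolic-cylinder geometry underlying the modular truss can be illustrated with a minimal sketch. All numeric values (focal length, aperture, module count) are illustrative assumptions, not figures from the abstract.

```python
# Hypothetical sketch: attachment-node coordinates for a modular truss
# approximating a parabolic cylinder y = x^2 / (4f) in the azimuth plane.
# Focal length, aperture, and module count below are illustrative only.

def parabola_nodes(focal_length, aperture, n_modules):
    """Return (x, y) coordinates of truss nodes sampled uniformly
    across the aperture of a parabolic cylinder."""
    nodes = []
    for i in range(n_modules + 1):
        x = -aperture / 2 + i * aperture / n_modules
        y = x * x / (4.0 * focal_length)  # parabola depth at offset x
        nodes.append((x, y))
    return nodes

# Example: 10 m aperture, 6 m focal length, 8 identical modules.
nodes = parabola_nodes(focal_length=6.0, aperture=10.0, n_modules=8)
```

Because the cylinder is translationally symmetric along its axis, identical planar node sets like this one can be repeated along the feed direction, which is what makes a modular truss a natural fit for this reflector shape.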
Procedia PDF Downloads 158
1597 Incidence of Lymphoma and Gonorrhea Infection: A Retrospective Study
Authors: Diya Kohli, Amalia Ardeljan, Lexi Frankel, Jose Garcia, Lokesh Manjani, Omar Rashid
Abstract:
Gonorrhea is the second most common sexually transmitted disease (STD) in the United States of America. Gonorrhea affects the urethra, rectum, or throat and, in females, the cervix. Lymphoma is a cancer of the immune network called the lymphatic system, which includes the lymph nodes/glands, spleen, thymus gland, and bone marrow; it can affect many organs in the body. When a lymphocyte develops a genetic mutation, it proliferates rapidly, producing many mutated lymphocytes. Multiple studies have explored the incidence of cancer in people infected with STDs such as Gonorrhea. For instance, studies conducted by Wang Y-C and colleagues, as well as Caini, S and colleagues, established a direct correlation between Gonorrhea infection and the incidence of prostate cancer. We hypothesized that Gonorrhea infection also affects the incidence of Lymphoma in patients. This research study aimed to evaluate the correlation between Gonorrhea infection and the incidence of Lymphoma. The data for the research were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database, which was used to compare patients infected with Gonorrhea against uninfected patients and establish a correlation with the prevalence of Lymphoma using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for academic research. Standard statistical methods were applied throughout. For the period between January 2010 and December 2019, the query resulted in 254 and 808 Lymphoma patients in the infected and control groups, respectively. The two groups were matched by age range and CCI score. The incidence of Lymphoma was 0.998% (254 patients out of 25,455) in the Gonorrhea group (patients infected with Gonorrhea who were Lymphoma positive) compared to 3.174% (808 patients out of 25,455) in the control group (patients negative for Gonorrhea but positive for Lymphoma).
This was statistically significant, with a p-value < 2.2 × 10⁻¹⁶ and an OR = 0.431 (95% CI 0.381-0.487). The patients were then matched by antibiotic treatment to avoid treatment bias. The incidence of Lymphoma was 1.215% (82 patients out of 6,748) in the Gonorrhea group compared to 2.949% (199 patients out of 6,748) in the control group. This was statistically significant, with a p-value < 5.4 × 10⁻¹⁰ and an OR = 0.468 (95% CI 0.367-0.596). The study shows a statistically significant correlation between Gonorrhea and a reduced incidence of Lymphoma. Further evaluation is recommended to assess the potential of Gonorrhea in reducing Lymphoma.
Keywords: gonorrhea, lymphoma, STDs, cancer, ICD
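The odds-ratio statistic reported above can be sketched with the standard 2×2-table formula and a Woolf (log-scale) 95% confidence interval. Note this is a generic illustration: the study's exact matching procedure is not described in the abstract, so recomputing from the rounded counts given here need not reproduce the reported OR = 0.468 exactly.

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, control_cases, control_noncases):
    """Odds ratio for a 2x2 contingency table, with a 95% Woolf
    (log-scale) confidence interval."""
    a, b, c, d = exposed_cases, exposed_noncases, control_cases, control_noncases
    oratio = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(oratio) - 1.96 * se_log)
    hi = math.exp(math.log(oratio) + 1.96 * se_log)
    return oratio, (lo, hi)

# Counts from the treatment-matched comparison in the abstract:
# 82 lymphoma cases among 6,748 gonorrhea patients vs. 199 among 6,748 controls.
or_value, ci = odds_ratio(82, 6748 - 82, 199, 6748 - 199)
```

An OR below 1 with a confidence interval excluding 1 is what supports the abstract's conclusion of a reduced Lymphoma incidence in the Gonorrhea group.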
Procedia PDF Downloads 195
1596 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, namely Artificial Neural Network, Random Forest, and Support Vector Machine. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms; the conventional method, however, is the better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural network, random forest, and support vector machine are trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring additional computationally intensive response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
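The ground-motion-model step can be sketched in a few lines with scikit-learn's Random Forest. This is a minimal illustration on synthetic data, not the authors' model: the attenuation relation, noise level, and predictor set (magnitude and distance only) are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "catalog" (illustrative only): magnitude M and distance R (km).
n = 500
M = rng.uniform(4.0, 7.5, n)
R = rng.uniform(5.0, 200.0, n)

# Hypothetical attenuation: ln(PGA) grows with magnitude, decays with
# ln(distance), plus record-to-record noise. Stands in for observed IMs.
ln_pga = 1.0 * M - 1.3 * np.log(R) - 2.0 + rng.normal(0.0, 0.3, n)

X = np.column_stack([M, R])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ln_pga)

# Predicted ln(PGA) should reflect the physically sound trends the
# abstract mentions: magnitude scaling and distance attenuation.
near = model.predict([[6.5, 10.0]])[0]    # large event, close site
far = model.predict([[6.5, 150.0]])[0]    # large event, distant site
small = model.predict([[4.5, 10.0]])[0]   # small event, close site
```

The abstract's observation that no pre-defined functional form is required is visible here: the forest recovers the magnitude and distance trends directly from the data.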
Procedia PDF Downloads 106
1595 A Simplified, Low-Cost Mechanical Design for an Automated Motorized Mechanism to Clean Large Diameter Pipes
Authors: Imad Khan, Imran Shafi, Sarmad Farooq
Abstract:
Large diameter pipes, barrels, tubes, and ducts are used in a variety of applications covering civil and defense-related technologies, including heating/cooling networks, sign poles, bracing, casing, and artillery and tank gun barrels. These large diameter assemblies require regular inspection and cleaning to increase their life and reduce replacement costs. This paper describes the design, development, and testing results of an efficient yet simplified, low-maintenance mechanical design, driven by an electric motor and controlled with minimal essential electronics, suitable for operation by non-technical staff. The proposed solution provides a simplified user interface and an automated cleaning mechanism that requires a single user to optimally clean pipes and barrels in the range of 105 mm to 203 mm caliber. The proposed system moves a specially designed brush linearly along the barrel using a chain of specified strength and a pulley anchor attached to both ends of the barrel. A specially designed and manufactured gearbox is coupled with an AC motor to move the contact brush with high torque for efficient cleaning. A suitably powered AC motor is fixed to the front adapter mounted on the muzzle side, whereas the rear adapter carries a pulley-based anchor mounted towards the breech block in the case of a gun barrel. A large-surface brush with a mix of soft nylon and hard copper bristles is connected through a strong steel chain to the motor and anchor pulley. The system is equipped with limit switches to automatically reverse the direction of travel when the brush reaches either end. The testing results, based on carefully established performance indicators, indicate the superiority of the proposed user-friendly cleaning mechanism vis-à-vis its life cycle cost.
Keywords: pipe cleaning mechanism, limiting switch, pipe cleaning robot, large pipes
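The limit-switch behavior described above can be sketched as a tiny state machine. This is a hypothetical illustration of the logic, not the authors' control code; the switch names and pass counter are assumptions for the example.

```python
# Hypothetical sketch of the limit-switch logic: the drive reverses the
# brush's direction of travel whenever an end-of-travel switch is tripped.

class BrushDrive:
    FORWARD, REVERSE = 1, -1  # toward muzzle / toward breech

    def __init__(self):
        self.direction = self.FORWARD
        self.passes = 0  # completed end-to-end cleaning passes

    def on_limit_switch(self, end):
        """Called when the brush trips the switch at 'muzzle' or 'breech'."""
        if end == "muzzle" and self.direction == self.FORWARD:
            self.direction = self.REVERSE
            self.passes += 1
        elif end == "breech" and self.direction == self.REVERSE:
            self.direction = self.FORWARD
            self.passes += 1

drive = BrushDrive()
drive.on_limit_switch("muzzle")   # reached muzzle end: reverse
drive.on_limit_switch("breech")   # back at breech end: forward again
```

Keeping the reversal decision in the switch handler, rather than in a timer, is what lets the same unit serve barrels of different lengths without reconfiguration.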
Procedia PDF Downloads 110