1086 Non-Melanoma Skin Cancer of Cephalic Extremity – Clinical and Histological Aspects
Authors: Razvan Mercut, Mihaela Ionescu, Vlad Parvanescu, Razvan Ghita, Tudor-Gabriel Caragea, Cristina Simionescu, Marius-Eugen Ciurea
Abstract:
Introduction: Over the past years, the incidence of non-melanoma skin cancer (NMSC) has continuously increased, being one of the most commonly diagnosed carcinomas of the cephalic extremity. NMSC regroups basal cell carcinoma (BCC), squamous cell carcinoma (SCC), Merkel cell carcinoma, cutaneous lymphoma, and sarcoma. The most common forms are BCC and SCC, both still implying a significant level of morbidity due to local invasion (especially BCC), even if the overall death rates are declining. The objective of our study was the evaluation of clinical and histological aspects of NMSC for a group of patients with BCC and SCC, from Craiova, a major south-western city in Romania. Material and method: Our study lot comprised 65 patients, with an almost equal distribution of sexes, and ages between 23-91 years old (mean value ± standard deviation 62.61±16.67), all treated within the Clinic of Plastic Surgery and Reconstructive Microsurgery, Clinical Emergency County Hospital Craiova, Romania, between 2019-2020. In order to determine the main morphological characteristics of both studied cancers, we used paraffin embedding techniques, with various staining methods: hematoxylin-eosin, Masson's trichrome stain with aniline blue, and Periodic acid-Schiff with Alcian Blue. The statistical study was completed using Microsoft Excel (Microsoft Corp., Redmond, WA, USA), with XLSTAT (Addinsoft SARL, Paris, France). Results: The overall results of our study indicate that BCC accounts for 67.69% of all NMSC forms; SCC covers 27.69%, while 4.62% are represented by other forms. The most frequent site is the nose for BCC (27.69%, 18 patients), followed by the preauricular regions, forehead, and periorbital areas. For patients with SCC, tumors were mainly located at lip level (66.67%, 12 patients). The analysis of NMSC histological forms indicated that nodular BCC is predominant (45.45%, 20 patients), as well as ulcero-vegetant SCC (38.89%, 7 patients).
We have not identified any topographic characteristics or NMSC forms significantly related to age or sex. Conclusions: The most frequent NMSC form identified in our study lot was BCC. The preferred location for BCC was the nose. For SCC, the oral cavity was the most frequent anatomical site, especially the lips. Nodular BCC and ulcero-vegetant SCC were the most commonly identified histological types. Our findings emphasize the need for periodic screening, in order to improve prevention and early treatment of these malignancies.
Keywords: non-melanoma skin cancer, basal cell carcinoma, squamous cell carcinoma, histological
Procedia PDF Downloads 189
1085 Treatment of Low-Grade Iron Ore Using Two Stage Wet High-Intensity Magnetic Separation Technique
Authors: Moses C. Siame, Kazutoshi Haga, Atsushi Shibayama
Abstract:
This study investigates the removal of silica, alumina and phosphorus as impurities from Sanje iron ore using wet high-intensity magnetic separation (WHIMS). Sanje iron ore contains low-grade hematite ore found in the Nampundwe area of Zambia, from which iron is to be used as the feed in the steelmaking process. The chemical composition analysis using an X-ray fluorescence spectrometer showed that Sanje low-grade ore contains 48.90 mass% of hematite (Fe2O3) with 34.18 mass% as the iron grade. The ore also contains silica (SiO2) and alumina (Al2O3) of 31.10 mass% and 7.65 mass% respectively. The mineralogical analysis using an X-ray diffraction spectrometer showed hematite and silica as the major mineral components of the ore, while magnetite and alumina exist as minor mineral components. Mineral particle distribution analysis was done using a scanning electron microscope with X-ray energy dispersive spectrometry (SEM-EDS), and images showed that the average mineral size distribution of alumina-silicate gangue particles is on the order of 100 μm, existing as iron-bearing interlocked particles. Magnetic separation was done using a series L model 4 magnetic separator. The effect of various magnetic separation parameters such as magnetic flux density, particle size, and pulp density of the feed was studied during the magnetic separation experiments. The ore with an average particle size of 25 µm and pulp density of 2.5% was concentrated using a pulp flow of 7 L/min. The results showed that 10 T was the optimal magnetic flux density, which enhanced the recovery of 93.08% of iron with a 53.22 mass% grade. Gangue mineral particles containing 12 mass% silica and 3.94 mass% alumina remained in the concentrate; therefore, the concentrate was further treated in a second-stage WHIMS using the same parameters as in the first stage. The second stage recovered 83.41% of iron with a 67.07 mass% grade. Silica was reduced to 2.14 mass% and alumina to 1.30 mass%.
Accordingly, phosphorus was also reduced to 0.02 mass%. Therefore, the two-stage magnetic separation process was established using these results.
Keywords: Sanje iron ore, magnetic separation, silica, alumina, recovery
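The recovery and grade figures above can be cross-checked with the standard two-product mass balance, recovery = (mass yield × concentrate grade) / feed grade. A minimal sketch (the function name and the rounding are ours; all grades and recoveries are taken from the abstract):

```python
def mass_yield(recovery, feed_grade, conc_grade):
    # Two-product balance: recovery = yield * conc_grade / feed_grade,
    # so the implied mass pull of the concentrate is:
    return recovery * feed_grade / conc_grade

# Stage 1: 93.08% Fe recovery, 34.18 mass% Fe feed -> 53.22 mass% Fe concentrate
y1 = mass_yield(0.9308, 34.18, 53.22)
# Stage 2: 83.41% Fe recovery, stage-1 concentrate as feed -> 67.07 mass% Fe
y2 = mass_yield(0.8341, 53.22, 67.07)
# Overall iron recovery across the two stages
overall_recovery = 0.9308 * 0.8341
print(f"stage-1 yield ~{y1:.1%}, stage-2 yield ~{y2:.1%}, overall Fe recovery ~{overall_recovery:.1%}")
```

The overall figure (roughly 78% of the feed iron reporting to the final 67.07 mass% concentrate) follows directly from multiplying the stage recoveries.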
Procedia PDF Downloads 259
1084 By Removing High-Performance Aerobic Scope Phenotypes, Capture Fisheries May Reduce the Resilience of Fished Populations to Thermal Variability and Compromise Their Persistence into the Anthropocene
Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts
Abstract:
For the persistence of fished populations in the Anthropocene, it is critical to predict how they will respond to the coupled threats of exploitation and climate change, for adaptive management. The resilience of fished populations will depend on their capacity for physiological plasticity and acclimatization in response to environmental shifts. However, there is evidence for the selection of physiological traits by capture fisheries. Hence, fish populations may have a limited scope for the rapid expansion of their tolerance ranges or physiological adaptation under fishing pressures. To determine the physiological vulnerability of fished populations in the Anthropocene, metabolic performance was compared between a fished and a spatially protected Chrysoblephus laticeps population in response to thermal variability. Individual aerobic scope phenotypes were quantified using intermittent-flow respirometry by comparing changes in the energy expenditure of each individual at ecologically relevant temperatures, mimicking the variability experienced as a result of upwelling and downwelling events. The proportions of high- and low-performance individuals were compared between the fished and spatially protected populations. The fished population had limited aerobic scope phenotype diversity and fewer high-performance phenotypes, resulting in a significantly lower aerobic scope curve across low (10 °C) and high (24 °C) thermal treatments. The performance of fished populations may be compromised with predicted future increases in cold upwelling events. This requires the conservation of the physiologically fittest individuals in spatially protected areas, from which they can recruit into nearby fished areas, as a climate resilience tool.
Keywords: climate change, fish physiology, metabolic shifts, over-fishing, respirometry
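In intermittent-flow respirometry, aerobic scope is conventionally the difference between the maximum and standard metabolic rates, each estimated from the slope of the oxygen decline during a closed measurement phase, with MO₂ = |slope| × (respirometer volume − fish volume) / fish mass. A stdlib-only sketch; the traces, volumes, and mass below are hypothetical, not data from this study:

```python
def slope(ts, ys):
    """Ordinary least-squares slope of O2 concentration vs. time."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

def mo2(ts, o2_mg_per_L, chamber_L, fish_L, mass_kg):
    """Oxygen uptake (mg O2 / kg / h) from one closed measurement phase."""
    return abs(slope(ts, o2_mg_per_L)) * (chamber_L - fish_L) / mass_kg

# Hypothetical traces: time in hours, O2 in mg/L, for a 0.5 kg fish
smr = mo2([0, 0.25, 0.5], [8.0, 7.9, 7.8], chamber_L=10.0, fish_L=0.5, mass_kg=0.5)  # resting
mmr = mo2([0, 0.25, 0.5], [8.0, 7.5, 7.0], chamber_L=10.0, fish_L=0.5, mass_kg=0.5)  # post-exercise
aerobic_scope = mmr - smr  # the per-individual phenotype compared between populations
```

Repeating this per individual at each test temperature yields the aerobic scope curves compared between the fished and protected populations.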
Procedia PDF Downloads 128
1083 Treatment of a Galvanization Wastewater in a Fixed-Bed Column Using L. hyperborea and P. canaliculata Macroalgae as Natural Cation Exchangers
Authors: Tatiana A. Pozdniakova, Maria A. P. Cechinel, Luciana P. Mazur, Rui A. R. Boaventura, Vitor J. P. Vilar.
Abstract:
Two brown macroalgae, Laminaria hyperborea and Pelvetia canaliculata, were employed as natural cation exchangers in a fixed-bed column for Zn(II) removal from a galvanization wastewater. The column (4.8 cm internal diameter) was packed with 30-59 g of previously hydrated algae up to a bed height of 17-27 cm. The wastewater or eluent was percolated using a peristaltic pump at a flow rate of 10 mL/min. The effluent used in each experiment presented similar characteristics: pH of 6.7, 55 mg/L of chemical oxygen demand, and about 300, 44, 186 and 244 mg/L of sodium, calcium, chloride and sulphate ions, respectively. The main difference was the nitrate concentration: 20 mg/L for the effluent used with L. hyperborea and 341 mg/L for the effluent used with P. canaliculata. The inlet zinc concentration also differed slightly: 11.2 mg/L for the L. hyperborea and 8.9 mg/L for the P. canaliculata experiments. The breakthrough time was approximately 22.5 hours for both macroalgae, corresponding to a service capacity of 43 bed volumes. This indicates that 30 g of biomass is able to treat 13.5 L of the galvanization wastewater. The uptake capacities at the saturation point were similar to those obtained in batch studies (unpublished data) for both algae. After column exhaustion, desorption with 0.1 M HNO3 was performed. Desorption using 9 and 8 bed volumes of eluent achieved an efficiency of 100 and 91%, respectively, for L. hyperborea and P. canaliculata. After elution with nitric acid, the column was regenerated using different strategies: i) converting all the binding sites to the sodium form by passing a solution of 0.5 M NaCl until a final pH of 6.0 was achieved; ii) passing only tap water in order to increase the solution pH inside the column to 3.0, in which case the second sorption cycle was performed using protonated algae.
In the first approach, distilled water was passed through the column in order to remove the excess of salt, leading to destruction of the algae structure, and the column collapsed. Using the second approach, the algae remained intact during three consecutive sorption/desorption cycles without loss of performance.
Keywords: biosorption, zinc, galvanization wastewater, packed-bed column
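The service-capacity figures quoted above are consistent with the column geometry and flow rate: one bed volume is the packed-bed cylinder volume, and the volume treated at breakthrough is flow rate × breakthrough time. A quick check (the 17.4 cm bed height is our assumption, chosen within the reported 17-27 cm range):

```python
import math

def bed_volumes(id_cm, bed_height_cm, flow_mL_min, breakthrough_h):
    bed_mL = math.pi * (id_cm / 2) ** 2 * bed_height_cm  # packed-bed cylinder volume
    treated_mL = flow_mL_min * 60 * breakthrough_h       # volume percolated at breakthrough
    return treated_mL / bed_mL, treated_mL / 1000.0

# 4.8 cm ID column, 10 mL/min, 22.5 h breakthrough; 17.4 cm bed height assumed
bv, treated_L = bed_volumes(4.8, 17.4, 10, 22.5)
print(f"treated ~{treated_L:.1f} L in ~{bv:.0f} bed volumes")
```

This reproduces the reported 13.5 L treated and roughly 43 bed volumes.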
Procedia PDF Downloads 312
1082 Kirchhoff Type Equation Involving the p-Laplacian on the Sierpinski Gasket Using Nehari Manifold Technique
Authors: Abhilash Sahu, Amit Priyadarshi
Abstract:
In this paper, we will discuss the existence of weak solutions of the Kirchhoff type boundary value problem on the Sierpinski gasket, where S denotes the Sierpinski gasket in R² and S₀ is the intrinsic boundary of the Sierpinski gasket. M: R → R is a positive function and h: S × R → R is a suitable function which is a part of our main equation. ∆p denotes the p-Laplacian, where p > 1. First of all, we will define a weak solution for our problem, and then we will show the existence of at least two solutions for the above problem under suitable conditions. There is no well-known concept of a generalized derivative of a function on a fractal domain. Recently, the notion of differential operators such as the Laplacian and the p-Laplacian on fractal domains has been defined. We recall this result first, and then we address the above problem. In view of the literature, Laplacian and p-Laplacian equations are studied extensively on regular domains (open connected domains), in contrast to fractal domains. On fractal domains, Laplacian equations have been studied more than p-Laplacian ones, probably because in that case the corresponding function space is reflexive, and many minimax theorems which work for regular domains are applicable there, which is not the case for the p-Laplacian. This motivates us to study equations involving the p-Laplacian on the Sierpinski gasket. Problems on fractal domains lead to nonlinear models such as reaction-diffusion equations on fractals, problems on elastic fractal media, and fluid flow through fractal regions, etc. We have studied the above p-Laplacian equations on the Sierpinski gasket using the fibering map technique on the Nehari manifold. Many authors have studied the Laplacian and p-Laplacian equations on regular domains using this Nehari manifold technique. In general, the Euler functional associated with such a problem is Frechet or Gateaux differentiable, so a critical point becomes a solution to the problem.
Also, the function space they consider is reflexive, and hence one can extract a weakly convergent subsequence from a bounded sequence. But in our case, neither the Euler functional is differentiable nor is the function space known to be reflexive. Overcoming these issues, we are still able to prove the existence of at least two solutions of the given equation.
Keywords: Euler functional, p-Laplacian, p-energy, Sierpinski gasket, weak solution
Procedia PDF Downloads 234
1081 An Analysis of LoRa Networks for Rainforest Monitoring
Authors: Rafael Castilho Carvalho, Edjair de Souza Mota
Abstract:
As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those using technologies like Low Power Wide Area Networks (LPWAN). Promising, reliable, secure and with low energy consumption, LPWAN can connect thousands of IoT devices, and in particular, LoRa is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate the use of the technology in a real environment. To investigate the feasibility of deploying LPWAN in remote water quality monitoring of rivers in the Amazon Region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both parts were implemented with Arduino and the LoRa chip SX1276. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern since, in the real application, the device must run without maintenance for long periods of time.
With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), different antenna heights, and distances were tuned to improve the connectivity quality, measured with RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. Distances exceeding 200 m soon proved difficult for establishing communication due to the dense foliage and high humidity. The optimal combinations of SF-CR values were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm; these are the best settings for this study so far. Rain and climate changes imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest
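The SF-CR trade-off has a direct cost in airtime, and hence battery life, for the maintenance-free transmitter described above. A sketch of the LoRa time-on-air calculation, following the formula given in the Semtech SX127x datasheet (the 20-byte payload, 125 kHz bandwidth, 8-symbol preamble, explicit header, and CRC-on settings are our assumptions, not the paper's configuration):

```python
import math

def time_on_air_ms(payload_bytes, sf, cr, bw_hz=125_000, preamble_syms=8,
                   explicit_header=True, crc=True, low_dr_opt=False):
    """LoRa packet time on air in ms (Semtech SX127x datasheet formula).
    cr is the coding-rate index: 1 means 4/5, ..., 4 means 4/8."""
    t_sym = (2 ** sf) / bw_hz * 1000.0                 # symbol duration, ms
    t_preamble = (preamble_syms + 4.25) * t_sym
    h = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * h
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return t_preamble + n_payload * t_sym

# Airtime for the two best settings above, assuming a 20-byte payload at 125 kHz
for sf in (8, 9):
    print(f"SF{sf}, CR 4/5: {time_on_air_ms(20, sf, cr=1):.1f} ms")
```

Under these assumptions, moving from SF8 to SF9 nearly doubles the airtime per packet, which is the energy cost weighed against the better link budget.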
Procedia PDF Downloads 89
1080 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors
Authors: Minal Jain, Vinayak Malhotra
Abstract:
Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with spreading of flames over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and on the cavity volume. Fire safety is an integral component of a successful rocket flight, failure of which may lead to overall failure of the rocket. This leads to an enormous loss of resources, viz., money, time, and labor. When the propellant is ignited, thrust is generated and the casing gets heated up. This heat adds to the propellant heat, and the casing, if not at the proper orientation, starts burning as well, leading to the whole rocket being completely destroyed. This has necessitated active research efforts emphasizing a comprehensive study of the inter-energy relations involved, for effective utilization of solid rocket motors in better space missions. The present work is focused on one of the major influential aspects of this detrimental burning: the presence of an external heat source in addition to a potential heat source which is already ignited. The study is motivated by the need to ensure better combustion and fire safety, presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the incense stick's position and location are varied to investigate the influence of the external heat source. Different configurations of the external heat source at varying separation distances are tested. Regression rates of the pilot thin solid fuel are noted in order to fundamentally understand the non-linear heat and mass transfer, which is the governing phenomenon, and the mechanism behind it.
Results till now indicate non-linear heat transfer, with the occurrence of a flaming transition at selected critical distances. With an increase in separation distance, the effect is noted to drop in a non-monotonic trend. The parametric study results are likely to provide useful physical insight about the governing physics and utilization in proper testing, validation, material selection, and designing of solid rocket motors with enhanced safety.
Keywords: combustion, propellant, regression, safety
Procedia PDF Downloads 161
1079 DNA Fragmentation and Apoptosis in Human Colorectal Cancer Cell Lines by Sesamum indicum Dried Seeds
Authors: Mohd Farooq Naqshbandi
Abstract:
The four fractions of the aqueous extract of sesame seeds (Sesamum indicum L.) were studied for in vitro DNA fragmentation, cell migration, and cellular apoptosis on SW480 and HTC116 human colorectal cancer cell lines. The seeds of Sesamum indicum were extracted with six solvents: methanol, ethanol, water, chloroform, acetonitrile, and hexane. The aqueous extract (IC₅₀ value 154 µg/ml) was found to be the most active in terms of cytotoxicity against the SW480 human colorectal cancer cell line. Further fractionation of this aqueous extract by flash chromatography gave four fractions. These four fractions were studied for anticancer and DNA binding properties. Cell viability was assessed by colorimetric assay (MTT). IC₅₀ values for these four fractions ranged from 137 to 548 µg/mL for the HTC116 cancer cell line and 141 to 402 µg/mL for the SW480 cancer cell line. The four fractions showed good anticancer and DNA binding properties. The DNA binding constants ranged from 10.4 × 10⁴ to 28.7 × 10⁴, showing good interactions with DNA. The DNA binding interactions were due to intercalative and π-π electron forces. The results indicate that the aqueous extract fractions of sesame inhibited cell migration of SW480 and HTC116 human colorectal cancer cell lines and induced DNA fragmentation and apoptosis. This was demonstrated by the low wound closure percentage in cells treated with these fractions as compared to the control (80%). Morphological features of the nuclei of cells treated with the fractions revealed chromatin compression, nuclear shrinkage, and apoptotic body formation, which indicate cell death by apoptosis. Flow cytometry of fraction-treated SW480 and HTC116 human colorectal cancer cells revealed death due to apoptosis.
The results of the study indicate that aqueous extract of sesame seeds may be used to treat colorectal cancer.
Keywords: Sesamum indicum, cell migration inhibition, apoptosis induction, anticancer activity, colorectal cancer
Procedia PDF Downloads 88
1078 Structure Clustering for Milestoning Applications of Complex Conformational Transitions
Authors: Amani Tahat, Serdal Kirmizialtin
Abstract:
Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS) and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational state into natural subgroups based on their similarity, an essential statistical methodology that is used for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. A local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for the cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model for studying conformational transitions; ii) the nucleic acid conformational space is high dimensional, and a diverse set of internal coordinates is necessary to describe the metastable states, posing a challenge in studying the conformational transitions. Herein, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way for a wide range of confused data conditions.
The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. A Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second. The performance of the neural network and of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules.
Keywords: milestoning, self organizing map, single linkage, structure clustering
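Single linkage merges, at each step, the two clusters whose closest members are nearest, which is what makes it prone to chaining when milestones lie close together. A minimal stdlib sketch of the idea (not the GROMACS implementation; the 1-D toy data stand in for a collective variable):

```python
def single_linkage(points, k, dist=lambda a, b: abs(a - b)):
    """Agglomerative single-linkage clustering down to k clusters."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the CLOSEST members of two clusters
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the nearest pair
    return clusters

# 1-D toy data standing in for a collective variable (e.g., a backbone dihedral)
groups = single_linkage([0.1, 0.2, 0.25, 5.0, 5.1, 9.7], k=3)
```

With real MD data, the distance function would be an RMSD or a metric over the internal coordinates mentioned above.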
Procedia PDF Downloads 224
1077 Effect of Loop Diameter, Height and Insulation on a High Temperature CO2 Based Natural Circulation Loop
Authors: S. Sadhu, M. Ramgopal, S. Bhattacharyya
Abstract:
Natural circulation loops (NCLs) are buoyancy-driven flow systems without any moving components. NCLs have vast applications in the geothermal, solar and nuclear power industries, where reliability and safety are of foremost concern. Due to certain favorable thermophysical properties, especially near supercritical regions, carbon dioxide can be considered as an ideal loop fluid in many applications. In the present work, a high temperature NCL that uses supercritical carbon dioxide as loop fluid is analysed. The effects of relevant design and operating variables on loop performance are studied. The system operating under steady state is modelled taking into account the axial conduction through loop fluid and loop wall, and heat transfer with the surroundings. The heat source is considered to be a heater with controlled heat flux, and the heat sink is modelled as an end heat exchanger with water as the external cold fluid. The governing equations for mass, momentum and energy conservation are normalized and are solved numerically using the finite volume method. Results are obtained for a loop pressure of 90 bar with the power input varying from 0.5 kW to 6.0 kW. The numerical results are validated against the experimental results reported in the literature in terms of the modified Grashof number (Grm) and Reynolds number (Re). Based on the results, buoyancy and friction dominated regions are identified for a given loop. Parametric analysis has been done to show the effect of loop diameter, loop height, ambient temperature and insulation. The results show that for the high temperature loop, heat loss to the surroundings affects the loop performance significantly. Hence this conjugate heat transfer between the loop and the surroundings has to be considered in the analysis of high temperature NCLs.
Keywords: conjugate heat transfer, heat loss, natural circulation loop, supercritical carbon dioxide
Procedia PDF Downloads 241
1076 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter
Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer
Abstract:
This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient-temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out during which the collecting electrode was grounded, having been disconnected from the static generator. The collection efficiency fell significantly, and for that reason, this approach was not pursued further. The collection efficiencies during ambient-temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of combustion-temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient-temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and ImageJ computer software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests. The average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient-temperature test runs apart from two, which collected airborne dust from within the laboratory.
Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient-temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than display, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at the exit when it was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
Keywords: electrostatic precipitators, air quality, particulate emissions, electron microscopy, ImageJ
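Both efficiency measures used above reduce to simple expressions: the ambient-temperature efficiency is a PM mass balance, and the theoretical performance of a tube-type ESP is commonly estimated with the Deutsch-Anderson equation. A sketch with hypothetical numbers (the masses, migration velocity, collecting area, and airflow below are illustrative, not the paper's data):

```python
import math

def mass_balance_eff(pm_in_g, pm_out_g):
    """Collection efficiency from dry PM mass entering vs. escaping."""
    return (pm_in_g - pm_out_g) / pm_in_g

def deutsch_anderson_eff(w_m_s, area_m2, flow_m3_s):
    """Classical ESP model: eta = 1 - exp(-w*A/Q),
    with w the effective particle migration velocity."""
    return 1.0 - math.exp(-w_m_s * area_m2 / flow_m3_s)

eta_measured = mass_balance_eff(10.00, 0.05)          # 10 g PM in, 0.05 g escapes
eta_theory = deutsch_anderson_eff(0.12, 0.3, 0.006)   # w=0.12 m/s, A=0.3 m2, Q=6 L/s
```

Comparing the two values for a given geometry is the kind of theory-versus-measurement check the abstract describes.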
Procedia PDF Downloads 253
1075 Systematic Identification of Noncoding Cancer Driver Somatic Mutations
Authors: Zohar Manber, Ran Elkon
Abstract:
Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. 
In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected, in seven different cancer types, numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements
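A recurrence ("hotspot") test of the kind described can be sketched as follows: under a background mutation rate, the SM count in an SRE across the cohort is approximately Poisson, and the p-value is the upper tail at the observed count. The SRE length, cohort size, background rate, and observed count below are hypothetical, and the real analysis uses both global and local background rates:

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): one-sided recurrence p-value."""
    term = math.exp(-lam)  # P(X = 0)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

def hotspot_pvalue(observed_sms, sre_length_bp, n_samples, bg_rate_per_bp):
    # Expected SM count in this SRE across the whole cohort under the background model
    lam = sre_length_bp * n_samples * bg_rate_per_bp
    return poisson_sf(observed_sms, lam)

# Hypothetical SRE: 5 kb of pooled enhancer sequence, 2,500 WGS samples,
# background 2e-6 SMs per bp per sample (expected 25), 60 SMs observed
pval = hotspot_pvalue(60, 5_000, 2_500, 2e-6)
```

Pooling all enhancers of a gene into one SRE increases the mutable length per test, which is what gives the combined-unit approach its extra statistical power.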
Procedia PDF Downloads 104
1074 To Compare the Visual Outcome, Safety and Efficacy of Phacoemulsification and Small-Incision Cataract Surgery (SICS) at CEITC, Bangladesh
Authors: Rajib Husain, Munirujzaman Osmani, Mohammad Shamsal Islam
Abstract:
Purpose: To compare the safety, efficacy and visual outcome of phacoemulsification vs. manual small-incision cataract surgery (SICS) for the treatment of cataract in Bangladesh. Objectives: 1. To assess the visual outcome after cataract surgery 2. To understand the post-operative complications and early rehabilitation 3. To identify which surgical procedure is more attractive to patients 4. To identify which surgical procedure involves fewer complications 5. To find out the socio-economic and demographic characteristics of the study patients. Setting: Chittagong Eye Infirmary and Training Complex, Chittagong, Bangladesh. Design: Retrospective, randomised comparison of 300 patients with visually significant cataracts. Method: The present study was designed as retrospective hospital-based research. The sample size was 300, the study period was from July 2012 to July 2013, and patients were assigned randomly to receive either phacoemulsification or manual small-incision cataract surgery (SICS). Preoperative and post-operative data were collected through a well-designed collection format. Three follow-ups were done: i) at discharge, ii) 1-3 weeks, and iii) 4-11 weeks post-operatively. All preoperative and surgical complications, uncorrected and best-corrected visual acuity (BCVA) and astigmatism were taken into consideration for comparison of outcomes. Result: Nearly 95% of patients were more than 40 years of age. About 52% of patients were female, and 48% were male. 52% (N=157) of patients came to have their first eye operated on, whereas 48% (N=143) returned to have their second eye operated on. Postoperatively, for phacoemulsification surgeries, five eyes (3.33%) developed corneal oedema with >10 Descemet's folds, and six eyes (4%) had corneal oedema with <10 Descemet's folds. For SICS surgeries, seven eyes (4.66%) developed corneal oedema with >10 Descemet's folds and eight eyes (5.33%) had corneal oedema with <10 Descemet's folds.
However, both the uncorrected and corrected (4-11 weeks) visual acuities were better in the eyes that had phacoemulsification (p=0.02 and p=0.03), and there was less astigmatism (p=0.001) at 4-11 weeks in the eyes that had phacoemulsification. At the final follow-up, best-corrected visual acuity (BCVA) was good in 95% of eyes (N=253), borderline in 3.10% (N=40), and poor in 1.6% (N=7). Individual surgeon outcomes were similar: 95% (BCVA) in SICS and 96% (BCVA) in phacoemulsification at the 4-11 weeks follow-up. Conclusion: The outcomes of cataract surgery by both phacoemulsification and SICS at CEITC were satisfactory according to WHO norms. Both phacoemulsification and manual small-incision cataract surgery (SICS) show excellent visual outcomes with low complication rates and good rehabilitation. Phacoemulsification is a significantly faster, modern technology-based surgical procedure for cataract treatment. Keywords: phacoemulsification, SICS, cataract, Bangladesh, visual outcome of SICS
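The per-group complication percentages above are consistent with an even split of the 300 patients into 150 eyes per surgical arm; a small sketch, assuming that split (implied but not stated explicitly in the abstract), reproduces the reported rates:

```python
# Sketch: reproduce the reported complication percentages, assuming an
# even randomisation of the 300 patients into 150 eyes per surgical arm
# (the split is implied, not stated, in the abstract).

def rate(events, group_size):
    """Percentage of eyes in a group affected, rounded to 2 decimals."""
    return round(100 * events / group_size, 2)

eyes_per_arm = 150  # assumed: 300 patients / 2 arms

# Phacoemulsification: 5 eyes with >10 Descemet's folds, 6 with <10
print(rate(5, eyes_per_arm))  # 3.33 -> matches the reported 3.33%
print(rate(6, eyes_per_arm))  # 4.0  -> matches the reported 4%

# SICS: 7 eyes with >10 Descemet's folds, 8 with <10
print(rate(7, eyes_per_arm))  # 4.67 -> abstract reports 4.66% (truncated)
print(rate(8, eyes_per_arm))  # 5.33 -> matches the reported 5.33%
```

Note that 7/150 rounds to 4.67%, so the abstract's 4.66% appears to be a truncated rather than rounded value.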
Procedia PDF Downloads 348
1073 The Development of Traffic Devices Using Natural Rubber in Thailand
Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern
Abstract:
Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC), the quality of Ribbed Smoked Sheet (RSS) is better. However, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are taken into consideration. For flexible guideposts, both RSS and DRC60% are used as materials, but for RFB, only RSS is used, due to the controlled performance tests. The objective of flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. Both devices function to protect road users and vehicles and to absorb impact forces from vehicles so as to reduce the severity of road accidents, mitigating motorists' injuries from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material performance properties, such as tensile strength, durability, and modulus of elasticity, are evaluated. In the laboratory, crash simulation, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial material compositions are mixed and tested in the laboratory. Tensile, compressive, and weathering (durability) tests follow ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts will be taken into consideration. The final step is to fabricate all materials and run a real test section in the field. In the RFB test, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests.
Vehicular crash testing of this kind is happening for the first time in Thailand, applying trial-and-error methods; for example, the road crash test standard was changed from NCHRP TL-3 (100 kph) to MASH 2016, since MASH 2016 improves on NCHRP in terms of the speed, types, and weight of vehicles and the angle of crash. In the MASH process, Test Level 6 (TL-6), composed of a 2,270 kg pickup truck, a speed of 100 kph, and a 25-degree crash angle, is selected. The final real-crash test will be done, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will leave the top ten ranking of road accidents in the world. Keywords: LRFD, load and resistance factor design; ASTM, American Society for Testing and Materials; NCHRP, National Cooperative Highway Research Program; MASH, Manual for Assessing Safety Hardware
Procedia PDF Downloads 128
1072 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number
Authors: Younes El Khchine
Abstract:
A parametric study has been conducted to analyse the flow around the S809 airfoil of a wind turbine in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh and using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various angles of attack were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and the aerodynamic performance of wind turbines. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil. The increase in Reynolds number leads to an accelerated transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer. This leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. The increase in the level of free-stream turbulence intensity leads to a decrease in separation bubble length and an increase in the lift coefficient while having negligible effects on the stall angle. When the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream to the leading edge of the airfoil, causing earlier laminar separation. Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number
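The chord Reynolds number used above follows from the standard definition Re = ρU∞c/μ; a minimal back-of-the-envelope sketch, assuming illustrative sea-level air properties and a 1 m chord (none of which are given in the abstract; only the target Re = 1×10⁵ is):

```python
# Sketch: chord Reynolds number Re = rho * U * c / mu for the S809 case.
# The air properties and chord length below are illustrative assumptions;
# only the target Re = 1e5 comes from the abstract.

rho = 1.225    # air density at sea level, kg/m^3 (assumed)
mu = 1.81e-5   # dynamic viscosity of air, Pa*s (assumed)
c = 1.0        # chord length, m (assumed)

def reynolds(u, rho=rho, c=c, mu=mu):
    """Chord Reynolds number for free-stream speed u (m/s)."""
    return rho * u * c / mu

# Free-stream speed needed to reach the study's Re = 1e5:
u_target = 1e5 * mu / (rho * c)
print(round(u_target, 3))         # ~1.478 m/s
print(round(reynolds(u_target)))  # 100000
```

The low required speed illustrates why Re = 1×10⁵ counts as a low-Reynolds-number regime for a full-scale chord.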
Procedia PDF Downloads 70
1071 Influence of Degassing on the Curing Behaviour and Void Occurrence Properties of Epoxy / Anhydride Resin System
Authors: Latha Krishnan, Andrew Cobley
Abstract:
Epoxy resin is widely used as a matrix for composites in aerospace, automotive, and electronic applications due to its outstanding mechanical properties. These properties are chiefly predetermined by the chemical structure of the prepolymer and the type of hardener, but they can also be varied by processing conditions such as prepolymer and hardener mixing, degassing, and curing conditions. In this research, the effect of degassing on the curing behaviour and void occurrence is experimentally evaluated for an epoxy/anhydride resin system. The epoxy prepolymer was mixed with an anhydride hardener and accelerator in appropriate quantities. In order to investigate the effect of degassing on the curing behaviour and void content of the resin, uncured resin samples were prepared using three different methods: 1) no degassing, 2) degassing of the prepolymer, and 3) degassing of the mixed solution of prepolymer and hardener with accelerator. The uncured resins were tested in a differential scanning calorimeter (DSC) to observe the changes in curing behaviour of the above three resin samples by analysing factors such as gel temperature, peak cure temperature, and heat of reaction/heat flow during curing. Additionally, the completely cured samples were tested in DSC to identify changes in the glass transition temperature (Tg) between the three samples. In order to evaluate the effect of degassing on the void content and morphology changes in the cured epoxy resin, the fractured surfaces of the cured epoxy resin were examined under a scanning electron microscope (SEM). In addition, the void content, void geometry, and void fraction were also investigated using an optical microscope and ImageJ (image analysis) software. It was found that degassing at different stages of resin mixing had significant effects on properties such as the glass transition temperature and the void content and void size of the epoxy/anhydride resin system.
For example, degassing (vacuum applied to the mixed resin) yielded a higher glass transition temperature (Tg) with lower void content. Keywords: anhydride epoxy, curing behaviour, degassing, void occurrence
Procedia PDF Downloads 217
1070 Effects of Gamma-Tocotrienol Supplementation on T-Regulatory Cells in Syngeneic Mouse Model of Breast Cancer
Authors: S. Subramaniam, J. S. A. Rao, P. Ramdas, K. R. Selvaduray, N. M. Han, M. K. Kutty, A. K. Radhakrishnan
Abstract:
The immune system is a complex system in which the immune cells have the capability to respond to a wide range of immune challenges, including cancer progression. However, in the event of cancer development, tumour cells trigger an immunosuppressive environment via activation of myeloid-derived suppressor cells and T regulatory (Treg) cells. Treg cells are a subset of CD4+ T lymphocytes known to have crucial roles in regulating immune homeostasis and promoting the establishment and maintenance of peripheral tolerance. Dysregulation of these mechanisms can lead to cancer progression and immune suppression. Recently, many studies have reported on the effects of natural bioactive compounds on immune responses against cancer. It is known that the tocotrienol-rich fraction, consisting of 70% tocotrienols and 30% α-tocopherol, is able to exhibit immunomodulatory as well as anti-cancer properties. Hence, this study was designed to evaluate the effects of gamma-tocotrienol (G-T3) supplementation on Treg cells in a syngeneic mouse model of breast cancer. In this study, female BALB/c mice were divided into two groups and fed either soy oil (vehicle) or gamma-tocotrienol (G-T3) for two weeks, followed by inoculation with tumour cells. All the mice continued to receive the same supplementation until day 49. The results showed a significant reduction in tumour volume and weight in G-T3-fed mice compared to vehicle-fed mice. Lung and liver histology showed reduced evidence of metastasis in tumour-bearing G-T3-fed mice. Besides that, flow cytometry analysis revealed that the T-helper cell population increased and the T-regulatory cell population was suppressed following G-T3 supplementation. Moreover, immunohistochemistry analysis showed a marked decrease in the expression of FOXP3 in the G-T3-fed tumour-bearing mice. In conclusion, G-T3 supplementation showed a good prognosis for breast cancer by enhancing the immune response in tumour-bearing mice.
Therefore, gamma-T3 can be used as an immunotherapy agent for the treatment of breast cancer. Keywords: breast cancer, gamma tocotrienol, immune suppression, supplement
Procedia PDF Downloads 222
1069 Organizational Inertia: A Control Mechanism for Organizational Creativity and Agility in a Disruptive Environment
Authors: Doddy T. P. Enggarsyah, Soebowo Musa
Abstract:
The Covid-19 pandemic changed business environments and spread economic contagion rapidly, as the stringent lockdowns and social distancing that were initially intended to cut off the spread of the virus instead cut off the flow of economies. With no existing experience or playbook for dealing with such a crisis, the prolonged pandemic can lead to bankruptcies, despite cases of companies that were not only able to survive but also to increase sales and create more jobs amid the economic crisis. This quantitative research study clarifies conflicting findings on whether organizational inertia is a better strategy to adopt during a disruptive environment. 316 respondents working in diverse firms across various industry types in Indonesia completed the survey, a response rate of 63.2%. Further, this study clarifies the roles of, and relationships between, organizational inertia, organizational creativity, organizational agility, and organizational resilience, which are potential determinants of firm performance in a disruptive environment. The findings confirm that a firm's organizational inertia sets up strong protection of the organization's fundamental orientation, which eventually confines the organization's ability to build adequately creative and adaptive responses. Such a fundamental orientation is built from path dependency along with past success and prolonged firm performance. Organizational inertia acts as a control mechanism to ensure the adequacy of the given responses. The term adequate is important, as being overly creative during a disruptive environment may be counterproductive, since it can burden firm performance. During a disruptive environment, organizations will limit creativity by focusing on creativity that supports resilience, and new technology adoption will be limited, since the costs of learning and implementation are perceived as greater than the potential gains.
The optimal path towards firm performance is gained through organizational resilience, as in a disruptive environment the survival of the organization takes precedence over firm performance. Keywords: disruptive environment, organizational agility, organizational creativity, organizational inertia, organizational resilience
Procedia PDF Downloads 112
1068 A Regional Analysis on Co-movement of Sovereign Credit Risk and Interbank Risks
Authors: Mehdi Janbaz
Abstract:
The global financial crisis and the credit crunch that followed magnified the importance of credit risk management and its crucial role in the stability of all financial sectors and of the system as a whole. Many believe that risks faced by the sovereign sector are highly interconnected with banking risks and are most likely to trigger and reinforce each other. This study aims to examine (1) the impact of banking and interbank risk factors on the sovereign credit risk of the Eurozone, and (2) how EU Credit Default Swap spread dynamics are affected by crude oil price fluctuations. The hypotheses are tested by employing fitting risk measures and a four-stage linear modeling approach. The sovereign senior 5-year Credit Default Swap (CDS) spreads are used as the core measure of credit risk. Monthly time-series data for the variables used in the study are gathered from the DataStream database for the period 2008-2019. First, a linear model tests the impact of regional macroeconomic and market-based factors (STOXX, VSTOXX, Oil, Sovereign Debt, and Slope) on the CDS spread dynamics. Second, the bank-specific factors, including the LIBOR-OIS spread (the difference between the Euro 3-month LIBOR rate and the Euro 3-month overnight index swap rate) and Euribor, are added to the most significant factors of the previous model. Third, the global financial factors, including EUR/USD foreign exchange volatility, the TED spread (the difference between the 3-month T-bill rate and the 3-month LIBOR rate in US dollars), and the Chicago Board Options Exchange (CBOE) Crude Oil Volatility Index (OVX), are added to the most significant factors of the first two models. Finally, a model is generated from a combination of the most significant factors of each variable set, in addition to a crisis dummy.
The findings show that (1) the explanatory power of LIBOR-OIS for the sovereign CDS spread of the Eurozone is very significant, and (2) there is a meaningful adverse co-movement between the crude oil price and the CDS price of the Eurozone. Surprisingly, adding the TED spread (the difference between the three-month Treasury bill rate and the three-month LIBOR rate in US dollars) alongside the LIBOR-OIS spread (the difference between the Euro 3M LIBOR and Euro 3M OIS) in the third and fourth models increased the predictive power of LIBOR-OIS. Based on the results, LIBOR-OIS, STOXX, the TED spread, Slope, the oil price, OVX, FX volatility, and Euribor are the determinants of CDS spread dynamics in the Eurozone. Moreover, the positive impact of the crisis period on the creditworthiness of the Eurozone is meaningful. Keywords: CDS, crude oil, interbank risk, LIBOR-OIS, OVX, sovereign credit risk, TED
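The staged linear modeling described above amounts to ordinary least squares on monthly series; a minimal sketch with synthetic data (the variable names mirror the abstract's factors, but the numbers are made up for illustration, not drawn from DataStream):

```python
# Sketch: staged OLS regression of CDS spreads on risk factors, in the
# spirit of the four-stage linear modeling described above. All data are
# synthetic; real inputs would be monthly series from DataStream.
import numpy as np

rng = np.random.default_rng(0)
n = 132  # monthly observations, roughly 2008-2019

# Synthetic stand-ins for the factors (STOXX, Oil, LIBOR-OIS, ...)
stoxx = rng.normal(size=n)
oil = rng.normal(size=n)
libor_ois = rng.normal(size=n)  # stage-2 bank-specific factor

# Synthetic CDS spread with a known linear structure plus noise:
# note the negative oil coefficient, mimicking the reported adverse
# co-movement between crude oil and CDS prices.
cds = 2.0 - 0.5 * oil + 1.5 * libor_ois + 0.3 * stoxx \
      + rng.normal(scale=0.1, size=n)

# OLS via least squares: solve X beta ~= cds
X = np.column_stack([np.ones(n), oil, libor_ois, stoxx])
beta, *_ = np.linalg.lstsq(X, cds, rcond=None)
print(np.round(beta, 2))  # ~[ 2.  -0.5  1.5  0.3]
```

A second-stage model in this spirit would simply append the next factor block as extra columns of `X` and re-fit.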
Procedia PDF Downloads 144
1067 Spatial Planning and Tourism Development with Sustainability Model of the Territorial Tourist with Land Use Approach
Authors: Mehrangiz Rezaee, Zabih Charrahi
Abstract:
In the last decade, with increasing tourism destinations and tourism growth, we are witnessing the widespread impacts of tourism on the economy, environment, and society. Tourism and its related economy are now undergoing a transformation, and as one of the key pillars of business economics, tourism plays a vital role in the world economy. Activities related to tourism, and the provision of services appropriate to it in an area, like many economic sectors, require the necessary context at their place of origin. Given the importance of the tourism industry and the tourism potential of Yazd province in Iran, a proper procedure for prioritizing different areas is necessary for effective and efficient planning. One of the most important goals of planning is foresight and the creation of balanced development in different geographical areas. This process requires an accurate study of the areas and their potential and actual capacities, as well as evaluation and understanding of the relationships between the indicators affecting the development of the region. At the global and regional level, the development of tourist resorts and the proper distribution of tourism destinations are needed to counter environmental impacts and risks. The main objective of this study is the sustainable development of suitable tourism areas. Given that tourism activities in different territorial areas require operational zoning, this study evaluates territorial tourism using concepts such as land use, suitability, and sustainable development. It is essential to understand the structure of tourism development and the spatial development of tourism using land use patterns, spatial planning, and sustainable development. Tourism spatial planning implements different approaches. However, the development of tourism, as well as the spatial development of tourism, is complex, since tourist activities can be carried out in different areas with different purposes.
Multipurpose areas are of great importance for tourism because they determine the flow of tourists. Therefore, by studying the development and determination of tourism suitability in relation to spatial development, this paper makes it possible to plan tourism spatial development by developing a model that describes the characteristics of tourism. The results of this research determine the suitability of multi-functional territorial tourism development in line with the spatial planning of tourism. Keywords: land use change, spatial planning, sustainability, territorial tourist, Yazd
Procedia PDF Downloads 183
1066 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile
Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz
Abstract:
Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami, which mainly affected the city of Coquimbo (71.33°W, 29.96°S). The inundation area comprised a beach, a damaged seawall, a damaged railway, a wetland, and an old neighborhood; therefore, local authorities started a reconstruction process immediately after the event. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables, such as safety, nature and recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability and maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids and a finest grid resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives, and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates, to reduce tsunami overtopping and control the return flow of a tsunami, was selected as the best solution. It was also observed that the wetlands are significantly restored to their former configuration; moreover, the dynamic behavior of the wetlands is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying the tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area. Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation
Procedia PDF Downloads 241
1065 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number
Authors: Younes El Khchine, Mohammed Sriti
Abstract:
A parametric study has been conducted to analyse the flow around the S809 airfoil of a wind turbine in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh and using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various angles of attack were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and the aerodynamic performance of the wind turbine. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil. The increase in Reynolds number leads to an accelerated transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer. This leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. The increase in the level of free-stream turbulence intensity leads to a decrease in separation bubble length and an increase in the lift coefficient while having negligible effects on the stall angle. When the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream to the leading edge of the airfoil, causing earlier laminar separation. Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number
Procedia PDF Downloads 85
1064 Investigation of FOXM1 Gene Expression in Breast Cancer and Its Relationship with miR-216b-5p Expression Level
Authors: Ramin Mehdiabadi, Neda Menbari, Mohammad Nazir Menbari
Abstract:
As a pressing public health concern, breast cancer stands as the predominant oncological diagnosis and principal cause of cancer-related mortality among women globally, accounting for 11.7% of new cancer incidences and 6.9% of cancer-related deaths. The annual figures indicate that approximately 230,480 women are diagnosed with breast cancer in the United States alone, with 39,520 succumbing to the disease. While developed economies have reported a deceleration in both incidence and mortality rates across various forms of cancer, including breast cancer, emerging and low-income economies manifest a contrary escalation, largely attributable to lifestyle-mediated risk factors such as tobacco usage, physical inactivity, and high caloric intake. Breast cancer is distinctly characterized by molecular heterogeneity, manifesting in specific subtypes delineated by biomarkers—Estrogen Receptors (ER), Progesterone Receptors (PR), and Human Epidermal Growth Factor Receptor 2 (HER2). These subtypes, comprising Luminal A, Luminal B, HER2-enriched, triple-negative/basal-like, and normal-like, necessitate nuanced, subtype-specific therapeutic regimens, thereby challenging the applicability of generalized treatment protocols. Within this molecular complexity, the transcription factor Forkhead Box M1 (FoxM1) has garnered attention as a significant driver of cellular proliferation, tumorigenesis, metastatic progression, and treatment resistance in a spectrum of human malignancies, including breast cancer. Concurrently, microRNAs (miRs), specifically miR-216b-5p, have been identified as post-transcriptional gene expression regulators and potential tumor suppressors. The overarching objective of this academic investigation is to explicate the multifaceted interrelationship between FoxM1 and miR-216b-5p across the disparate molecular subtypes of breast cancer. 
Employing a methodologically rigorous, interdisciplinary research design that incorporates cutting-edge molecular biology techniques, sophisticated bioinformatics analytics, and exhaustive meta-analyses of extant clinical data, this scholarly endeavor aims to unveil novel biomarker-specific therapeutic pathways. By doing so, this research is positioned to make a seminal contribution to the advancement of personalized, efficacious, and minimally toxic treatment paradigms, thus profoundly impacting global efforts to ameliorate the burden of breast cancer. Keywords: breast cancer, FoxM1, microRNAs, miR-216b-5p, gene expression
Procedia PDF Downloads 74
1063 Antimicrobial Activity of 2-Nitro-1-Propanol and Lauric Acid against Gram-Positive Bacteria
Authors: Robin Anderson, Elizabeth Latham, David Nisbet
Abstract:
Propagation and dissemination of antimicrobial-resistant and pathogenic microbes from spoiled silages and composts represent a serious public health threat to humans and animals. In the present study, the antimicrobial activity of the short-chain nitro-compound 2-nitro-1-propanol (9 mM), as well as the medium-chain fatty acid lauric acid and its glycerol monoester monolaurin (at 25 and 17 µmol/mL, respectively), was investigated against select pathogenic and multi-drug-resistant Gram-positive bacteria common to spoiled silages and composts. In an initial study, we found that the growth rates of a multi-resistant Enterococcus faecalis (expressing resistance against erythromycin, quinupristin/dalfopristin, and tetracycline) and Staphylococcus aureus strain 12600 (expressing resistance against erythromycin, linezolid, penicillin, quinupristin/dalfopristin, and vancomycin) were slowed by more than 78% (P < 0.05) by 2-nitro-1-propanol treatment during culture (n = 3/treatment) in anaerobically prepared ½-strength Brain Heart Infusion broth at 37 °C when compared to untreated controls (0.332 ± 0.04 and 0.108 ± 0.03 h⁻¹, respectively). The growth rate of 2-nitro-1-propanol-treated Listeria monocytogenes was also decreased by 96% (P < 0.05) when compared to untreated controls cultured similarly (0.171 ± 0.01 h⁻¹). Maximum optical densities measured at 600 nm were lower (P < 0.05) in 2-nitro-1-propanol-treated cultures (0.053 ± 0.01, 0.205 ± 0.02, and 0.041 ± 0.01) than in untreated controls (0.483 ± 0.02, 0.523 ± 0.01, and 0.427 ± 0.01) for E. faecalis, S. aureus, and L. monocytogenes, respectively. When tested against mixed microbial populations during anaerobic 24 h incubation of spoiled silage, significant effects of treatment with 1 mg 2-nitro-1-propanol/g (approximately 9.5 µmol/g) or 5 mg lauric acid/g (approximately 25 µmol/g) on populations of wildtype Enterococcus and Listeria were not observed.
Mixed populations treated with 5 mg monolaurin/g (approximately 17 µmol/g) had lower (P < 0.05) viable cell counts of wildtype enterococci than untreated controls after 6 h of incubation (2.87 ± 1.03 versus 5.20 ± 0.25 log₁₀ colony-forming units/g, respectively), but otherwise significant effects of monolaurin were not observed. These results reveal differential susceptibility of multi-drug-resistant enterococci and staphylococci, as well as L. monocytogenes, to the inhibitory activity of 2-nitro-1-propanol and of the medium-chain fatty acid lauric acid and its glycerol monoester monolaurin. Ultimately, these results may lead to improved treatment technologies to preserve the microbiological safety of silages and composts. Keywords: 2-nitro-1-propanol, lauric acid, monolaurin, Gram-positive bacteria
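The growth-rate comparisons above reduce to simple arithmetic on specific growth rates (μ, in h⁻¹), where μ = ln(OD₂/OD₁)/Δt for exponential growth; a small sketch, in which the treated-culture rate is a hypothetical value chosen only to illustrate the reported ">78% slower" result (only the control rates come from the abstract):

```python
# Sketch: specific growth rate arithmetic behind the reported comparisons.
# Control rates (0.332, 0.108 h^-1) are from the abstract; the treated
# rate below is hypothetical, for illustration only.
import math

def growth_rate(od_start, od_end, hours):
    """Specific growth rate mu = ln(OD2/OD1) / dt for exponential growth."""
    return math.log(od_end / od_start) / hours

def percent_slower(control_mu, treated_mu):
    """Percentage reduction of the treated rate relative to control."""
    return 100 * (control_mu - treated_mu) / control_mu

mu_efaecalis_control = 0.332  # reported control rate, h^-1
mu_treated = 0.070            # hypothetical treated rate, h^-1

print(round(percent_slower(mu_efaecalis_control, mu_treated), 1))
# 78.9 -> consistent with the reported ">78% slower"

# Example: a culture growing from OD600 0.05 to 0.40 over 6 h
print(round(growth_rate(0.05, 0.40, 6), 3))  # 0.347 h^-1
```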
Procedia PDF Downloads 109
1062 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case Study in Lima
Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano
Abstract:
In the big urbanized cities of Latin America, motor vehicles have priority over non-motorized vehicles and pedestrians. This is an important problem that affects people's health and quality of life; the lack of inclusion of pedestrians makes it difficult for them to move smoothly and safely, since the city has been planned for the transit of motor vehicles. Faced with the new trend towards sustainable and economical transport, the city is forced to develop infrastructure that incorporates pedestrians and users of non-motorized vehicles into the transport system. The present research studies the influence of non-motorized vehicles on an avenue and the optimization of the traffic light cycle, based on simulation in Synchro software, to improve the flow of non-motorized vehicles. The evaluation is of the microscopic type; for this reason, field data were collected on vehicular, pedestrian, and non-motorized-vehicle demand. Speed and travel time values are used to represent the current scenario, which contains the existing problem. These data allow the creation of a microsimulation model in Vissim software, which is then calibrated and validated so that its behavior is similar to reality. The results of this model are compared with the efficiency parameters of the proposed model; these parameters are the queue length, the travel speed, and, mainly, the travel times of the users at this intersection. The results reflect a 27% reduction in travel time, that is, an improvement of the proposed model over the current one for this great avenue. The queue length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in the quality of life of users. Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time
Procedia PDF Downloads 176
1061 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas
Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders
Abstract:
A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas respectively. Helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest gas, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs. The lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature. The larger cyclone can be assumed to achieve slightly higher efficiencies at elevated temperatures. However, both design methods led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible. At higher temperatures, however, these general tendencies are expected to be amplified, so that the difference between the two design methods will become more obvious.
Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.
Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing
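As a hedged illustration of the kind of empirical cyclone sizing that classic design methods draw on (not the study's exact design equations), the sketch below computes the Lapple cut-point diameter and fractional collection efficiency; the geometry and gas properties are assumed values, not taken from the paper:

```python
import math

def lapple_cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g):
    """Cut-point diameter d50 [m]: the particle size collected with 50% efficiency."""
    return math.sqrt(9 * mu * inlet_width /
                     (2 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

def fractional_efficiency(d_particle, d50):
    """Lapple fractional collection efficiency for a particle of diameter d_particle."""
    return 1.0 / (1.0 + (d50 / d_particle) ** 2)

# Assumed illustrative values: air at ~20 C, 10 um fly ash, a small test-unit geometry
mu = 1.8e-5  # gas viscosity [Pa*s]
d50 = lapple_cut_diameter(mu, inlet_width=0.05, n_turns=6,
                          v_inlet=15.0, rho_p=2000.0, rho_g=1.2)
eta = fractional_efficiency(10e-6, d50)  # efficiency for a 10 um particle
```

With these assumed inputs the 10 μm single-particle efficiency lands in the low 90% range, of the same order as the measured averages reported above.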
Procedia PDF Downloads 214
1060 Reduction of Defects Using Seven Quality Control Tools for Productivity Improvement at Automobile Company
Authors: Abdul Sattar Jamali, Imdad Ali Memon, Maqsood Ahmed Memon
Abstract:
Production quality approaching zero defects is an objective of every manufacturing and service organization. To maintain and improve quality by reducing defects, organizations use statistical tools. Among the many statistical tools available for assessing quality, the traditional seven quality control (7QC) tools are widely used in manufacturing and in the automobile industry. The 7QC tools were therefore applied at an automobile company in Pakistan. A preliminary survey was conducted for the implementation of the 7QC tools on the assembly line, and two inspection points were chosen for data collection: the chassis line and the trim line. Defect data were collected at both lines with the aim of reducing defects and thereby improving productivity. Each of the 7QC tools showed its benefits in the results. Flow charts were developed for a better understanding of the inspection points used for data collection. Check sheets supported the collection of defect data. Histograms represented the severity levels of the defects. Pareto charts showed the cumulative effect of the defects. Cause-and-effect diagrams were developed to find the root causes of each defect. Scatter diagrams showed whether defects were increasing or decreasing. P-control charts were developed to flag out-of-control points beyond the limits for corrective action. The successful implementation of the 7QC tools at the inspection points led to a considerable reduction in defect levels: on the chassis line, defects fell from 132 to 13, a reduction of 90%; on the trim line, defects fell from 157 to 28, a reduction of 82%.
As the automobile company exercised only a few of the 7QC tools, it is not yet reaping their full benefits. It is therefore suggested that the company establish a mechanism for applying the 7QC tools in every section.
Keywords: check sheet, cause and effect diagram, control chart, histogram
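As an illustration of the p-control chart step described above, the sketch below computes the standard 3-sigma limits for the fraction defective and flags out-of-control samples; the daily defect counts and sample size are hypothetical, not the company's data:

```python
import math

def p_chart_limits(defectives, sample_size):
    """Center line and 3-sigma control limits for a p-chart (fraction defective)."""
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    ucl = min(1.0, p_bar + 3 * sigma)   # upper control limit, capped at 1
    lcl = max(0.0, p_bar - 3 * sigma)   # lower control limit, floored at 0
    return p_bar, lcl, ucl

def out_of_control(defectives, sample_size):
    """Indices of samples whose defective fraction falls outside the limits."""
    p_bar, lcl, ucl = p_chart_limits(defectives, sample_size)
    return [i for i, d in enumerate(defectives)
            if not lcl <= d / sample_size <= ucl]

# Hypothetical defect counts from 100 inspected units per day on one line
counts = [4, 6, 5, 3, 7, 5, 14, 4, 6, 5]
flagged = out_of_control(counts, 100)  # day 7 (index 6) exceeds the UCL
```

Points flagged this way are the "out of control points beyond the limits" that trigger corrective action at the inspection point.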
Procedia PDF Downloads 326
1059 Exploring the Differences between Self-Harming and Suicidal Behaviour in Women with Complex Mental Health Needs
Authors: Sophie Oakes-Rogers, Di Bailey, Karen Slade
Abstract:
Female offenders are a uniquely vulnerable group at high risk of suicide. Whilst the prevention of self-harm and suicide remains a key global priority, we need to better understand the relationship between these challenging behaviours, which constitute a pressing problem, particularly in environments designed to prioritise safety and security. Method choice is unlikely to be random; it is instead influenced by a range of cultural, social, psychological and environmental factors, which change over time and between countries. A key aspect of self-harm and suicide in women receiving forensic care is the lack of free access to methods. At a time when self-harm and suicide rates continue to rise internationally, understanding the role of these influencing factors and the impact of current suicide prevention strategies on the use of near-lethal methods is crucial. This poster presentation will present findings from 25 interviews and 3 focus groups, conducted using a Participatory Action Research approach, exploring the differences between self-harming and suicidal behaviour. A key element of this research was drawing on the lived experiences of women receiving forensic care from one forensic pathway in the UK, and of the staff who care for them, to discuss the role of near-lethal self-harm (NLSH). The findings and suggestions from the lived accounts of the women and staff will inform a draft assessment tool that better assesses the risk of suicide based on the lethality of methods. This tool will be the first of its kind to specifically capture the needs of women receiving forensic services. Preliminary findings indicate that women engage in NLSH for two key reasons, determined by their history of self-harm. Women who have a history of superficial, non-life-threatening self-harm appear to engage in NLSH in response to a significant life event such as family bereavement or sentencing.
For these women, suicide appears to be a realistic option to overcome their distress. This, however, differs from women who appear to have a lifetime history of NLSH, who engage in such behaviour in a bid to overcome the grief and shame associated with historical abuse. NLSH in these women reflects a lifetime of suicidality and indicates that they pose the greatest risk of completed suicide. Findings also indicate differences in method selection between forensic provisions. Restriction of means appears to play a role in method selection, and findings suggest it causes method substitution. Implications will be discussed relating to the screening of female forensic patients and improvements to current suicide prevention strategies.
Keywords: forensic mental health, method substitution, restriction of means, suicide
Procedia PDF Downloads 178
1058 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method
Authors: Rui Wu
Abstract:
In the volatile modern manufacturing environment, new orders arrive randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, and each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, which makes them infeasible for dynamic scheduling problems. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem.
The simulation results show that the proposed framework achieves reasonable performance and time efficiency.
Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning
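A minimal sketch of the rule-selection step at a single work centre, assuming three common dispatching rules (shortest processing time, earliest due date, minimum slack) and simple job records; `rule_index` stands in for the output of the double-layer deep Q-learning policy, which is not reproduced here:

```python
def spt(jobs, now):
    """Shortest Processing Time: pick the job with the smallest processing time."""
    return min(jobs, key=lambda j: j["proc_time"])

def edd(jobs, now):
    """Earliest Due Date: pick the job due soonest."""
    return min(jobs, key=lambda j: j["due"])

def mst(jobs, now):
    """Minimum Slack Time: slack = due date - current time - processing time."""
    return min(jobs, key=lambda j: j["due"] - now - j["proc_time"])

RULES = [spt, edd, mst]

def dispatch(jobs, now, rule_index):
    """Select the next job from the waiting buffer using the chosen rule.
    In the proposed framework, rule_index would come from the learned policy."""
    return RULES[rule_index](jobs, now)

# Hypothetical waiting buffer at one work centre's decision point
buffer = [{"id": 1, "proc_time": 5, "due": 20},
          {"id": 2, "proc_time": 3, "due": 8},
          {"id": 3, "proc_time": 7, "due": 9}]
chosen = dispatch(buffer, now=0, rule_index=1)  # EDD selects job 2
```

Note how the same buffer yields different choices under different rules (minimum slack would pick job 3), which is exactly the decision the learned policy arbitrates at each decision point.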
Procedia PDF Downloads 108
1057 Socioeconomic Disparities in the Prevalence of Obesity in Adults with Diabetes in Israel
Authors: Yael Wolff Sagy, Yiska Loewenberg Weisband, Vered Kaufman Shriqui, Michal Krieger, Arie Ben Yehuda, Ronit Calderon Margalit
Abstract:
Background: Obesity is both a risk factor for and a common comorbidity of diabetes. Obesity impedes the achievement of glycemic control and enhances the damage caused by hyperglycemia to blood vessels, thus increasing diabetes-related complications. This study assessed the prevalence of obesity and morbid obesity among Israeli adults with diabetes, and estimated disparities associated with sex and socioeconomic position (SEP). Methods: A cross-sectional study was conducted in the setting of the Israeli National Program for Quality Indicators in Community Healthcare. Data on the entire Israeli population are retrieved from the electronic medical records of the four health maintenance organizations (HMOs). The study population included all Israeli patients with diabetes aged 20-64 with a documented body mass index (BMI) in 2016 (N=180,451). Diabetes was defined as the existence of one or more of the following criteria: (a) plasma glucose level >200 mg% in at least two tests conducted at least one month apart in the previous year; (b) HbA1c>6.5% at least once in the previous year; (c) at least three prescriptions of diabetes medications dispensed during the previous year. Two measures were included: the prevalence of obesity (defined as last BMI≥30 kg/m2 and <35 kg/m2) and the prevalence of morbid obesity (defined as last BMI≥35 kg/m2) in individuals aged 20-64 with diabetes. The cut-off value for morbid obesity was set in accordance with the eligibility criteria for bariatric surgery in patients with diabetes. Data were collected by the HMOs and aggregated by age, sex and SEP. SEP was based on the ranking of statistical areas by the Israeli Central Bureau of Statistics and divided into 4 categories, from 1 (lowest) to 4 (highest). Results: BMI documentation among adults with diabetes was 84.9% in 2016. The prevalence of obesity in the study population was 30.5%.
Although the overall rate was similar in both sexes (30.8% in females, 30.3% in males), SEP disparities were stronger in females (32.7% in SEP level 1 vs. 27.7% in SEP level 4; 18.1% relative difference) than in males (30.6% in SEP level 1 vs. 29.3% in SEP level 4; 4.4% relative difference). The overall prevalence of morbid obesity in this population was 20.8% in 2016. The rate among females was almost double that among males (28.1% and 14.6%, respectively). In both sexes, the prevalence of morbid obesity was strongly associated with lower SEP. However, in females, the disparities between SEP levels were much stronger (34.3% in SEP level 1 vs. 18.7% in SEP level 4; 83.4% relative difference) than in males (15.7% in SEP level 1 vs. 12.3% in SEP level 4; 27.6% relative difference). Conclusions: The overall prevalence of BMI≥30 kg/m2 among adults with diabetes in Israel exceeds 50%, and the prevalence of morbid obesity suggests that 20% meet the BMI criteria for bariatric surgery. Prevalence rates show major SEP and sex disparities, with especially strong SEP disparities in morbid obesity among females. These findings highlight the need for greater consideration of different population groups when implementing interventions.
Keywords: diabetes, health disparities, health policy, obesity, socio-economic position
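The relative differences quoted in the results follow directly from the reported SEP-level prevalences; a minimal check (the function name is ours, the percentages are the abstract's):

```python
def relative_difference(p_low_sep, p_high_sep):
    """Relative difference (%) in prevalence between lowest and highest SEP levels."""
    return 100 * (p_low_sep - p_high_sep) / p_high_sep

# Obesity prevalence reported in the abstract, SEP level 1 vs. SEP level 4
gap_females = relative_difference(32.7, 27.7)  # ~18.1%, as reported
gap_males = relative_difference(30.6, 29.3)    # ~4.4%, as reported
```

The same calculation reproduces the morbid-obesity gaps (34.3 vs. 18.7 gives ~83.4% for females; 15.7 vs. 12.3 gives ~27.6% for males).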
Procedia PDF Downloads 215