Search results for: co-processing of signaling parameters
741 Application of Nuclear Magnetic Resonance (1H-NMR) in the Analysis of Catalytic Aquathermolysis: Colombian Heavy Oil Case
Authors: Paola Leon, Hugo Garcia, Adan Leon, Samuel Munoz
Abstract:
The enhanced oil recovery by steam injection was long considered a process that only generated physical recovery mechanisms. However, there is evidence of the occurrence of a series of chemical reactions, called aquathermolysis, which generate hydrogen sulfide, carbon dioxide, methane, and lower-molecular-weight hydrocarbons. These reactions can be favored by the addition of a catalyst during steam injection; in this way, it is possible to achieve in situ upgrading of the original oil by increasing the production of lower-molecular-weight molecules. This additional effect could increase the oil recovery factor and reduce costs in the transport and refining stages. Therefore, this research focused on the experimental evaluation of catalytic aquathermolysis on a Colombian heavy oil of 12.8°API. The effects of three different catalysts, reaction time, and temperature were evaluated in a batch microreactor. The changes in the Colombian heavy oil were quantified through proton nuclear magnetic resonance (1H-NMR). The interpretation of relaxation times and absorption intensities allowed the distribution of functional groups in the base oil and the upgraded oils to be identified. Additionally, the average number of aliphatic carbons in alkyl chains, the number of substituted rings, and the aromaticity factor were established as average structural parameters in order to simplify the compositional analysis of the samples. The first experimental stage proved that each catalyst develops a different reaction mechanism. The aromaticity factor follows the order of the salts used: Mo > Fe > Ni. However, the upgraded oil obtained with iron naphthenate tends to form a higher content of mono-aromatic and a lower content of poly-aromatic compounds. On the other hand, the results obtained from the second phase of experiments suggest that the upgraded oils have a smaller difference in the length of alkyl chains in the range of 240°C to 270°C. This parameter has lower values at 300°C, which indicates that the alkylation or cleavage reactions of alkyl chains govern at higher reaction temperatures. The presence of condensation reactions is supported by the behavior of the aromaticity factor and the production of bridge carbons between aromatic rings (RCH₂). Finally, a greater dispersion is observed in the aliphatic hydrogens, which indicates that the alkyl chains have a greater reactivity compared to the aromatic structures.
Keywords: catalyst, upgrading, aquathermolysis, steam
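The exact expressions behind the average structural parameters are not given in the abstract; a commonly used, simplified Brown-Ladner-type formulation from the 1H-NMR hydrogen distribution is sketched below as a reference, where C/H is the atomic carbon-to-hydrogen ratio, H_al is the aliphatic hydrogen fraction, and the assumed alkyl H/C ratio of 2 is an assumption of this sketch rather than a value from the study.

```latex
% Hedged sketch: simplified Brown-Ladner-type average structural parameters.
% f_a : aromaticity factor, n : average alkyl chain length,
% H_alpha, H_beta, H_gamma : normalized 1H-NMR integral fractions.
\begin{align}
  f_a &\approx \frac{\dfrac{C}{H} - \dfrac{H_{al}}{2}}{\dfrac{C}{H}},
      \qquad H_{al} = H_{\alpha} + H_{\beta} + H_{\gamma} \\
  n   &\approx \frac{H_{\alpha} + H_{\beta} + H_{\gamma}}{H_{\alpha}}
\end{align}
```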
Procedia PDF Downloads 110
740 Xylanase Impact beyond Performance: A Prebiotic Approach in Laying Hens
Authors: Veerle Van Hoeck, Ingrid Somers, Dany Morisset
Abstract:
Anti-nutritional factors such as non-starch polysaccharides (NSP) are present in viscous cereals used to feed poultry. Therefore, exogenous carbohydrases are commonly added to monogastric feed to degrade these NSP. Our hypothesis is that xylanase not only improves laying hen performance and digestibility but also induces a significant shift in microbial composition within the intestinal tract and can thereby cause a prebiotic effect. In this context, a better understanding of whether and how the chicken gut flora can be modulated by xylanase is needed. To do so, in the present laying hen study, the effects of dietary supplementation of xylanase on performance, digestibility, and the cecal microbiome were evaluated. A total of 96 HiSex laying hens were used in this experiment (3 diets and 16 replicates of 2 hens). Xylanase was added to the diets at concentrations of 0, 45,000 (15 g/t Xygest™ HT), and 90,000 U/kg (30 g/t Xygest HT). The diets were based on wheat (~55%), soybean, and sunflower meal. The lowest dosage, 45,000 U/kg, significantly increased average egg weight and improved feed efficiency compared to the control treatment (p < 0.05). Egg quality parameters were significantly improved in the experiment in response to the xylanase addition. For example, during the last 28 days of the trial, the 45,000 U/kg and the 90,000 U/kg treatments exhibited an increase in Haugh units and albumen heights (p < 0.05). Compared with the control, organic matter digestibility and N retention were markedly improved in the 45,000 U/kg treatment group, which implies better nutrient digestibility at this lowest recommended dosage compared to the control (p < 0.05). Furthermore, gross energy and crude fat digestibility were improved significantly for birds fed the 90,000 U/kg diet compared to the control. Importantly, 16S rRNA gene analysis revealed that xylanase at the 45,000 U/kg dosage can exert a prebiotic effect. This conclusion was drawn by studying the sequence variation in the 16S rRNA gene in order to characterize the diverse microbial communities of the cecal content. A significant increase in beneficial bacteria (Lactobacillus spp. and Enterococcus casseliflavus) was documented when adding 45,000 U/kg xylanase to the diet of laying hens. In conclusion, dietary supplementation of xylanase, even at the lowest dose (45,000 U/kg), significantly improved laying hen performance and digestibility. Furthermore, it is generally accepted that a proper balance between the numbers of beneficial and pathogenic bacteria in the intestine is vital for the host. It appears that the xylanase enzyme is able to modulate the laying hen microbiome beneficially and thus exerts a prebiotic effect. This microbiome plasticity in response to xylanase provides an attractive target for stimulating intestinal health.
Keywords: laying hen, prebiotic, Xygest™ HT, xylanase
Procedia PDF Downloads 128
739 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels
Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi
Abstract:
Introduction: Exercise has contradictory effects on the immune system. Glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients; it serves as a fuel source and as a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, alone and combined with glutamine supplementation, on the levels of cortisol and immunoglobulins in untrained young men. The literature shows that physical training can increase cytokines in the athlete's body; glutamine may counteract the negative effects of resistance training on immune function and the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was applied for 4 weeks, with glutamine supplementation at 0.3 g/kg/day after practice. The resistance-training program consisted of eight exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl, and triceps press down) four times per week. Participants performed 3 sets of 10 repetitions at 60–75% of 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol, and immunoglobulin levels (IgA, IgG, IgM) were evaluated pre- and post-test. Results: The results showed that four weeks of resistance training, with and without glutamine, caused a significant increase in body weight and BMI and a significant decrease (p < 0.001) in body fat percentage. VO2max also increased in both the exercise group (p < 0.05) and the exercise-with-glutamine group (p < 0.001); likewise, a significant reduction in IgG (p < 0.05) was observed in both groups. However, no significant difference was observed in the levels of cortisol, IgA, or IgM in any of the groups, no significant change was observed in any parameter in the control group, and no significant difference was observed between the groups. Discussion: The alterations in the hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. In this study, resistance training had destructive effects on the immune system, and glutamine supplementation could not neutralize the damaging effects of power exercise on the immune system.
Keywords: glutamine, resistance training, immunoglobulins, cortisol
Procedia PDF Downloads 479
738 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
While performing verification analyses under the static and dynamic loads that composite structures used in aviation are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. There are many companies in the world that perform these tests in accordance with aviation standards, but the test costs are very high. In addition, due to the necessity of producing coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass fiber reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain the exact bearing strength value. The finite element analysis was carried out with ANSYS. Since a physical break cannot occur in analysis studies carried out in the virtual environment, a hypothetical break is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic approach. There are various theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage and the resulting force drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters such as pre-stress, use of a bushing, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions on the bearing strength of the composite structure were investigated.
Keywords: puck, finite element, bolted joint, composite
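The property-reduction approach described above can be illustrated with a deliberately simplified sketch of a progressive-damage (ply-discount) loop; the one-dimensional stiffness, the placeholder strength values, and the reading of the 10% coefficient as a residual stiffness fraction are assumptions for illustration only, since the actual study used a three-dimensional ANSYS model with the Puck criterion.

```python
# Hedged sketch of a progressive-damage (ply-discount) loop: load is increased
# step by step, the stiffness is knocked down to a residual fraction after the
# first (matrix-type) failure, and the loop stops at final (fibre-type) failure.
# All numerical values are illustrative assumptions, not the ANSYS/Puck setup.

def progressive_damage(e_modulus: float, matrix_strength: float,
                       fibre_strength: float, max_load: float = 1.0e6,
                       load_step: float = 1.0e3, residual_fraction: float = 0.10):
    area = 1.0                     # unit cross-section, purely illustrative
    degraded = False
    history = []                   # (load, strain) pairs up to final failure
    load = 0.0
    while load < max_load:
        load += load_step
        stress = load / area
        if not degraded and stress >= matrix_strength:
            e_modulus *= residual_fraction     # knock-down after first damage
            degraded = True
        if stress >= fibre_strength:
            break                              # final failure reached
        history.append((load, stress / e_modulus))
    return history, degraded

if __name__ == "__main__":
    curve, damaged = progressive_damage(e_modulus=45e9,
                                        matrix_strength=4.0e5,
                                        fibre_strength=8.0e5)
    print(f"damage occurred: {damaged}, load steps recorded: {len(curve)}")
```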
Procedia PDF Downloads 102
737 Study of Open Spaces in Urban Residential Clusters in India
Authors: Renuka G. Oka
Abstract:
From chowks to streets to verandahs to courtyards, residential open spaces hold a very significant place in traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with their attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, multi-functionality, etc., reflect and respond to the patterns of human interactions. Also, these spaces tend to be quite well utilized. On the other hand, it is a common sight to see an imbalanced utilization of open spaces in newly or recently planned residential clusters. This may be due to a lack of activity generators around, wrong locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity of a systematic study of current residential open spaces. The exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wider sense to incorporate all the un-built fabric around. They thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are studied. A threefold inquiry is attempted in this direction. The first relates to identifying and determining the effects of various physical functions like space organization, size, hierarchy, thermal and optical comfort, etc. on the performance of residential open spaces. The second part sets out to understand socio-cultural variations in values, lifestyle, and beliefs which determine activity choices and behavioral preferences of users for the respective residential open spaces. The third inquiry further observes the application of these research findings to the design process to derive meaningful and qualitative design advice. However, the study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe into these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study. This framework is supported by an in-depth search of the available literature. The findings are worked into design solutions which integrate the open space systems with the overall design process for residential clusters. The open spaces in residential areas present great complexities both in terms of their use patterns and the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.
Keywords: open spaces, physical and social determinants, residential clusters, use patterns
Procedia PDF Downloads 148
736 Assessment of Soil Quality Indicators in Rice Soils Under Rainfed Ecosystem
Authors: R. Kaleeswari
Abstract:
An investigation was carried out to assess the soil biological quality parameters in rice soils under rainfed conditions, to compare soil quality indexing methods, viz., principal component analysis, minimum data set, and indicator scoring method, and to develop soil quality indices for formulating soil and crop management strategies. Soil samples were collected and analyzed for soil biological properties by adopting standard procedures. Biological indicators were determined for soil quality assessment, viz., microbial biomass carbon and nitrogen (MBC and MBN), potentially mineralizable nitrogen (PMN), soil respiration, and dehydrogenase activity. Among the methods of rice cultivation, organic nutrition, integrated nutrient management (INM), and the system of rice intensification (SRI) registered higher values of MBC, MBN, and PMN. Mechanical and conventional rice cultivation registered lower values of the biological quality indicators. Organic nutrient management and INM enhanced the soil respiration rate. SRI and aerobic rice cultivation methods increased the rate of soil respiration, while conventional and mechanical rice farming lowered the soil respiration rate. Dehydrogenase activity (DHA) was registered to be higher in soils under organic nutrition and INM. SRI and aerobic rice cultivation enhanced the DHA, while conventional and mechanical rice cultivation methods reduced DHA. The microbial biomass carbon (MBC) of the rice soils varied from 65 to 244 mg kg-1. Among the nutrient management practices, INM registered the highest microbial biomass carbon of 285 mg kg-1. The potentially mineralizable N content of the rice soils varied from 20.3 to 56.8 mg kg-1. Aerobic rice farming registered the highest potentially mineralizable N of 78.9 mg kg-1. The soil respiration rate of the rice soils varied from 60 to 125 µg CO2 g-1. Among the nutrient management practices, INM registered the highest soil respiration rate of 129 µg CO2 g-1. The dehydrogenase activity of the rice soils varied from 38.3 to 135.3 µg TPF g-1 day-1. The SRI method of rice cultivation registered the highest dehydrogenase activity of 160.2 µg TPF g-1 day-1. Soil variables from each PC were considered for the minimum soil data set (MDS). Principal component analysis (PCA) was used to select the representative soil quality indicators. In intensive rice-cultivating regions, soil quality indicators were selected based on factor loading values and contribution percentage values using PCA. Variables having a significant difference within production systems were used for the preparation of the minimum data set (MDS).
Keywords: soil quality, rice, biological properties, PCA analysis
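The selection of the minimum data set from PCA results is described only in outline; the sketch below shows one common way such a selection can be implemented with scikit-learn, where the indicator values, the eigenvalue >= 1 retention rule, and the "within 10% of the highest loading" cut-off are illustrative assumptions rather than values reported in the study.

```python
# Hedged sketch: selecting a minimum data set (MDS) of soil quality indicators
# from PCA factor loadings. Retention rules (eigenvalue >= 1, loadings within
# 10% of the PC maximum) are common conventions, assumed for illustration.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def select_mds(soil_data: pd.DataFrame) -> set:
    X = StandardScaler().fit_transform(soil_data.values)
    pca = PCA().fit(X)
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    mds = set()
    for pc in range(loadings.shape[1]):
        if pca.explained_variance_[pc] < 1.0:    # keep PCs with eigenvalue >= 1
            continue
        abs_load = np.abs(loadings[:, pc])
        cutoff = 0.9 * abs_load.max()            # within 10% of the highest loading
        mds.update(soil_data.columns[abs_load >= cutoff])
    return mds

# Example with hypothetical indicator values
data = pd.DataFrame({
    "MBC": [244, 180, 65, 150], "MBN": [30, 25, 12, 22],
    "PMN": [56.8, 40.1, 20.3, 35.0], "respiration": [125, 90, 60, 95],
    "DHA": [135.3, 90.0, 38.3, 70.2],
})
print(select_mds(data))
```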
Procedia PDF Downloads 110
735 Role of Zinc Administration in Improvement of Faltering Growth in Egyptian Children at Risk of Environmental Enteric Dysfunction
Authors: Ghada Mahmoud El Kassas, Maged Atta El Wakeel
Abstract:
Background: Environmental enteric dysfunction (EED) is an impending problem that has flared up in recent decades to become pervasive in infants and children. EED is asymptomatic villous atrophy of the small bowel that is prevalent in the developing world and is associated with altered intestinal function and integrity. Evidence has suggested that supplementary zinc might ameliorate this damage by reducing gastrointestinal inflammation and may also benefit cognitive development. Objective: We tested whether zinc supplementation improves intestinal integrity, growth, and cognitive function in stunted children predicted to have EED. Methodology: This case-control prospective interventional study was conducted on 120 Egyptian stunted children aged 1-10 years, who were recruited from the nutrition clinic of the National Research Centre, and 100 age- and gender-matched healthy children as controls. In the primary phase of the study, full history taking, clinical examination, and anthropometric measurements were done. Standard deviation scores (SDS) for all measurements were calculated. Serum markers such as zonulin, endotoxin core antibody (EndoCab), highly sensitive C-reactive protein (hsCRP), alpha-1-acid glycoprotein (AGP), and tumor necrosis factor (TNF), and fecal markers such as myeloperoxidase (MPO), neopterin (NEO), and alpha-1-anti-trypsin (AAT) (as predictors of EED) were measured. Cognitive development was assessed (Bayley or Wechsler scores). Oral zinc at a dosage of 20 mg/d was supplemented to all cases, with follow-up for 6 months, after which the second phase of the study repeated the previous clinical, laboratory, and cognitive assessments. Results: Serum and fecal inflammatory markers were significantly higher in cases compared to controls. Zonulin (p < 0.01), EndoCab (p < 0.001), and AGP (p < 0.03) markedly decreased in cases at the end of the second phase. Also, MPO, NEO, and AAT showed a significant decline in cases at the end of the study (p < 0.001 for all). A significant increase in mid-upper arm circumference (MUAC) (p < 0.01), weight-for-age z-score, and skinfold thicknesses (p < 0.05 for both) was detected at the end of the study, while height was not significantly affected. Cases also showed a significant improvement in cognitive function in phase 2 of the study. Conclusion: The intestinal inflammatory state related to EED showed marked recovery after zinc supplementation. As a result, anthropometric and cognitive parameters showed obvious improvement with zinc supplementation.
Keywords: stunting, cognitive function, environmental enteric dysfunction, zinc
Procedia PDF Downloads 190
734 Heat Transfer Dependent Vortex Shedding of Thermo-Viscous Shear-Thinning Fluids
Authors: Markus Rütten, Olaf Wünsch
Abstract:
Non-Newtonian fluid properties can change the flow behaviour significantly, and its prediction becomes more difficult when thermal effects come into play. Hence, the focal point of this work is the wake flow behind a heated circular cylinder in the laminar vortex shedding regime for thermo-viscous shear-thinning fluids. In the case of isothermal flows of Newtonian fluids, the vortex shedding regime is characterised by a distinct Reynolds number and an associated Strouhal number. In the case of thermo-viscous shear-thinning fluids, the flow regime can change significantly depending on the temperature of the viscous wall of the cylinder. The Reynolds number alters locally and, consequently, the Strouhal number globally. In the present CFD study, the temperature dependence of the Reynolds and Strouhal numbers is investigated for the flow of a Carreau fluid around a heated cylinder. The temperature dependence of the fluid viscosity has been modelled by applying the standard Williams-Landel-Ferry (WLF) equation. In the present simulation campaign, the thermal boundary conditions have been varied over a wide range in order to derive a relation between the dimensionless heat transfer, the Reynolds number, and the Strouhal number. Together with the shear thinning due to the high shear rates close to the cylinder wall, this leads to a significant decrease of viscosity of three orders of magnitude in the near field of the cylinder and a reduction of two orders of magnitude in the wake field. The shear-thinning effect is able to change the flow topology: a complex Kármán vortex street occurs, also revealing distinct characteristic frequencies associated with the dominant and sub-dominant vortices. Heating the cylinder wall leads to delayed flow separation and a narrower wake flow, giving less space for the sequence of counter-rotating vortices. This spatial limitation does not only reduce the amplitude of the oscillating wake flow, it also shifts the dominant frequency to higher frequencies and, furthermore, damps higher harmonics. Eventually, the locally heated wake flow smears out. Finally, the CFD simulation results of the systematically varied thermal flow parameter study have been used to describe a relation for the main characteristic order parameters.
Keywords: heat transfer, thermo-viscous fluids, shear thinning, vortex shedding
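For reference, the two constitutive relations named in the abstract have the following standard forms; the specific parameter values used in the simulation campaign are not given in the abstract, so the symbols below are generic.

```latex
% Carreau shear-thinning viscosity and WLF temperature shift (standard forms)
\begin{align}
  \eta(\dot{\gamma}) &= \eta_\infty +
      \left(\eta_0 - \eta_\infty\right)
      \left[1 + (\lambda\dot{\gamma})^2\right]^{\frac{n-1}{2}} \\
  \log_{10} a_T &= \frac{-C_1\,(T - T_{\mathrm{ref}})}{C_2 + (T - T_{\mathrm{ref}})},
  \qquad \eta_0(T) = a_T\,\eta_0(T_{\mathrm{ref}})
\end{align}
```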
Procedia PDF Downloads 297
733 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling
Authors: Havva Tutar Kahraman, Erol Pehlivan
Abstract:
In recent years, environmental pollution by wastewater has risen critically. Effluents discharged from various industries cause this challenge. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, create this threat to humans and all other living things. Heavy metals are considered one of the main pollutant groups in wastewater. Therefore, there is a great need to apply and enhance water treatment technologies. Among the adopted treatment technologies, adsorption is one of the methods gaining more and more attention because of its easy operation, simplicity of design, and versatility. The ion exchange process is one of the preferred methods for the removal of heavy metal ions from aqueous solutions. It has found widespread application in water remediation technologies during the past several decades. Therefore, the purpose of this study is the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal which modifies the DNA transcription process and causes important chromosomal aberrations. The treatment and removal of this heavy metal have received great attention in order to maintain the allowed legal standards. The present paper investigates some aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800, and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the adsorption capacity of these three commercial resins in the removal of Cr(VI) from aqueous solutions. The chromium solutions used in the experiments were synthetic solutions. The experiments on the parameters that affect the adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were performed at room temperature. High adsorption rates of metal ions were reported for the three resins at the onset, and then plateau values were gradually reached within 60 min. The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins. The adsorption decreases with the increase in pH for the three anion exchangers. The suitability of the Freundlich, Langmuir, and Scatchard models was investigated for the Cr(VI)-resin equilibrium. The results obtained in this study demonstrate excellent comparability among the three anion exchange resins, indicating that Eichrom 1-X4 is the most effective, showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.
Keywords: adsorption, anion exchange resin, chromium, kinetics
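The isotherm models examined for the Cr(VI)-resin equilibrium have the following standard forms, reproduced here for reference; q_e denotes the equilibrium adsorption capacity and C_e the equilibrium Cr(VI) concentration, and the fitted constants for each resin are not reported in the abstract.

```latex
% Standard isotherm models referred to in the abstract
\begin{align}
  \text{Langmuir:}  \quad & q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e} \\
  \text{Freundlich:}\quad & q_e = K_F\, C_e^{1/n} \\
  \text{Scatchard:} \quad & \frac{q_e}{C_e} = K_S\,(q_{\max} - q_e)
\end{align}
```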
Procedia PDF Downloads 260
732 Implementing a Structured, yet Flexible Tool for Critical Information Handover
Authors: Racheli Magnezi, Inbal Gazit, Michal Rassin, Joseph Barr, Orna Tal
Abstract:
An effective process for transmitting critical patient information is essential for patient safety and for improving communication among healthcare staff. Previous studies have discussed handover tools such as SBAR (Situation, Background, Assessment, Recommendation) or SOFI (Short Observational Framework for Inspection). Yet, these formats lack flexibility and require special training. In addition, nurses and physicians have different procedures for handing over information. The objectives of this study were to establish a universal, structured tool for handover, for both physicians and nurses, based on parameters that were defined as ‘important’ and ‘appropriate’ by the medical team, and to implement this tool in various hospital departments, with flexibility for each ward. A questionnaire, based on established procedures and on the literature, was developed to assess attitudes towards the most important information for effective handover between shifts (Cronbach's alpha 0.78). It was distributed to 150 senior physicians and nurses in 62 departments. Among senior medical staff, 12 physicians and 66 nurses responded to the questionnaire (52% response rate). Based on the responses, a handover form suitable for all hospital departments was designed and implemented. Important information for all staff included: patient demographics (full name and age); health information (diagnosis or patient complaint, changes in hemodynamic status, new medical treatment or equipment required); and social information (suspicion of violence, mental or behavioral changes, and guardianship). Additional information relevant to each unit included treatment provided, laboratory or imaging required, and changes in scheduled surgery in surgical departments. The ICU required information on background illnesses, Pediatrics required information on diet and food provided, and Obstetrics required the number of days after cesarean section. Based on the model described, a flexible tool was developed that enables handover of both common and unique information. In addition, it includes general logistic information that must be transmitted to the next shift, such as planned disruptions in service or operations, staff training, etc. Developing a simple, clear, comprehensive, universal, yet flexible tool designed for all medical staff for transmitting critical information between shifts was challenging. Physicians and nurses found it useful, and it was widely implemented. Ongoing research is needed to examine the efficiency of this tool and whether the enthusiasm that accompanied its initial use is maintained.
Keywords: handover, nurses, hospital, critical information
Procedia PDF Downloads 248
731 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe that does not invoke space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually without altering their momenta drastically, as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the Earth travels at about 368 km/s (600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation density constant. The observed CMB dipole, therefore, implies a pressure difference between the two sides of the Earth and results in a CMB drag on the Earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the Earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance modulus (µ) vs. redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
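The order-of-magnitude argument for the CMB drag on the Earth can be made explicit from the values quoted in the abstract; the back-of-the-envelope sketch below uses the radiation density constant a = 4σ/c together with the quoted dipole amplitude, and is an illustrative estimate rather than the fit actually performed in the paper.

```latex
% Radiation pressure of the CMB photon gas and the dipole-induced pressure
% difference, estimated from the values quoted in the abstract.
\begin{align}
  P_\gamma &= \frac{E_\gamma}{3} = \frac{a T^4}{3},
  \qquad a = \frac{4\sigma}{c} \approx 7.57\times10^{-16}\ \mathrm{J\,m^{-3}\,K^{-4}} \\
  \Delta P &\approx \frac{a}{3}\left[(T+\Delta T)^4 - (T-\Delta T)^4\right]
            \approx \frac{8}{3}\, a\, T^3 \Delta T
            \approx 1.4\times10^{-17}\ \mathrm{Pa}
\end{align}
% with T = 2.725 K and Delta T = 0.35 mK, confirming that the drag on the
% Earth is tiny.
```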
Procedia PDF Downloads 134
730 The Ballistics Case Study of the Enrica Lexie Incident
Authors: Diego Abbo
Abstract:
On February 15, 2012, off the Indian coast of Kerala, at position 091702N-0760180E, bursts of 5.56x45 caliber shots were fired from Italian-made Beretta AR/70 assault rifles aboard the oil tanker Enrica Lexie, flying the Italian flag, towards the Indian fishing boat St. Anthony. Six shots hit the St. Anthony fishing boat, of which two killed the Indian fishermen Ajesh Pink and Valentine Jelestine. From the analysis of the kinematic engagement of the two ships and from the autopsy and ballistic results of the Indian judicial authorities, it is possible to reconstruct the trajectories of the six aforementioned shots. This essay reconstructs the trajectories of the six shots, which cannot have been direct shots but must have undergone a rebound on the water. The investigation carried out scientifically demonstrates the rebound of the shots on the water and, as regards intermediate ballistics, the gyrostatic deviation and the tumbling effect, both due to the rebound. For the four shots that directly impacted the fishing vessel, the current examination proves, with scientific value, that the trajectories could not be downwards but upwards. Also, the trajectories of the two shots that fatally struck the two fishermen could not be downwards but only upwards. In fact, this paper demonstrates, with scientific value: the loss of speed of the projectiles due to the rebound on the water; the tumbling effect in the ballistic medium within the two victims; the permanent cavities examined under injury ballistics and the related ballistic trauma that prevented homeostasis, causing bleeding in one case; the thermo-hardening deformation of the bullet found in Valentine Jelestine's skull; and the upward, not downward, trajectories. The paper constitutes a tool in forensic ballistics in that it manages to reconstruct, from the final positions of the projectiles fired, all phases of ballistics: the internal ballistics of the weapons that fired, the intermediate, the terminal, and the penetrative structural phases. In general terms, the ballistics reconstruction is based on measurable parameters whose magnitudes are contained with certainty within lower and upper limits. Therefore, quantities that refer to the angles, speed, impact energy, and firing position of the shooter can be identified within the aforementioned limits. Finally, the investigation into the internal bullet track, obtained from the autopsy examination, offers a significant “lesson learned” and, overall, a starting point for containing or mitigating bleeding as a rescue measure for future gunshot wounds.
Keywords: impact physics, intermediate ballistics, terminal ballistics, tumbling effect
Procedia PDF Downloads 178
729 Evaluation of the Weight-Based and Fat-Based Indices in Relation to Basal Metabolic Rate-to-Weight Ratio
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Basal metabolic rate is questioned as a risk factor for weight gain. The relations between basal metabolic rate and body composition have not yet been clarified. The impact of fat mass on basal metabolic rate is also uncertain. Within this context, indices based upon total body mass as well as total body fat mass are available. In this study, the aim is to investigate the potential clinical utility of these indices in the adult population. 287 individuals, aged from 18 to 79 years, were included in the study. Based upon body mass index values, 10 underweight, 88 normal, 88 overweight, 81 obese, and 20 morbidly obese individuals participated. Anthropometric measurements including height (m) and weight (kg) were performed. Body mass index, diagnostic obesity notation model assessment index I, diagnostic obesity notation model assessment index II, and basal metabolic rate-to-weight ratio were calculated. Total body fat mass (kg), fat percent (%), basal metabolic rate, metabolic age, visceral adiposity, fat mass of the upper and lower extremities and trunk, and obesity degree were measured with a TANITA body composition monitor using bioelectrical impedance analysis technology. Statistical evaluations were performed with the statistical package SPSS for Windows, version 16.0. Scatterplots of individual measurements for the correlated parameters were drawn. Linear regression lines were displayed. The statistical significance level was accepted as p < 0.05. Strong correlations between body mass index and diagnostic obesity notation model assessment index I as well as diagnostic obesity notation model assessment index II were obtained (p < 0.001). A much stronger correlation was detected between basal metabolic rate and diagnostic obesity notation model assessment index I in comparison with that calculated for basal metabolic rate and body mass index (p < 0.001). Upon consideration of the associations between the basal metabolic rate-to-weight ratio and these three indices, the best association was observed between the basal metabolic rate-to-weight ratio and diagnostic obesity notation model assessment index II. In a similar manner, this index was highly correlated with fat percent (p < 0.001). Independently of the indices, a strong correlation was found between fat percent and the basal metabolic rate-to-weight ratio (p < 0.001). Visceral adiposity was much more strongly correlated with metabolic age than with chronological age (p < 0.001). In conclusion, all three indices were associated with metabolic age, but not with chronological age. Diagnostic obesity notation model assessment index II values were highly correlated with body mass index values throughout all ranges, starting with underweight and going towards morbid obesity. This index is the best in terms of its association with the basal metabolic rate-to-weight ratio, which can be interpreted as basal metabolic rate per unit of body weight.
Keywords: basal metabolic rate, body mass index, children, diagnostic obesity notation model assessment index, obesity
Procedia PDF Downloads 150
728 Hydrogen Induced Fatigue Crack Growth in Pipeline Steel API 5L X65: A Combined Experimental and Modelling Approach
Authors: H. M. Ferreira, H. Cockings, D. F. Gordon
Abstract:
Climate change is driving a transition in the energy sector, with low-carbon energy sources such as hydrogen (H2) emerging as an alternative to fossil fuels. However, the successful implementation of a hydrogen economy requires an expansion of hydrogen production, transportation, and storage capacity. The costs associated with this transition are high but can be partly mitigated by adapting the current oil and natural gas networks, such as pipelines, an important component of the hydrogen infrastructure, to transport pure or blended hydrogen. Steel pipelines are designed to withstand fatigue, one of the most common causes of pipeline failure. However, it is well established that some materials, such as steel, can fail prematurely in service when exposed to hydrogen-rich environments. Therefore, it is imperative to evaluate how defects (e.g. inclusions, dents, and pre-existing cracks) will interact with hydrogen under cyclic loading and, ultimately, to what extent hydrogen induced failure will limit the service conditions of steel pipelines. This presentation will explore how the exposure of API 5L X65 to a hydrogen-rich environment and cyclic loads influences its susceptibility to hydrogen induced failure. That evaluation will be performed by a combination of several techniques, such as hydrogen permeation testing (ISO 17081:2014) and fatigue crack growth (FCG) testing (ISO 12108:2018 and AFGROW modelling), combined with microstructural and fractographic analysis. The development of an FCG test setup coupled with an electrochemical cell will be discussed, along with the advantages and challenges of measuring crack growth rates in electrolytic hydrogen environments. A detailed assessment of several electrolytic charging conditions will also be presented, using hydrogen permeation testing as a method to correlate the different charging settings to equivalent hydrogen concentrations and effective diffusivity coefficients, not only in the base material but also in the heat affected zone and weld of the pipelines. The experimental work is being complemented with AFGROW, a useful FCG modelling software that has helped inform testing parameters and which will also be developed to ultimately help industry experts perform structural integrity analysis and remnant life characterisation of pipeline steels under representative conditions. The results from this research will make it possible to conclude whether there is an acceleration of the crack growth rate of API 5L X65 under the influence of a hydrogen-rich environment, an important aspect that needs to be rectified in standards and codes of practice on pipeline integrity evaluation and maintenance.
Keywords: AFGROW, electrolytic hydrogen charging, fatigue crack growth, hydrogen, pipeline, steel
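The abstract does not state which crack-growth law is implemented in the AFGROW models; a Paris-type relation is the simplest commonly used form and is given below for reference, together with the standard time-lag estimate of effective hydrogen diffusivity from permeation testing, both as generic expressions rather than the authors' actual model.

```latex
% Generic fatigue crack growth (Paris-type) relation and the time-lag
% estimate of effective hydrogen diffusivity (reference forms only)
\begin{align}
  \frac{da}{dN} &= C\,(\Delta K)^m, \qquad \Delta K = Y\,\Delta\sigma\,\sqrt{\pi a} \\
  D_{\mathrm{eff}} &= \frac{L^2}{6\, t_{\mathrm{lag}}}
\end{align}
% L is the membrane thickness and t_lag the permeation time lag; hydrogen
% effects are often reported as the ratio (da/dN)_{H2} / (da/dN)_{air}.
```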
Procedia PDF Downloads 105
727 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong
Abstract:
This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and an on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system to further develop this innovative, non-destructive, and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via the calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added to the computer vision scheme: the apparent electrical resistivity of soil (ρ), measured using a set of four probes arranged in Wenner’s array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
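A minimal sketch of the computer-vision pipeline (GLCM texture features plus the additional sensor readings feeding a small ANN) is given below; the chosen texture properties, the network size, and the way ρ, w, and the cone resistance are appended to the feature vector are illustrative assumptions, and in older scikit-image releases the GLCM functions are spelled greycomatrix/greycoprops.

```python
# Hedged sketch: GLCM texture features + extra sensor readings -> ANN classifier.
# Feature choices, network size, and sensor handling are assumptions, not the
# authors' exact pipeline.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greyco*' in older versions
from sklearn.neural_network import MLPClassifier

GLCM_PROPS = ("contrast", "homogeneity", "energy", "correlation")

def soil_features(image_gray: np.ndarray, resistivity: float,
                  water_content: float, cone_resistance: float) -> np.ndarray:
    """8-bit grayscale soil image + sensor readings -> feature vector."""
    glcm = graycomatrix(image_gray, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean() for p in GLCM_PROPS]
    return np.array(texture + [resistivity, water_content, cone_resistance])

# Hypothetical training call: rows of X are feature vectors, y is
# 0 = "Good Earth", 1 = "Soft Clay".
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
# clf.fit(X_train, y_train)
# clf.predict(soil_features(img, rho, w, qc).reshape(1, -1))
```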
Procedia PDF Downloads 239
726 Quantification and Detection of Non-Sewer Water Infiltration and Inflow in Urban Sewer Systems
Authors: M. Beheshti, S. Saegrov, T. M. Muthanna
Abstract:
Separated sewer systems are designed to transfer the wastewater from houses and industrial areas to wastewater treatment plants. Unwanted water in sewer systems is a well-known problem: storm-water inflow can amount to around 50% of the foul sewer flow, and groundwater infiltration into the sewer system can exceed 50% of the total wastewater volume in deteriorated networks. Infiltration and inflow of non-sewer water (I/I) is unfavorable in separated sewer systems and can overload the system and reduce the efficiency of wastewater treatment plants. Moreover, I/I has negative economic, environmental, and social impacts on urban areas. Therefore, for sustainable management of urban sewer systems, I/I of unwanted water into the urban sewer systems should be considered carefully, and a maintenance and rehabilitation plan should be implemented for these water infrastructure assets. This study presents a methodology to identify and quantify the level of I/I into the sewer system. The amount of I/I is evaluated by accurate flow measurement in separated sewer systems for specified isolated catchments in Trondheim (Norway). Advanced information about the characteristics of I/I is gained by CCTV inspection of sewer pipelines with a high I/I contribution. Enhanced knowledge about the detection and localization of non-sewer water in the foul sewer system during wet and dry weather conditions will make it possible to identify and prioritize problems in the sewer system and to take decisions for long-term rehabilitation and renewal planning. Furthermore, preventive measures and optimization of sewer system functionality and efficiency can be carried out through maintenance of the sewer system. In this way, the operation of the sewer system can be improved through maintenance and rehabilitation of existing pipelines in a more practical, cost-effective, and environmentally friendly way. This study is conducted on specified catchments with different properties in Trondheim. The Risvollan catchment is one of these catchments; it has a measuring station that monitors hydrological parameters throughout the year and also has a good database. For assessing infiltration in a separated sewer system, flow rate measurement can be used to obtain a general view of the network condition from the infiltration point of view. This study discusses commonly used and advanced methods of localizing and quantifying I/I in sewer systems. A combination of these methods gives sewer operators the possibility to compare different techniques and obtain reliable and accurate I/I data, which is vital for long-term rehabilitation plans.
Keywords: flow rate measurement, infiltration and inflow (I/I), non-sewer water, separated sewer systems, sustainable management
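The abstract does not detail how the measured flows are decomposed; a commonly applied decomposition, given here as a sketch rather than the authors' method, splits the measured wastewater flow into sanitary flow, groundwater infiltration, and rainfall-derived inflow.

```latex
% A common flow decomposition used to quantify infiltration/inflow (sketch)
\begin{equation}
  Q_{\mathrm{meas}}(t) = Q_{\mathrm{san}}(t) + Q_{\mathrm{inf}} + Q_{\mathrm{inflow}}(t)
\end{equation}
% Q_inf is often approximated by the dry-weather minimum night flow (less any
% known night-time water use), and Q_inflow(t) by the difference between wet-
% and dry-weather hydrographs over a storm event.
```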
Procedia PDF Downloads 333
725 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms
Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Today, websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through websites and establishing whether a website is put to correct use. Web logs are typically only used when a major attack or malfunction occurs, yet they contain a lot of interesting information about users of the system. Analyzing web logs has become a challenge due to the huge log volume. Finding interesting patterns is not easy due to the size and distribution of the logs and the importance of minor details in each log. Web logs contain very important data about users and the site which have not been put to good use. Retrieving interesting information from the logs gives an idea of what the users need, makes it possible to group users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks, system malfunctions, and anomalies. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files, a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices. The Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering results for the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate and feed better parametric values back into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning Module. If it outputs a confidence and support of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters. If it is found to be unique to the cluster considered, the cluster is annotated with that signature. These signatures are used in anomaly detection, prevention of cyber attacks, real-time dashboards that visualize users accessing web pages, prediction of user actions, and various other applications in finance, university websites, news and media websites, etc.
Keywords: anomaly detection, clustering, pattern recognition, web sessions
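A minimal sketch of the clustering stage is given below, assuming the sessions have already been turned into numeric vectors (for example, URL-index frequency vectors); the candidate parameter grids and the use of the silhouette coefficient alone to pick the better run are illustrative simplifications of the iterative self-evaluation described above.

```python
# Hedged sketch of the session-clustering stage: DBSCAN and EM (Gaussian
# mixture) runs over candidate parameters, scored with the silhouette
# coefficient. Session vectorisation and parameter grids are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

def best_clustering(session_vectors: np.ndarray):
    """Try DBSCAN and EM runs; keep the run with the highest silhouette."""
    candidates = []
    for eps in (0.3, 0.5, 1.0):                               # DBSCAN parameter grid
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(session_vectors)
        if len(set(labels)) > 1:                              # silhouette needs >= 2 labels
            candidates.append(("dbscan", eps, labels,
                               silhouette_score(session_vectors, labels)))
    for k in (2, 4, 8):                                       # EM parameter grid
        gm = GaussianMixture(n_components=k, random_state=0)
        labels = gm.fit_predict(session_vectors)
        candidates.append(("em", k, labels,
                           silhouette_score(session_vectors, labels)))
    return max(candidates, key=lambda c: c[-1])               # best-scoring run

# Example with random stand-in session vectors (200 sessions, 10 features)
rng = np.random.default_rng(0)
method, param, labels, score = best_clustering(rng.random((200, 10)))
print(method, param, round(score, 3))
```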
Procedia PDF Downloads 288
724 Velma-ARC’s Rehabilitation of Repentant Cybercriminals in Nigeria
Authors: Umukoro Omonigho Simon, Ashaolu David ‘Diya, Aroyewun-Olaleye Temitope Folashade
Abstract:
The VELMA Action to Reduce Cybercrime (ARC) is an initiative, the first of its kind in Nigeria, designed to identify, rehabilitate, and empower repentant cybercrime offenders, popularly known as ‘yahoo boys’ in Nigerian parlance. Velma ARC provides social inclusion boot camps with the goal of rehabilitating cybercriminals via psychotherapeutic interventions, improving their IT skills, and empowering them to make constructive contributions to society. This report highlights the psychological interventions provided for participants of the maiden edition of the Velma ARC boot camp and presents the outcomes of these interventions. The boot camp was set up in hotel premises booked solely for the one-month event. The participants were selected and invited via the Velma online recruitment portal based on an objective double-blind selection process from a pool of potential participants who signified interest via the registration portal. The participants were first taken through psychological profiling (personality, symptomatology, and psychopathology) before the individual and group sessions began. They were profiled using the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF), the latest version of its series. Individual psychotherapy sessions were conducted for all participants based on what was interpreted in their profiles. A focus group discussion was held later to discuss the movie ‘Catch Me If You Can’, directed by Steven Spielberg and featuring Leonardo DiCaprio and Tom Hanks. The movie is based on the true life story of Frank Abagnale, who was a notorious scammer and con artist in his youthful years. Emergent themes from the movie were discussed as psycho-educative parameters for the participants. The overall evaluation of outcomes from the VELMA ARC rehabilitation boot camp stemmed from a disaggregated assessment of observed changes, which are summarized in the final report of the clinical psychologist and were detailed enough to infer genuine repentance and a positive change in attitude towards cybercrime among the participants. Follow-up services were incorporated to validate initial observations. This gives credence to the potency of the psycho-educative intervention provided during the Velma ARC boot camp. It was recommended that support and collaboration from the government and other agencies and individuals would assist the VELMA foundation in expanding the scope and quality of the Velma ARC initiative as an additional requirement for cybercrime offenders following incarceration.
Keywords: Velma-ARC, cybercrime offenders, rehabilitation, Nigeria
Procedia PDF Downloads 154
723 Metagenomic Assessment of the Effects of Genetically Modified Crops on Microbial Ecology and Physicochemical Properties of Soil
Authors: Falana Yetunde Olaitan, Ijah U. J. J, Solebo Shakirat O.
Abstract:
Genetically modified crops are already phenomenally successful and are grown worldwide in more than eighteen countries on more than 67 million hectares. Nigeria, in October 2018, approved Bacillus thuringiensis (Bt) cotton and maize; hence the need to carry out environmental risk assessment studies. A total of 15 4-L octagonal ceramic pots were filled with 4 kg of soil and placed on the bench in 2 rows of 10 pots each and a 3rd row of 5 pots; the 1st-row pots were used to plant GM cotton seeds, the 2nd-row pots were used for non-GM cotton seeds, and the 3rd row of 5 pots served as the control, all in the screen house. Soil samples for metagenomic DNA extraction were collected at random and at monthly intervals after planting, at a distance of 2 mm from the plant's root and at a depth of 10 cm, using a sterile spatula. Soil samples for physicochemical analysis were collected before planting and after harvesting the GM and non-GM crops, as well as from the control soil. The DNA was extracted, quantified, and sequenced; sample 1A (DNA from GM cotton soil at the 1st interval) gave the lowest sequence read with 0.853 M, while sample 2B (DNA from GM cotton soil at the 2nd interval) gave the highest with 5.785 M; the others gave between 1.8 M and 4.7 M. The sample treatments were grouped into four: Group 1 (GM cotton soil from intervals 1 to 3) had between 800,000 and 5,700,000 strains of microbes (SOM), Group 2 (non-GM cotton soil from intervals 1 to 3) had between 1,400,600 and 4,200,000 SOM, Group 3 (control soil) had between 900,000 and 3,600,000 SOM, and Group 4 (initial soil) had between 3,700,000 and 4,000,000 SOM. The microbes observed were predominantly bacteria and archaea, fungi, and microbial dark matter, alongside protists and phages. The predominant bacterial groups were the Terrabacteria (Bacillus funiculus, Bacillus sp.), the Proteobacteria (Microvirga massiliensis, Sphingomonas sp.), and the Archaea (Nitrososphaera sp.), while the fungi were Aspergillus fischeri and Fusarium falciforme. The comparative analysis between groups was done using Jaccard PERMANOVA beta diversity analysis at a P-value of not more than 0.76, and no significant pair was found. The pH values for the initial, GM cotton, non-GM cotton, and control soils were 6.28, 6.26, 7.25, and 8.26, and the percentage moisture was 0.63, 0.78, 0.89, and 0.82, respectively, while the percentage nitrogen was observed to be 17.79, 1.14, 1.10, and 0.56, respectively. Other parameters included varying concentrations of potassium (0.46, 1,284.47, 1,785.48, and 1,252.83 mg/kg) and phosphorus (18.76, 17.76, 16.87, and 15.23 mg/kg) recorded for the four treatments, respectively. The soil consisted mainly of silt (32.09 to 34.66%) and clay (58.89 to 60.23%), reflecting a silty-clay soil texture. The results were then tested with ANOVA at a P-value threshold of 0.05, and again no pair was found to be significant. The results suggest that the GM crops have no significant effect on the microbial ecology and physicochemical properties of the soil and, in turn, no direct or indirect effects on human health.
Keywords: genetically modified crop, microbial ecology, physicochemical properties, metagenomics, DNA, soil
Procedia PDF Downloads 145
722 Calcium Release-Activated Calcium Channels as a Target in Treatment of Allergic Asthma
Authors: Martina Šutovská, Marta Jošková, Ivana Kazimierová, Lenka Pappová, Maroš Adamkov, Soňa Fraňová
Abstract:
Bronchial asthma is characterized by increased bronchoconstrictor responses to provoking agonists, airway inflammation, and remodeling. All these processes involve Ca2+ influx through Ca2+-release-activated Ca2+ channels (CRAC), which are widely expressed in immune, respiratory epithelium, and airway smooth muscle (ASM) cells. Our previous study pointed to the possible therapeutic potency of CRAC blockers using an experimental guinea pig asthma model. The presented work analyzed the complex anti-asthmatic effect of a long-term administered CRAC blocker, including its impact on allergic inflammation, airway hyperreactivity, remodeling, and mucociliary clearance. Ovalbumin-induced allergic inflammation of the airways according to Franova et al. was followed by 14 days of administration of the CRAC blocker (3-fluoropyridine-4-carboxylic acid, FPCA) at a dose of 1.5 mg/kg body weight. For comparative purposes, salbutamol, budesonide, and saline were applied to control groups. The anti-inflammatory effect of FPCA was estimated from serum and bronchoalveolar lavage fluid (BALF) changes in IL-4, IL-5, IL-13, and TNF-α analyzed by Bio-Plex® assay, as well as from immunohistochemical staining focused on the assessment of tryptase and c-Fos positivity in pulmonary samples. Airway hyperreactivity was evaluated in vivo according to Pennock et al. and in vitro by the organ tissue bath method. The anti-remodeling effect of FPCA was evaluated from immunohistochemical changes in the ASM actin and collagen III layers as well as mucin secretion. The measurement of ciliary beat frequency (CBF) in vitro using LabVIEW™ software determined the impact on mucociliary clearance. Long-term administration of FPCA to sensitized animals resulted in: (i) a significant decrease in cytokine levels and in tryptase and c-Fos positivity, similar to the effect of budesonide; (ii) a meaningful decrease in basal and bronchoconstrictor-induced airway hyperreactivity in vivo and in vitro, comparable to salbutamol; (iii) significant inhibition of airway remodeling parameters; (iv) insignificant changes in CBF. All these findings confirmed the complex anti-asthmatic effect of the CRAC channel blocker and evidenced these structures as a rational target in the treatment of allergic bronchial asthma.
Keywords: allergic asthma, CRAC channels, cytokines, respiratory epithelium
Procedia PDF Downloads 521721 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important, but theoretically rather neglected field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the Eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker, the inventory, should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfers are not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal. Thus, randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, the most widely used criterion describing a necessary condition for a reasonable solution. Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
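As a rough illustration of how a randomized Serial Dictatorship allocation works, the Python sketch below assigns arriving material lots to projects drawn in a uniformly random priority order; the one-lot-per-project setting, the arrival times, the due dates and the tie-breaking rule (take the latest still-on-time lot, leaving earlier lots for others) are all assumptions made for the example, not the authors' exact polynomial-time algorithm.

import random

def serial_dictatorship(arrivals, due_dates, seed=None):
    # Assign arriving lots to projects following a random priority order.
    rng = random.Random(seed)
    order = list(due_dates)            # project ids
    rng.shuffle(order)                 # uniformly random dictatorship order
    free_lots = sorted(arrivals)       # arrival times of unassigned lots
    assignment = {}
    for project in order:
        due = due_dates[project]
        on_time = [t for t in free_lots if t <= due]
        # The dictator is indifferent among on-time lots (zero tardiness); an
        # illustrative tie-break takes the latest on-time lot so that earlier
        # lots remain available for later projects. Otherwise take the earliest lot.
        lot = max(on_time) if on_time else min(free_lots)
        free_lots.remove(lot)
        assignment[project] = lot
    return assignment

arrivals = [2, 5, 9]                          # lot arrival times
due_dates = {"P1": 4, "P2": 6, "P3": 10}      # privately reported due dates
alloc = serial_dictatorship(arrivals, due_dates, seed=1)
tardiness = {p: max(0, alloc[p] - d) for p, d in due_dates.items()}
print(alloc, "maximal tardiness:", max(tardiness.values()))

In this toy rule a project can never reduce its own tardiness by misreporting its due date, which mirrors the intuition behind the truthfulness of the mechanism.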
Procedia PDF Downloads 333720 Early Help Family Group Conferences: An Analysis of Family Plans
Authors: Kate Parkinson
Abstract:
A Family Group Conference (FGC) is a family-led decision-making process through which a family/kinship group, rather than the professionals involved, is asked to develop a plan for the care or protection of children in the family. In England and Wales, FGCs are used in 76% of local authorities and, in recent years, have tended to be used in cases where the local authority is considering the court process to remove children from their immediate family, to explore kinship alternatives to local authority care. Some local authorities offer the service much earlier, when families first come to the attention of children's social care, in line with research that suggests the earlier an FGC is held, the more likely it is to be successful. Family plans that result from FGCs differ from professional plans in that they are unique to a family and, as a result, reflect the diversity of families. Despite the fact that FGCs are arguably the most researched area of social work globally, there is a dearth of research that examines the nature of family plans and their substance. This paper presents the findings of a documentary analysis of 42 Early Help FGC plans from local authorities in England, with the aim of exploring the level and type of support that family members offer at an FGC. A thematic analysis identified 5 broad areas of support: Practical Support, Building Relationships, Child-care Support, Emotional Support and Social Support. In the majority of cases, family members did not want or ask for any formal support from the local authority or other agencies. Rather, the families came together to agree a plan of support within the parameters of the resources that they as a family could provide. Perhaps, then, the role of the Early Help professional should be a facilitating and enabling one, supporting families to develop plans that address their own specific difficulties, rather than the current default option, which is either to close the case because the family does not meet service thresholds or to refer to formal support if they do, which may offer very specific support, have rigid referral criteria and long waiting lists, and may not reflect the diverse and unique nature of families. FGCs are argued to be culturally appropriate social work practices in that they are suitable for families from a range of cultural backgrounds and can be adapted to meet particular cultural needs. Furthermore, research on the efficacy of FGCs at the Early Help level has demonstrated that Early Help FGCs have the potential to address difficulties in family life and prevent the need for formal support services, which are potentially stigmatising and do not reflect the uniqueness and diversity of families. The paper concludes with a recommendation for the use of FGCs across Early Help services in England and Wales. Keywords: family group conferences, family led decision making, early help, prevention
Procedia PDF Downloads 92719 Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance
Abstract:
The present study explores design modifications of the vortex finder, as it has a significant effect on cyclone separator performance, and it is evident that such modifications can improve the performance of the cyclone separator significantly. The study strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform diameter vortex finder. The velocity and pressure fields inside a Stairmand cyclone separator with a body diameter of 0.29 m and a vortex finder diameter of 0.1305 m are calculated. The commercial software Ansys Fluent v14.0 is used to simulate the flow field in a uniform diameter cyclone and in six cyclones modified with CD vortex finders. The Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, and the discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with that of the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are also analysed. Results indicate that a convergent-divergent vortex finder is capable of decreasing the pressure drop below that achieved with a uniform diameter vortex finder. It is also observed that the end diameters of the CD vortex finder, the throat diameter and the length of the diverging part of the vortex finder have a significant impact on cyclone separator performance. Increasing the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in separation efficiency. Increasing the upper diameter of the CD vortex finder is seen to produce an adverse effect on performance, as it increases the pressure drop significantly and decreases the separation efficiency. An increase in the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform diameter vortex finders to achieve better cyclone separator performance. Keywords: convergent-divergent vortex finder, cyclone separator, discrete phase modeling, Reynolds stress model
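For context on the dimensionless pressure drop quoted above, the Euler number is commonly evaluated as Eu = Δp / (0.5·ρ·v²) with a characteristic (inlet) velocity; the short Python sketch below uses assumed values of density, velocity and baseline pressure drop, not the operating conditions of the simulated Stairmand cyclone.

# Eu = dp / (0.5 * rho * v**2); all numerical values below are assumptions.
rho = 1.2           # air density, kg/m^3
v_in = 15.0         # characteristic inlet velocity, m/s
dp_uniform = 900.0  # pressure drop with the uniform vortex finder, Pa (assumed)

eu_uniform = dp_uniform / (0.5 * rho * v_in ** 2)
# An 11.5% reduction in Eu, as reported for the enlarged lower diameter, maps
# to a proportional reduction in pressure drop at fixed rho and v_in:
eu_cd = eu_uniform * (1 - 0.115)
dp_cd = eu_cd * 0.5 * rho * v_in ** 2
print(f"Eu uniform = {eu_uniform:.2f}, Eu CD = {eu_cd:.2f}, dp CD = {dp_cd:.1f} Pa")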
Procedia PDF Downloads 173718 Biopolymers: A Solution for Replacing Polyethylene in Food Packaging
Authors: Sonia Amariei, Ionut Avramia, Florin Ursachi, Ancuta Chetrariu, Ancuta Petraru
Abstract:
The food industry is one of the major generators of plastic waste derived from conventional synthetic petroleum-based polymers, which are non-biodegradable and used especially for packaging. After the food is consumed, these packaging materials raise serious environmental concerns due both to the materials themselves and to the organic residues that adhere to them. Specialists and researchers are therefore concerned with eliminating the problems related to non-biodegradable conventional materials and unnecessary plastic and with replacing them with biodegradable and edible materials, supporting the common effort to protect the environment. Even though environmental and health concerns will cause more consumers to switch to a plant-based diet, most people will continue to add more meat to their diet. The paper presents the possibility of replacing the polyethylene packaging on the surface of trays for meat preparations with biodegradable packaging obtained from biopolymers. During the storage of meat products, deterioration may occur through lipid oxidation and microbial spoilage, as well as modification of the organoleptic characteristics. For this reason, different compositions of polymer mixtures and the film-forming conditions must be studied to choose the best packaging material and achieve food safety. The compositions proposed for packaging are obtained from alginate, agar, starch, and glycerol as a plasticizer. The tensile strength, elasticity, modulus of elasticity, thickness, density, microscopic images of the samples, roughness, opacity, humidity, water activity, and the amount and rate of water transfer through these packaging materials were analyzed. A total of 28 samples with various compositions were analyzed, and the results showed that the sample with the highest values for hardness, density, and opacity, as well as the smallest water vapor permeability of 1.2903E-4 ± 4.79E-6, has an alginate:agar:glycerol component ratio of 3:1.25:0.75. The water activity of the analyzed films varied between 0.2886 and 0.3428 (aw < 0.6), demonstrating that all the compositions ensure the preservation of the products by preventing microbial growth. All the determined parameters allow an appreciation of the quality of the packaging films in terms of mechanical resistance, protection against the influence of light, and the transfer of water through the packaging. Acknowledgments: This work was supported by a grant of the Ministry of Research, Innovation, and Digitization, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-PED-2019-3863, within PNCDI III. Keywords: meat products, alginate, agar, starch, glycerol
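For readers who want to reproduce the kind of barrier figure quoted above, the sketch below applies the standard gravimetric relation WVP = WVTR·L/Δp (water vapor transmission rate times film thickness over the vapor pressure difference); the test conditions and units are assumptions for illustration and may differ from the basis on which the reported 1.2903E-4 value is expressed.

# WVP = (mass gained / (area * time)) * thickness / vapor pressure difference.
# All numbers are assumed test conditions, not the paper's measurements.
mass_gain = 0.85      # g of water transferred through the film
area = 0.0025         # exposed film area, m^2
time_h = 24.0         # test duration, h
thickness = 0.10e-3   # film thickness, m
dp_vapor = 1753.0     # water vapor partial pressure difference, Pa

wvtr = mass_gain / (area * time_h)     # water vapor transmission rate, g/(m^2 h)
wvp = wvtr * thickness / dp_vapor      # permeability, g/(m h Pa)
print(f"WVTR = {wvtr:.1f} g/(m^2 h), WVP = {wvp:.3e} g/(m h Pa)")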
Procedia PDF Downloads 169717 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID Sodium-cooled Fast Reactor. Investigations of the control and regulation requirements to operate this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and to bring the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor; each regulation is defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized and specific control strategy of the plant for the studied sequence and hence to pilot the PCS at its best. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations. Moreover, Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators used in the controls are chosen through sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation. Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
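To make the optimization step more concrete, the Python sketch below extracts the non-dominated (Pareto optimal) subset from a population of candidate regulation settings whose thermal-stress objectives are scored by a surrogate; the quadratic surrogate and the three objectives are placeholders standing in for models trained on CATHARE2 results, not the actual ASTRID study.

import numpy as np

rng = np.random.default_rng(42)

def surrogate(x):
    # Placeholder surrogate mapping a normalized setting vector (setpoints,
    # PID gains) to three thermal-stress objectives to be minimized.
    return np.array([
        (x[0] - 0.3) ** 2 + 0.1 * x[1],   # stand-in for gas-gas HX stress
        (x[1] - 0.7) ** 2 + 0.1 * x[2],   # stand-in for sodium-gas HX stress
        (x[2] - 0.5) ** 2 + 0.1 * x[0],   # stand-in for vessel stress
    ])

candidates = rng.random((200, 3))          # candidate regulation settings
objectives = np.array([surrogate(c) for c in candidates])

def pareto_front(obj):
    # Indices of points not dominated by any other point (minimization).
    keep = []
    for i, fi in enumerate(obj):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(obj) if j != i)
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(objectives)
print(f"{len(front)} non-dominated settings out of {len(candidates)} candidates")

In the actual methodology an evolutionary algorithm would evolve the candidate set toward this front rather than sampling it at random.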
Procedia PDF Downloads 309716 Analysis of Metamaterial Permeability on the Performance of Loosely Coupled Coils
Authors: Icaro V. Soares, Guilherme L. F. Brandao, Ursula D. C. Resende, Glaucio L. Siqueira
Abstract:
Electrical energy can be wirelessly transmitted through resonant coupled coils that operate in the near-field region. Since the field in this region has an evanescent character, the efficiency of Resonant Wireless Power Transfer (RWPT) systems decreases proportionally with the inverse cube of the distance between the transmitter and receiver coils. The commercially available RWPT systems are restricted to short- and mid-range applications in which the distance between coils is less than or equal to the coil size. An alternative to overcome this limitation is applying metamaterial structures to enhance the coupling between coils, thus reducing the field decay over the distance between them. Metamaterials can be conceived as composite materials with periodic or non-periodic structure whose unconventional electromagnetic behaviour is due to their unit cell arrangement and chemical composition. This new kind of material has been used in frequency selective surfaces, invisibility cloaks and leaky-wave antennas, among other applications. However, for RWPT they are mainly applied as superlenses, which are lenses that can overcome the optical diffraction limit and are made of left-handed media, that is, media with negative magnetic permeability and electric permittivity. As RWPT systems usually operate at wavelengths of hundreds of meters, the metamaterial unit cell size is much smaller than the wavelength. In this case, the electric and magnetic fields are decoupled; therefore, the double negative condition for superlenses is not required, and a negative magnetic permeability is enough to produce an artificial magnetic medium. In this work, the influence of the magnetic permeability of a metamaterial slab inserted between two loosely coupled coils is studied in order to find the condition that leads to the maximum transmission efficiency. The metamaterial used is formed by a subwavelength unit cell that consists of a capacitor-loaded split ring with an inner spiral, designed and optimized using the software Computer Simulation Technology. The unit cell permeability is experimentally characterized by the ratio of the transmission parameters between coils measured with and without the presence of the metamaterial slab. Early measurement results show that the transmission coefficient at the resonant frequency after the inclusion of the metamaterial is about three times higher than with just the two coils, which confirms the enhancement that this structure brings to RWPT systems. Keywords: electromagnetic lens, loosely coupled coils, magnetic permeability, metamaterials, resonant wireless power transfer, subwavelength unit cells
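A crude way to see why a higher effective permeability between the coils helps is a lumped circuit model of two identical series-RLC loops, with the metamaterial slab represented only as an increase in the effective coupling coefficient k; the Python sketch below uses illustrative component values, not the measured coil and unit cell parameters.

import numpy as np

def transfer_efficiency(k, f=6.78e6, L=10e-6, R_coil=0.5, R_src=50.0, R_load=50.0):
    # Load power over source available power for two identical series-RLC loops.
    w = 2 * np.pi * f
    C = 1.0 / (w ** 2 * L)                  # tune both coils to resonance at f
    M = k * L                               # mutual inductance for identical coils
    Z1 = R_src + R_coil + 1j * w * L + 1.0 / (1j * w * C)
    Z2 = R_load + R_coil + 1j * w * L + 1.0 / (1j * w * C)
    V = 1.0                                 # source amplitude
    I2 = -1j * w * M * V / (Z1 * Z2 + (w * M) ** 2)   # receiver loop current
    P_load = 0.5 * abs(I2) ** 2 * R_load
    P_available = abs(V) ** 2 / (8 * R_src)
    return P_load / P_available

print("without slab (k = 0.01):", round(transfer_efficiency(0.01), 3))
print("with slab    (k = 0.03):", round(transfer_efficiency(0.03), 3))

Raising k in this toy model increases the power reaching the load, mirroring qualitatively the improvement observed in the measured transmission coefficient.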
Procedia PDF Downloads 146715 Sludge Marvel (Densification): The Ultimate Solution For Doing More With Less Effort!
Authors: Raj Chavan
Abstract:
At present, the United States is home to more than 14,000 Water Resource Recovery Facilities (WRRFs), of which approximately 35% have implemented nutrient limits of some kind. These WRRFs contribute 10 to 15% of the total nutrient burden to surface waters in the United States and account for approximately 1% of total power demand and 2% of total greenhouse gas (GHG) emissions. Several factors have pushed the development of densification technologies toward more compact and energy-efficient nutrient removal processes. Existing facilities are being subjected to stricter nutrient removal requirements prior to surface water discharge, which requires either capacity expansion or biomass densification for greater treatability within the same footprint. Densification of activated sludge as a method for nutrient removal and process intensification at WRRFs has garnered considerable attention in recent times. The biological processes take place within the aerobic sludge granules, which form the basis of the technology. The possibility of generating granular sludge through continuous (or conventional) activated sludge (CAS) processes, or of densifying biomass by transforming activated sludge flocs into denser biomass aggregates, as an exceptionally efficient intensification technique has generated considerable interest. This presentation aims to furnish attendees with a foundational comprehension of densification through the illustration of practical concerns and insights. The following subjects will be discussed: What are potential techniques for producing and preserving densified granules? What processes are responsible for the densification of biological flocs? How do physical selectors contribute to the process of biological flocs becoming denser? What viable strategies exist for the management of densified biological flocs, and which design parameters of physical selection influence the retention of densified biological flocs? How can operational solutions for floc and granule customization be determined in order to meet capacity and performance objectives? The answers to these pivotal questions will be derived from existing full-scale treatment facilities, bench-scale and pilot-scale investigations, and existing literature data. By the conclusion of the presentation, the audience will possess a fundamental comprehension of the densification concept and its significance in attaining effective effluent treatment. Additionally, case studies pertaining to the design and operation of densification procedures will be incorporated into the presentation. Keywords: densification, intensification, nutrient removal, granular sludge
Procedia PDF Downloads 74714 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits
Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi
Abstract:
The New Zealand White breed of rabbit is one of the most commonly used, well-adapted exotic breeds in India. Earlier studies were limited to analyzing the environmental factors affecting growth and reproductive performance. In the present study, the population of New Zealand White rabbits in a closed herd was evaluated for its genetic structure. Pedigree data (n=2508) covering 18 years (1995-2012) were utilized for the study. Pedigree analysis and the estimates of population genetic parameters based on gene origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed that the mean values of the generation interval, the coefficient of inbreeding and the equivalent inbreeding were 1.489 years, 13.233 percent and 17.585 percent, respectively. The proportion of the population that was inbred was 100 percent. The estimated mean values of the average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percentage increase in inbreeding over generations was 1.94, 3.06 and 3.98 when estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors explaining 50% of the genes (fₐ₅₀) in the gene pool of the reference population was 4, which might have led to the reduction in genetic variability and the increased amount of inbreeding. The extent of the genetic bottleneck was assessed by calculating the effective number of founders (fₑ) and the effective number of ancestors (fₐ); the fₑ/fₐ ratio was 1.1, which is indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15, with an average of 7.9, and the average number of equivalent generations traced was 5.6, indicating a fairly good depth of pedigree. The realized effective population size was 14.93, which is very critical, and with the increasing trend of inbreeding, the situation is expected to be worse in the future. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent; animals with higher GCI values can be used to maintain a balanced contribution from the founders. From the study, it was evident that the herd was completely inbred, with a very high inbreeding coefficient, and that the effective population size was critical. Recommendations were made to reduce the probability of deleterious effects of inbreeding and to improve the genetic variability in the herd. The present study can help in carrying out similar studies to meet the demand for animal protein in developing countries. Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics
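For orientation on how the realized effective population size relates to the inbreeding figures above, the sketch below uses the ENDOG-style individual increase in inbreeding, ΔF = 1 − (1 − F)^(1/(t−1)) with t the equivalent complete generations, and Ne ≈ 1/(2·mean ΔF); the four animals listed are hypothetical, so the output is only illustrative and does not reproduce the study's estimate of 14.93, which averages ΔF over the whole reference population.

def individual_delta_f(F, equiv_generations):
    # Individual increase in inbreeding from the inbreeding coefficient F and
    # the number of equivalent complete generations in the animal's pedigree.
    return 1.0 - (1.0 - F) ** (1.0 / (equiv_generations - 1.0))

# hypothetical animals: (inbreeding coefficient, equivalent generations)
animals = [(0.132, 5.6), (0.180, 6.0), (0.100, 5.0), (0.160, 5.8)]
delta_fs = [individual_delta_f(F, t) for F, t in animals]
mean_delta_f = sum(delta_fs) / len(delta_fs)
ne_realized = 1.0 / (2.0 * mean_delta_f)
print(f"mean dF = {mean_delta_f:.4f}, realized Ne = {ne_realized:.1f}")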
Procedia PDF Downloads 293713 Simulation of Wet Scrubbers for Flue Gas Desulfurization
Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra
Abstract:
Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model, where all three main contributions are taken into account and resolved using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy and momentum, which affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have all been resolved using numerous sub-models as well. The continuous gas phase has been coupled with the liquid phases using source terms, by an approach where the two software packages are coupled using a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm has been used to tune numerous model parameters to match the experimental results. The CFD model needed to be fast to evaluate in order to apply this optimization routine, in which approximately 1000 simulations were needed. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, where the data and source term exchange increases the computational requirements by approximately 5%. This allows for exploiting the benefits of both software programs. Keywords: desulfurization, discrete phase, scrubber, wall film
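The pattern search used for the parameter tuning can be illustrated with a simple compass-search variant; in the Python sketch below, the quadratic mismatch function is only a stand-in for the error between the coupled OpenFOAM/MATLAB model and the 16 experimental tests, and the step sizes and tolerances are assumptions.

import numpy as np

def mismatch(params):
    # Placeholder for the discrepancy between simulated and measured scrubber
    # performance; in the real workflow this would launch a CFD evaluation.
    target = np.array([0.8, 1.5, 0.3])
    return float(np.sum((params - target) ** 2))

def pattern_search(x0, step=0.5, tol=1e-3, max_evals=500):
    x = np.asarray(x0, dtype=float)
    fx = mismatch(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):                 # poll along +/- each coordinate
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = mismatch(trial)
                evals += 1
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= 0.5                         # contract the mesh on failure
    return x, fx, evals

best, err, n_evals = pattern_search([0.0, 0.0, 0.0])
print(f"best parameters {np.round(best, 3)}, mismatch {err:.2e}, {n_evals} evaluations")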
Procedia PDF Downloads 265712 Antimicrobial Properties of SEBS Compounds with Copper Microparticles
Authors: Vanda Ferreira Ribeiro, Daiane Tomacheski, Douglas Naue Simões, Michele Pitto, Ruth Marlene Campomanes Santana
Abstract:
Indoor environments, such as car cabins and public transportation vehicles, are places where users are exposed to the local air quality. Microorganisms (bacteria, fungi, yeasts) enter these environments through windows and ventilation systems and may use the organic particles present as a growth substrate. In addition, atmospheric pollutants can act as potential carbon and nitrogen sources for some microorganisms. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPEs), fully recyclable and largely used in automotive parts. Metals such as copper and silver have biocidal activities, and the production of SEBS compounds by melt blending with these agents can be a good option for producing compounds for use in plastic parts of ventilation systems and automotive air-conditioning, in order to minimize the problems caused by the growth of pathogenic microorganisms. In this sense, the aim of this work was to evaluate the effect of copper microparticles as an antimicrobial agent in compositions based on SEBS/PP/oil/calcite. Copper microparticles were used in weight proportions of 0%, 1%, 2% and 4%. The compounds were prepared using a co-rotating double screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The processing parameters were a screw rotation rate of 300 rpm with a temperature profile between 150 and 190°C. The SEBS-based TPE compounds were injection molded. The compound emissions were characterized by a gravimetric fogging test. The compounds were characterized by physical (density and staining by contact), mechanical (hardness and tensile properties) and rheological (melt volume rate – MVR) properties. Antibacterial properties were evaluated against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli) strains. To evaluate the activity toward fungi, Aspergillus niger (A. niger), Candida albicans (C. albicans), Cladosporium cladosporioides (C. cladosporioides) and Penicillium chrysogenum (P. chrysogenum) were chosen. The results of the biological tests showed a reduction in bacteria of up to 88% for E. coli and up to 93% for S. aureus. The tests with fungi showed no conclusive results because the sample without copper also demonstrated inhibition of the development of these microorganisms. The copper addition did not cause significant variations in the mechanical properties, the MVR or the emission behavior of the compounds. The density increases with the copper content of the compounds. Keywords: air conditioner, antimicrobial, copper, SEBS
Procedia PDF Downloads 282