Search results for: Virtual Machine Software (VMware)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3582

42 Experimental Analysis of the Influence of Water Mass Flow Rate on the Performance of a CO2 Direct-Expansion Solar Assisted Heat Pump

Authors: Sabrina N. Rabelo, Tiago de F. Paulino, Willian M. Duarte, Samer Sawalha, Luiz Machado

Abstract:

Energy use is one of the main indicators of the economic and social development of a country, reflecting directly on the quality of life of the population. The expansion of energy use, together with the depletion of fossil resources and the poor efficiency of energy systems, has led many countries in recent years to invest in renewable energy sources. In this context, the solar-assisted heat pump has become very important in the energy industry, since it can transfer heat energy from the sun to water or another absorbing source. The direct-expansion solar assisted heat pump (DX-SAHP) water heater operates by receiving solar energy incident on a solar collector, which serves as the evaporator in a refrigeration cycle, and the energy rejected by the condenser is used for water heating. In this paper, a DX-SAHP using carbon dioxide as refrigerant (R744) was assembled, and the influence of the variation of the water mass flow rate on the system was analyzed. Parameters such as high-side pressure, water outlet temperature, gas cooler outlet temperature, evaporator temperature, and the coefficient of performance were studied. The main components used to assemble the heat pump were a reciprocating compressor, a gas cooler consisting of a countercurrent concentric-tube heat exchanger, a needle valve, and an evaporator consisting of a copper bare flat-plate solar collector designed to capture direct and diffuse radiation. Routines were developed in LabVIEW to collect the data and in MATLAB, using CoolProp, to calculate the thermodynamic properties. The measured coefficient of performance ranged from 3.2 to 5.34. It was observed that, at higher water mass flow rates, the water outlet temperature decreased and, consequently, the coefficient of performance of the system increased, since the heat transfer in the gas cooler is higher. In addition, the high-side pressure of the system and the CO2 gas cooler outlet temperature decreased. The heat pump using carbon dioxide as refrigerant, especially when operating with solar radiation, proved to be an efficient, renewable-energy-based system for heating residential water to temperatures between 40 °C and 80 °C, compared to electrical heaters.
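
A minimal illustration of the kind of COP calculation described above, assuming hypothetical state-point measurements and using the CoolProp property library (the study called CoolProp from MATLAB; this sketch does the equivalent in Python, and all numbers are illustrative placeholders, not the experimental data):

```python
# Hypothetical sketch: heating COP of a transcritical CO2 (R744) heat pump
# estimated from assumed state points around the compressor and gas cooler.
from CoolProp.CoolProp import PropsSI

p_evap, T_suction = 4.0e6, 288.15      # Pa, K: compressor suction (evaporator outlet)
p_high, T_discharge = 9.0e6, 353.15    # Pa, K: compressor discharge (gas cooler inlet)
T_gc_out = 313.15                      # K: CO2 temperature at gas cooler outlet

h_suction = PropsSI('H', 'P', p_evap, 'T', T_suction, 'CO2')
h_discharge = PropsSI('H', 'P', p_high, 'T', T_discharge, 'CO2')
h_gc_out = PropsSI('H', 'P', p_high, 'T', T_gc_out, 'CO2')

q_gc = h_discharge - h_gc_out          # specific heat rejected to the water, J/kg
w_comp = h_discharge - h_suction       # specific compressor work, J/kg
print(f"COP = {q_gc / w_comp:.2f}")    # heating COP = heat delivered / work input
```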

Keywords: Water mass flow rate, R-744, heat pump, solar evaporator, water heater.

41 The Advancement of Smart Cushion Product and System Design Enhancing Public Health and Well-Being at Workplace

Authors: Dosun Shin, Assegid Kidane, Pavan Turaga

Abstract:

This research project brings together experts in multiple disciplines, combining product design, sensor design, algorithms, and health intervention studies to develop a product and system that help reduce the amount of time spent sitting at the workplace. This paper illustrates ongoing improvements to the prototypes the research team developed in the initial research, including working prototypes with a software application that were developed and demonstrated for users. Additional modifications were made to improve functionality, aesthetics, and ease of use, which are discussed in this paper. Extending the foundations created in the initial phase, our approach sought to further improve the product by conducting additional human factors research, studying deficiencies in competitive products, testing various materials and forms, developing working prototypes, and obtaining feedback from additional potential users. The solution consists of an aesthetically pleasing seat cover cushion that easily attaches to the common office chairs found in most workplaces, ensuring that a wide variety of people can use the product. The product discreetly contains sensors that track when the user sits on their chair, sending information to a phone app that triggers reminders for users to stand up and move around after sitting for a set amount of time. This paper also presents the analysis of typical office aesthetics and the selection of materials, colors, and forms that complement the working environment. Comfort and ease of use remained high priorities as the design team sought to provide a product and system that integrate into the workplace. As the research team continues to test, improve, and implement this solution for the sedentary workplace, it seeks to create a viable product that acts as an impetus for a more active workday and lifestyle, further decreasing the proliferation of chronic disease and health issues among sedentary working people. This paper describes in detail the processes of engineering, product design, methodology, and testing results.

Keywords: Anti-sedentary work behavior, new product development, sensor design, health intervention studies.

40 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis

Authors: Liliia N. Butymova, Vladimir Ya Modorskii

Abstract:

To ensure efficient operation of a gas transmittal compressor unit (GCU), leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Mitigating vibration therefore allows the LP gap to be minimized, so it is worthwhile to study how processes in the dynamic gas-structure system influence LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics and the influence of the latter on the rotor structure within a unidirectionally coupled dynamic fluid-structure interaction (FSI) problem. The dependence of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations was studied for various gas speeds and pressures, shaft rotation speeds, vibration amplitudes, and working media. The multi-processor ANSYS CFX code was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-performance computing cluster. The vibrating deformed shaft is replaced by a rigid profile that moves up and down in the fixed annulus according to a prescribed harmonic law; a nonstationary gas-dynamic problem is then solved to determine the time dependence of the total gas-dynamic force acting on the shaft. A pressure increase from 0.1 to 10 MPa causes the amplitude and frequency of the gas-dynamic force oscillations to grow, while the phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2; the damping constant reaches its maximum at a gap pressure of 1 MPa. Increasing the shaft oscillation frequency from 50 to 150 Hz at P = 10 MPa causes the gas-dynamic force oscillation amplitude to grow; the damping constant is largest at 50 Hz, equaling 1.012. Increasing the shaft vibration amplitude from 20 to 80 µm at P = 10 MPa raises the gas-dynamic force amplitude by up to 20 times, and the damping constant increases from 0.092 to 0.251. Calculations for various working substances (methane, perfect gas, air at 25 °C) show that at P = 0.1 MPa the minimum persistent gas-dynamic force oscillation amplitude is observed in methane and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. At P = 10 MPa, in contrast, the maximum gas-dynamic force oscillation amplitude is observed in methane and the minimum in air, and air exhibits surging. Increasing the leakage speed through the LP from 0 to 20 m/s at P = 0.1 MPa causes the gas-dynamic force oscillation amplitude to decrease by three orders of magnitude, while the oscillation frequency and the phase shift double and stabilize. Increasing the leakage speed from 0 to 20 m/s at P = 1 MPa causes the gas-dynamic force oscillation amplitude to decrease by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. The flow rate therefore strongly influences the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on the operating conditions: as pressure grows, vibrations are most affected in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
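
As a rough illustration of how a phase shift such as the 3π/4 to π/2 values reported above can be extracted from sampled time histories, the following sketch (not part of the authors' ANSYS CFX workflow) estimates the phase of the gas-dynamic force relative to the shaft displacement from the cross-spectrum at the driving frequency, using synthetic placeholder signals:

```python
# Illustrative phase-shift estimation between two harmonic signals via FFT.
import numpy as np

fs = 10_000.0                      # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f_drive = 50.0                     # shaft oscillation frequency, Hz

displacement = 40e-6 * np.sin(2 * np.pi * f_drive * t)            # m (placeholder)
force = 1.5 * np.sin(2 * np.pi * f_drive * t - 3 * np.pi / 4)     # N, lags by 3*pi/4

X = np.fft.rfft(displacement)
Y = np.fft.rfft(force)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
k = np.argmin(np.abs(freqs - f_drive))          # bin closest to the driving frequency

phase_shift = np.angle(Y[k] * np.conj(X[k]))    # force phase relative to displacement
print(f"phase shift = {phase_shift / np.pi:.2f} * pi rad")   # ~ -0.75*pi here (force lags)
```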

Keywords: Aeroelasticity, labyrinth packings, oscillation phase shift, vibration.

39 Effect of Different Contaminants on Mineral Insulating Oil Characteristics

Authors: H. M. Wilhelm, P. O. Fernandes, L. P. Dill, C. Steffens, K. G. Moscon, S. M. Peres, V. Bender, T. Marchesan, J. B. Ferreira Neto

Abstract:

Deterioration of insulating oil is a natural process that occurs during transformer operation. However, this process can be accelerated by factors such as oxygen, high temperatures, metals, and moisture, which rapidly reduce the oil's insulating capacity and favor transformer faults. Parts of the construction materials of a transformer can degrade and yield soluble compounds and insoluble particles that shorten the equipment's life. Physicochemical tests, dissolved gas analysis (including propane, propylene, and butane), determination of volatile and furanic compounds, and quantitative and morphological analyses of particulate matter are proposed in this study in order to correlate the degradation of transformer construction materials with insulating oil characteristics. The investigation involves medium-temperature overheating simulations by means of an electric resistance wrapped with the following materials immersed in mineral insulating oil: test I) copper, tin, lead, and paper (heated at 350-400 °C for 8 h); test II) only copper (at 250 °C for 11 h); and test III) only paper (at 250 °C for 8 h and at 350 °C for 8 h). A separate experiment simulates an electric arc involving copper, using an electric welding machine at two distinct energy settings (low and high). The analysis results showed that dielectric loss was highest in the sample from test I, which also presented a higher neutralization index and higher levels of hydrogen and hydrocarbons, including propane and butane. The test III oil presented the highest particle count; in addition, ferrographic analysis revealed contamination with fibers and carbonized paper. However, these particles had little influence on the oil's physicochemical parameters (dielectric loss and neutralization index) and on gas production, which was very low. The test II oil showed high levels of methane, ethane, and propylene, indicating the effect of the metal on oil degradation. CO2 and CO were formed in the highest concentrations in test III, as expected. Regarding volatile compounds, acetone, benzene, and toluene, which are oil oxidation products, were detected in test I, while methanol was identified in test III due to cellulose degradation, as expected. The electric arc simulation test showed the greatest oil oxidation in the presence of copper and at high temperature, since these samples had very high concentrations of hydrogen, ethylene, and acetylene. The particle count was also very high, showing the greatest release of copper under such conditions. When comparing high and low energy, the former presented more hydrogen, ethylene, and acetylene, and its results were more similar to those of test I, pointing out that the generation of different particles can be the cause of faults such as electric arcing. Ferrography showed copper and exfoliation particles more clearly than in the other samples. Therefore, in this study, by using different combined analytical techniques, it was possible to correlate insulating oil characteristics with possible contaminants that can lead to transformer failure.

Keywords: Ferrography, gas analysis, insulating mineral oil, particle contamination, transformer failures.

38 Satellite Interferometric Investigations of Subsidence Events Associated with Groundwater Extraction in Sao Paulo, Brazil

Authors: B. Mendonça, D. Sandwell

Abstract:

The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been drilling wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events that can pose a serious threat to the population, such as subsidence. Radar imaging techniques (InSAR) allow continuous investigation of such phenomena. The analysis in the present study is based on 23 SAR images acquired between October 2007 and March 2011 by the ALOS-1 spacecraft. Data processing was carried out with the GMTSAR software, using the InSAR technique to create pairs of interferograms of ground displacement over different time spans. First results show a correlation between the location of 102 wells registered in 2009 and ground displacement signals of -90 millimeters (mm) or lower in the region. The longest-time-span interferogram obtained covers October 2007 to March 2010. From that interferogram, it was possible to detect the average displacement velocity in millimeters per year (mm/y) and the areas of the MRSP in which strong signals have persisted. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), Greater Sao Paulo, Itaquera, and Sao Caetano do Sul. The signals covered areas between 0.6 km and 1.65 km in length. All areas are located above a sedimentary aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. The places most likely to be suffering from stronger subsidence, on the other hand, are those in Greater Sao Paulo and Guarulhos, right beside the Sao Paulo International Airport, where the observed displacement rates range from 35 mm/y to 40 mm/y. Previous investigations of water use at the International Airport highlight the risks of the excessive water extraction that was being carried out through 9 deep wells. Therefore, subsidence events are likely to occur and to cause serious damage in the area. This study reveals a situation that has not been examined with proper importance in the city, given its social and economic consequences. Since the data were only available until 2011, the question that remains is whether the situation still persists. The study does, however, reaffirm a risk scenario at the Sao Paulo International Airport that needs further investigation.
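
For orientation only, the following sketch shows the standard conversion from unwrapped interferometric phase to line-of-sight displacement and average velocity; the ALOS-1 L-band wavelength (~0.236 m), the sign convention, and the phase values are assumptions for illustration, not outputs of the study's GMTSAR processing:

```python
# Back-of-the-envelope phase-to-displacement conversion for an L-band interferogram.
import numpy as np

wavelength = 0.236                                # m, assumed ALOS-1 PALSAR wavelength
unwrapped_phase = np.array([4.5, 6.2, 8.0])       # rad, placeholder pixel values

# Each 2*pi of phase corresponds to wavelength/2 of line-of-sight range change.
los_displacement_mm = -(wavelength / (4 * np.pi)) * unwrapped_phase * 1000.0

time_span_years = 2.4                             # e.g. October 2007 to March 2010
velocity_mm_per_year = los_displacement_mm / time_span_years
print(velocity_mm_per_year)                       # negative values ~ subsidence here
```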

Keywords: Ground subsidence, interferometric synthetic aperture radar (InSAR), Metropolitan Region of Sao Paulo, water extraction.

37 Water Quality Trading with Equitable Total Maximum Daily Loads

Authors: S. Jamshidi, E. Feizi Ashtiani, M. Ardestani

Abstract:

Waste load allocation (WLA) strategies usually aim to find economic policies for water resource management. Water quality trading (WQT) is an approach that uses a discharge permit market to reduce total environmental protection costs. This primarily requires assigning discharge limits known as total maximum daily loads (TMDLs), which are determined by monitoring organizations with respect to the receiving water quality and remediation capabilities. The purpose of this study is to compare two approaches to TMDL assignment for a WQT policy in the small catchment of the Haraz River in northern Iran. In the first approach, TMDLs are assigned uniformly to all point sources to keep the concentrations of BOD and dissolved oxygen (DO) at the standard level at a checkpoint (terminus point); this was simulated and controlled with the Qual2kw software. In the second scenario, TMDLs are assigned using a multi-objective particle swarm optimization (MOPSO) method in which the environmental violation in the river basin and the total treatment costs are minimized simultaneously. In both scenarios, the equity index and the WLA based on trading of discharge permits (TDP) are calculated. The comparative results showed that using economically optimized TMDLs (second scenario) yields slightly more cost savings than the uniform TMDL approach (first scenario): the former costs about 1 M$ annually, while the latter costs 1.15 M$. WQT can decrease these annual costs to 0.9 and 1.1 M$, respectively; in other words, these approaches may save 35% and 45% compared with a command-and-control policy. This means that using a multi-objective decision support system (DSS) may find a more economical WLA; however, its advantage is not necessarily significant in comparison with uniform TMDLs, possibly because of the similar impact factors of the dischargers in small catchments. Conversely, using uniform TMDLs for WQT brings more equity, so stakeholders are less likely to resent the difference between the TMDL and WQT allocations. In addition, in this case, uniform TMDLs would be much easier to monitor. Consequently, uniform TMDLs for the TDP market are recommended as a sustainable approach, while economically optimized TMDLs can be used for larger watersheds.

Keywords: Waste load allocation (WLA), Water quality trading (WQT), Total maximum daily loads (TMDLs), Haraz River, Multi objective particle swarm optimization (MOPSO), Equity.

36 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India

Authors: Kulin Dave, Kapil Mohan

Abstract:

Earthquakes are considered the most destructive rapid-onset disasters to which human beings are exposed. The losses they cause are sufficient to warrant careful consideration in the design of structures and facilities. Seismic hazard analysis is one such tool that can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. The Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zoning map of India and thus has high seismicity, which is why it was selected for analysis. In total, data from 8 bore-logs at different locations in and around Rapar district were studied. The soil engineering properties were analyzed, and relevant empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) of the soil layers. The soil was modeled using the pressure-dependent Modified Kondner-Zelasko (MKZ) model, and the reference curves used for fitting were Seed and Idriss (1970) for sand and Darendeli (2001) for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses were carried out, with the Masing hysteretic re/unloading formulation, for comparison. The commercially available DEEPSOIL v. 7.0 software was used for the analysis. In this study, an attempt is made to quantify the ground response in terms of the acceleration time-history generated at the top of the soil column, the response spectra at 5% damping, and the Fourier amplitude spectra. Moreover, the variation with depth of the peak ground acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio, and mobilized shear stress is also calculated. The study shows that the PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silts and silty clays by both analyses. The NL analysis gives more conservative estimates of maximum displacement than the EL analysis, while the maximum strains predicted by the two analyses are very close to each other. Overall, the NL analysis is more realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation, and mobilizes the stresses generated due to pore water pressure.
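
A minimal sketch of two relations commonly used in this type of analysis: the small-strain shear modulus from shear wave velocity, Gmax = ρVs², and a hyperbolic (MKZ-type) modulus reduction curve. The density, velocity, and curve parameters below are illustrative assumptions rather than the paper's fitted values:

```python
# Small-strain shear modulus and a hyperbolic modulus reduction curve.
import numpy as np

rho = 1900.0            # soil mass density, kg/m^3 (assumed)
vs = 250.0              # shear wave velocity, m/s (assumed)
g_max = rho * vs**2     # small-strain shear modulus, Pa
print(f"Gmax = {g_max / 1e6:.1f} MPa")

gamma = np.logspace(-6, -2, 5)        # shear strain levels
gamma_ref, s = 1e-4, 0.9              # assumed reference strain and curvature exponent
g_ratio = 1.0 / (1.0 + (gamma / gamma_ref)**s)   # modulus reduction G/Gmax
for g, r in zip(gamma, g_ratio):
    print(f"strain {g:.0e}  G/Gmax = {r:.3f}")
```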

Keywords: DEEPSOIL v 7.0, Ground Response Analysis, Pressure-Dependent Modified Kondner-Zelasko (MKZ) model, Response Spectra, Shear wave velocity.

35 The Efficiency of Mechanization in Weed Control in Artificial Regeneration of Oriental Beech (Fagus orientalis Lipsky.)

Authors: Tuğrul Varol, Halil Barış Özel

Abstract:

In this study, conducted in the Akçasu Forest Range District of the Devrek Forest Directorate, three methods used for weed control in the regeneration of degraded oriental beech forests were compared: weed control with labourer power, cover removal with a Hitachi F20 excavator, and weed control with agricultural equipment mounted on a Ferguson 240S agricultural tractor. The three methods were compared by determining work hours and standard durations per unit area (1 hectare). By evaluating the tasks performed with human and machine power in terms of duration, productivity, and cost, the aim was to determine the most productive method under the actual ecological conditions of the research field. Within the scope of the study, time studies were conducted for the three methods used in the weed control efforts. The implementations were evaluated by dividing them into work stages, and actual data were used in the cost calculations, applying the latest formulas and equations also used in developed countries. Analysis of variance (ANOVA) was used to determine whether there were statistically significant differences among the results obtained, and Duncan's test was used for grouping where differences were significant. According to the measurements and findings of this study, removing the weed layer from 1 hectare during regeneration of degraded oriental beech forests took 920 hours with labourer power, 15.1 hours with the excavator, and 60 hours with the tractor-mounted equipment. The corresponding costs per unit area (1 hectare) were 3220.00 TL for labourer power, 1250 TL for the excavator, and 1825 TL for the tractor-mounted equipment. According to these results, the use of the excavator for weed control in the regeneration of degraded oriental beech areas was the most productive method, in terms of both duration and cost, under the actual ecological conditions of the research field; a comparison of the reported figures is sketched below. These comparisons should be repeated for weed control in degraded forest fields with different ecological conditions in order to find the most efficient weed control method in each case. The findings will guide the technical staff of forestry directorates in determining the most effective and economical weed control method; more realistic data can then be used when preparing weed control budgets, with significant contributions to the national economy. The results of this and similar studies are also very important for developing short- and long-term forestry policies.
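
The per-hectare figures reported above can be compared directly; the short sketch below simply restates the abstract's durations and costs as ratios relative to the excavator:

```python
# Ratios of per-hectare duration and cost for the three weed control methods,
# using the figures reported in the abstract (no new data).
methods = {
    "labour":    {"hours_per_ha": 920.0, "cost_per_ha_tl": 3220.0},
    "excavator": {"hours_per_ha": 15.1,  "cost_per_ha_tl": 1250.0},
    "tractor":   {"hours_per_ha": 60.0,  "cost_per_ha_tl": 1825.0},
}

base = methods["excavator"]
for name, m in methods.items():
    time_ratio = m["hours_per_ha"] / base["hours_per_ha"]
    cost_ratio = m["cost_per_ha_tl"] / base["cost_per_ha_tl"]
    print(f"{name:9s}: {time_ratio:6.1f}x the excavator's time, "
          f"{cost_ratio:4.2f}x its cost")
```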

Keywords: Artificial regeneration, weed control, oriental beech, productivity, mechanization, man power, cost analysis.

34 Logistical Optimization of Nuclear Waste Flows during Decommissioning

Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. L. S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet

Abstract:

A large amount of technological equipment and many highly skilled workers have to be mobilized over long periods of time during nuclear decommissioning. The related operations generate complex waste flows and high inventory levels, associated with information flows of heterogeneous types. With more than 10 decommissioning operations ongoing in France and about 50 expected by 2025, a major challenge must be addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear-based energy lifecycle, since it has an environmental impact as well as an important influence on the electricity cost and therefore on the price for end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus mandatory to improve the quality, cost, and lead-time efficiency of these operations. The purpose of our project is to improve decommissioning management efficiency by developing a decision-support framework dedicated to planning nuclear facility decommissioning operations and optimizing waste evacuation by means of a logistics approach. The target is to create an easy-to-handle tool capable of i) predicting waste flows and proposing the best decommissioning logistics scenario and ii) managing information during all steps of the process and following its progress: planning, resources, delays, authorizations, saturation zones, waste volume, etc. In this article, we present our results from the simulation of nuclear waste flows during the decommissioning process, based on discrete-event simulation with the FlexSim 3D software. This approach was successfully tested, and our work confirms its ability to improve this type of industrial process by identifying the critical points of the chain and the corresponding improvement actions. This type of simulation, executed before the start of operations on the basis of a first design, allows 'what-if' process evaluation and helps ensure the quality of the process in an uncertain context. Simulating nuclear waste flows before evacuation from the site will help reduce the cost and duration of the decommissioning process by optimizing the planning and the use of resources, transitional storage, and expensive radioactive waste containers. Additional benefits are expected for the governance of waste evacuation, since it will enable a shared responsibility for the waste flows.
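
As a conceptual analogue of the waste-flow simulation (the project used FlexSim 3D; this sketch uses the open-source SimPy library instead), the following discrete-event model has waste packages arriving from dismantling and competing for a limited pool of transport containers; all rates and capacities are invented placeholders meant only to show the structure of such a simulation:

```python
# Toy discrete-event simulation of waste evacuation with a limited container pool.
import random
import simpy

SIM_HOURS = 24 * 30                  # simulate one month of operations

def waste_package(env, containers, waits):
    arrival = env.now
    with containers.request() as req:          # wait for a free container
        yield req
        waits.append(env.now - arrival)        # waiting time before evacuation
        yield env.timeout(random.expovariate(1 / 6.0))   # evacuation takes ~6 h

def generator(env, containers, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 4.0))   # a package every ~4 h on average
        env.process(waste_package(env, containers, waits))

random.seed(42)
env = simpy.Environment()
containers = simpy.Resource(env, capacity=2)   # two containers available
waits = []
env.process(generator(env, containers, waits))
env.run(until=SIM_HOURS)
print(f"{len(waits)} packages evacuated, mean wait {sum(waits)/len(waits):.1f} h")
```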

Keywords: Nuclear decommissioning, logistical optimization, decision-support framework, waste management.

33 Prominent Lipid Parameters Correlated with Trunk-to-Leg and Appendicular Fat Ratios in Severe Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Alterations in lipid parameters as well as in the fat distribution of the body are noteworthy during the evaluation of obesity stages. Total cholesterol (TC), triglycerides (TRG), low density lipoprotein-cholesterol (LDL-C), and high density lipoprotein-cholesterol (HDL-C) are the basic lipid fractions. Fat deposited in the trunk and extremities may give a considerable amount of information, and ratios such as the trunk-to-leg fat ratio (TLFR) and trunk-to-appendicular fat ratio (TAFR) are derived from the distinct fat distribution in these areas. In this study, lipid fractions, TLFR, and TAFR were evaluated, and the distinctions among healthy, obese (OB), and morbidly obese (MO) groups were investigated. Three groups [normal body mass index (N-BMI), OB, MO] were constituted, matched for age and sex. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University, and written informed consent forms were obtained from the parents of the participants. Anthropometric measurements (height, weight, waist circumference, hip circumference, head circumference, neck circumference) were recorded during the physical examination, and BMI values were calculated. Total, trunk, leg, and arm fat mass values were obtained by TANITA bioelectrical impedance analysis and were used to calculate TLFR and TAFR. Systolic (SBP) and diastolic blood pressures (DBP) were measured, and routine biochemical tests including lipid fractions were performed. Data were evaluated using SPSS software; a p value smaller than 0.05 was accepted as significant. There was no difference among the age values and gender ratios of the groups. No statistically significant difference was observed in terms of DBP, TLFR, or serum lipid fractions. Higher SBP values were measured in both OB and MO children than in those with N-BMI. TAFR showed a significant difference between the N-BMI and OB groups. Statistically significant increases were detected between the insulin values of the N-BMI group and those of the OB as well as MO groups. There were bivariate correlations between LDL-C and both TLFR and TAFR values in the MO group. When adjusted for SBP and DBP, partial correlations were calculated for LDL-TLFR as well as LDL-TAFR; much stronger partial correlations were obtained for the same pairs upon controlling for TRG and HDL-C. The much stronger partial correlations observed in MO children emphasize the potential transition from morbid obesity to metabolic syndrome. From these findings, it is concluded that LDL-C may be suggested as a discriminating parameter between OB and MO children.
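
For readers unfamiliar with the partial correlations mentioned above, the sketch below computes a first-order partial correlation from pairwise Pearson correlations (the study used SPSS and controlled for several covariates; this is a simplified, single-covariate illustration on random placeholder data):

```python
# First-order partial correlation between x and y controlling for z.
import numpy as np

def partial_corr(x, y, z):
    """r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))"""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(0)
ldl = rng.normal(100, 25, 60)                    # placeholder LDL-C values
tlfr = 0.01 * ldl + rng.normal(1.0, 0.1, 60)     # placeholder trunk-to-leg fat ratio
sbp = rng.normal(110, 10, 60)                    # placeholder covariate (SBP)
print(f"partial r(LDL, TLFR | SBP) = {partial_corr(ldl, tlfr, sbp):.3f}")
```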

Keywords: Children, lipid parameters, obesity, trunk-to-leg fat ratio, trunk-to-appendicular fat ratio.

32 Antimicrobial Properties of SEBS Compounds with Zinc Oxide and Zinc Ions

Authors: Douglas N. Simões, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana

Abstract:

The increasing demand for thermoplastic elastomers is related to their wide range of applications, such as the automotive, footwear, and wire and cable industries, adhesives and medical devices, cell phones, sporting goods, toys, and others. These materials are susceptible to microbial attack. Moisture and organic matter present in some areas (such as shower areas and sinks) provide favorable conditions for microbial proliferation, which contributes to the spread of diseases and reduces the product life cycle. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPE), fully recyclable and largely used in domestic items such as bath mats and toothbrushes (soft touch). Zinc oxide and zinc ions loaded into personal and home care products have become common in recent years due to their biocidal effect. In that sense, the aim of this study was to evaluate the effect of zinc as an antimicrobial agent in compounds based on SEBS/polypropylene/oil/calcite for use as refrigerator seals (gaskets), bath mats, and sink squeegees. Two zinc oxides from different suppliers (ZnO-Pe and ZnO-WR) and one masterbatch of zinc ions (M-Zn-ion) were used in proportions of 0%, 1%, 3%, and 5%. The compounds were prepared using a co-rotating twin-screw extruder (L/D ratio of 40/1 and 16 mm screw diameter), with the extrusion parameters kept constant for all materials. Test specimens were prepared using an injection molding machine, and a compound with no antimicrobial additive (standard) was also tested. The compounds were characterized by physical (density), mechanical (hardness and tensile), and rheological (melt flow rate, MFR) properties. The Japan Industrial Standard (JIS) Z 2801:2010 was applied to evaluate antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli), and the Brazilian Association of Technical Standards (ABNT) NBR 15275:2014 was used to evaluate antifungal properties against Aspergillus niger (A. niger), Aureobasidium pullulans (A. pullulans), Candida albicans (C. albicans), and Penicillium chrysogenum (P. chrysogenum). The microbiological assay showed a reduction of over 42% in the E. coli population and over 49% in the S. aureus population. The tests with fungi gave inconclusive results, because the sample without zinc also inhibited fungal development when tested against A. pullulans, C. albicans, and P. chrysogenum; in addition, the zinc-loaded samples showed worse results than the standard sample when tested against A. niger. The zinc addition did not produce significant variation in the mechanical properties. However, the density values increased with ZnO additive concentration and decreased slightly in the M-Zn-ion samples, and there were differences in the MFR results of all compounds compared to the standard.
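
As a loose illustration of how percentage reductions like those reported here can be derived from viable-cell counts on a control and a treated sample (a simplification of a JIS Z 2801-type assay; the CFU numbers are invented placeholders, not the study's measurements):

```python
# Percentage and log reduction in viable bacteria from colony counts.
import math

cfu_control = 2.0e5     # viable cells recovered from the zinc-free (standard) sample
cfu_treated = 1.1e5     # viable cells recovered from a zinc-loaded sample

reduction_pct = (cfu_control - cfu_treated) / cfu_control * 100.0
log_reduction = math.log10(cfu_control / cfu_treated)
print(f"reduction = {reduction_pct:.0f} %, log reduction = {log_reduction:.2f}")
```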

Keywords: Antimicrobial, home device, SEBS, zinc.

31 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought as well as of a previous study in which in-class testing, using a paired t-test, showed that gamification did significantly improve the students' understanding of the subject material. Gamification in the classroom can range from high-end computer simulation software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time from the faculty member, and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and a narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. The qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; it was therefore a low-risk activity in which students could subsequently self-assess their understanding of the subject material. 4. In the students' view, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that gamification should be attempted in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of respondents believed that the twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study aims to determine whether the scheduling of the gamification was correlated with the percentage of students not wanting to engage in the process. It also aims to determine the incremental level of time invested in classroom gamification beyond which no material incremental benefit accrues to the student, and whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing that the gamification process increased their curricular knowledge.
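
A minimal sketch of the paired t-test referred to above, using SciPy on fabricated pre/post quiz scores rather than the actual class data:

```python
# Paired t-test comparing each student's pre- and post-gamification scores.
from scipy import stats

pre_scores  = [55, 62, 48, 70, 66, 59, 73, 61, 50, 68]   # placeholder scores
post_scores = [63, 70, 55, 74, 71, 64, 80, 66, 58, 75]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 would suggest a significant gain
```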

Keywords: Gamification, inexpensive, qualitative advantages, post-secondary.

30 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
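
To illustrate how a PF can be obtained by Monte-Carlo sampling, the sketch below evaluates a simple planar-sliding factor of safety with random cohesion and friction angle; the distributions and geometry are assumptions for demonstration, not values from the Western Australian case study:

```python
# Monte-Carlo estimate of probability of failure for a planar sliding block:
# FS = (c*A + W*cos(a)*tan(phi)) / (W*sin(a)), PF = P(FS < 1).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

c = rng.normal(25.0, 5.0, n)                    # cohesion, kPa (assumed distribution)
phi = np.radians(rng.normal(35.0, 3.0, n))      # friction angle, rad (assumed)
alpha = np.radians(40.0)                        # failure plane dip (deterministic)
area = 100.0                                    # sliding plane area, m^2
weight = 9000.0                                 # block weight, kN

fs = (c * area + weight * np.cos(alpha) * np.tan(phi)) / (weight * np.sin(alpha))
pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```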

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

29 Response of Local Cowpea to Intra Row Spacing and Weeding Regimes in Yobe State, Nigeria

Authors: A. G. Gashua, T. T. Bello, I. Alhassan, K. K. Gwiokura

Abstract:

Weeds are known to interfere seriously with crop growth, thereby affecting the productivity and quality of crops. Crops also compete for natural growth resources if they are not adequately spaced, which likewise affects their performance. Farmers grow cowpea in mixtures with cereals, and this is known to affect its yield. For this reason, a field experiment was conducted at the Yobe State College of Agriculture Gujba, Damaturu station, in the 2014 and 2015 rainy seasons to determine the appropriate intra-row spacing and weeding regime for optimum growth and yield of cowpea (Vigna unguiculata L.) in pure stand in a Sudan Savanna ecology. The treatments consisted of three levels of spacing within rows (20 cm, 30 cm, and 40 cm) and four weeding regimes (none; once at 3 weeks after sowing (WAS); twice at 3 and 6 WAS; thrice at 3, 6, and 9 WAS), arranged in a Randomized Complete Block Design (RCBD) and replicated three times. The variety used was the local cowpea variety (white, early, and spreading) commonly grown by farmers. The growth and yield data were collected and subjected to analysis of variance using SAS software, and the significant means were ranked by the Student-Newman-Keuls (SNK) test. The findings of this study revealed better crop performance in 2015 than in 2014 despite poor soil conditions. Intra-row spacing significantly influenced vegetative growth, especially the number of main branches, number of leaves, and canopy spread at 6 WAS and 9 WAS, with the highest values obtained at the wider spacing (40 cm); the values obtained in 2015 doubled those of 2014 in most cases. Spacing also significantly affected the number of pods in 2015, seed weight in both years, and grain yield in 2014, with the highest values obtained when the crop was spaced at 30-40 cm. Similarly, weeding regime significantly influenced almost all the growth attributes of cowpea, with higher values obtained where cowpea was weeded three times at 3-week intervals, though statistically similar results were obtained even where cowpea was weeded twice. Weeding also affected the entire yield and yield components in 2015, with the highest values obtained with increased weeding. Based on these findings, it is recommended that spreading cowpea varieties be grown at 40 cm (or wider) spacing within rows and be weeded twice at three-week intervals for better crop performance in related ecologies.
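
A minimal sketch of the one-way ANOVA step described above, using SciPy instead of SAS and invented plot values (a significant result would then be followed by a mean-separation procedure such as the Student-Newman-Keuls test used in the paper):

```python
# One-way ANOVA comparing a yield attribute across three intra-row spacings.
from scipy import stats

spacing_20cm = [1.10, 1.25, 0.98]     # e.g. grain yield per plot, t/ha (placeholders)
spacing_30cm = [1.45, 1.52, 1.38]
spacing_40cm = [1.50, 1.61, 1.47]

f_stat, p_value = stats.f_oneway(spacing_20cm, spacing_30cm, spacing_40cm)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```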

Keywords: Intra row spacing, local cowpea, Nigeria, weeding.

28 Calcium Biochemical Indicators in a Group of Schoolchildren with Low Socioeconomic Status from Barranquilla, Colombia

Authors: Carmiña L. Vargas-Zapata, María A. Conde-Sarmiento, Maria Consuelo Maestre-Vargas

Abstract:

Calcium is an essential element for good growth and development of the organism, and its requirement is increased at school age. In low socio-economic populations of developing countries such as Colombia, schoolchildren may have a dietary deficiency of this mineral, which could be reflected in calcium biochemical indicators, bone alterations, and anthropometric indicators. The objective of this investigation was to evaluate some calcium biochemical indicators in a group of schoolchildren of low socioeconomic level from the city of Barranquilla and to correlate them with the body mass index (BMI). Sixty apparently healthy schoolchildren aged 7 to 15 years were selected from the Jesus's Heart Educational Institution in Barranquilla, Atlántico; none suffered from infectious or gastrointestinal diseases, drank alcohol, smoked or used hallucinogenic substances, or had taken calcium supplements or any other substance compromising bone metabolism in the last six months. The research was approved by the ethics committee of the Universidad del Atlántico. The selected children were invited to donate blood and urine samples after a 12-hour fast; the serum was separated by centrifugation and frozen at −20 °C until analysis, and the same was done with the urine samples. On the day of the biological collections, the weight and height of the students were measured to determine their nutritional status by BMI using the WHO tables. Calcium concentrations in serum and urine (SCa, UCa), total and bone-specific alkaline phosphatase activities (SAPT, SBAP), and urinary creatinine (UCr) were determined by spectrophotometric methods using commercial kits. Osteocalcin and cross-linked N-telopeptides of type I collagen (NTx-1) in serum were measured with an enzyme-linked immunosorbent assay. For the statistical analysis, Statgraphics Centurion XVII software was used. 63% (n = 38) and 37% (n = 22) of the participants were male and female, respectively, and 78% (n = 47), 5% (n = 3), and 17% (n = 10) had normal, malnourished, and high nutritional status, respectively. The mean levels of the evaluated indicators were (mean ± SD): 9.50 ± 1.06 mg/dL for SCa; 181.3 ± 64.3 U/L for SAPT; 143.8 ± 73.9 U/L for SBAP; 9.0 ± 3.48 ng/mL for osteocalcin; and 101.3 ± 12.8 ng/mL for NTx-1. The UCa level was 12.8 ± 7.7 mg/dL, which, adjusted for creatinine, ranged from 0.005 to 0.395 mg/mg. Considering the serum calcium values, approximately 7% of the schoolchildren were hypocalcemic, 16% hypercalcemic, and 77% normocalcemic. The indicators evaluated did not correlate with BMI. Low values were observed for urinary calcium excretion and high values for NTx-1, suggesting that mechanisms such as increased renal calcium retention and increased bone remodeling may be contributing to calcium homeostasis.

Keywords: Calcium, calcium biochemical indicators, schoolchildren, low socioeconomic status.

27 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter

Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer

Abstract:

This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient temperature test runs, from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out in which the collecting electrode was grounded, having been disconnected from the static generator; the collecting efficiency fell significantly, and for that reason this approach was not pursued further. The collection efficiencies during the ambient temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of the combustion temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at the inlet and outlet for a number of four-day continuous ambient temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and the ImageJ computer software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests; the average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient temperature test runs apart from two, which collected airborne dust from within the laboratory. Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at the exit when the unit was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
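
The mass-balance efficiency used for the ambient temperature runs reduces to a one-line calculation; the sketch below uses placeholder masses, not the test data:

```python
# Collection efficiency by mass balance: efficiency = mass captured / mass fed.
mass_in_g = 50.0         # dry PM fed into the precipitator (placeholder)
mass_out_g = 0.4         # dry PM escaping at the outlet (placeholder)

efficiency = (mass_in_g - mass_out_g) / mass_in_g * 100.0
print(f"collection efficiency = {efficiency:.2f} %")   # 99.20 % for these numbers
```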

Keywords: Electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ.

26 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support operational decisions in Emergency Medical Service (EMS) systems regarding the assignment of medical emergency requests to Emergency Departments (ED). This problem is called "hospital selection" and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology consists of a first phase reviewing the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. The literature analysis showed that current studies mainly focus on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a mission; all ED-related issues are therefore excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes travel time and frees up the ambulance in the shortest possible time. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that also considers the expected time performance of the subsequent phases of the process, such as the case mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to compare different hospital selection policies. The model was implemented with the AnyLogic software and validated on a realistic case. The hospital selection policy that returned the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the beginning of the clinical evaluation by the doctor in the ED. Finally, two approaches were compared: a static approach, based on a retrospective estimation of the TTP, and a dynamic approach, based on a predictive estimation of the TTP obtained with a constantly updated Winters forecasting model. Findings reveal that minimizing the TTP is the best hospital selection policy: it significantly reduces service throughput times in the ED with a negligible increase in travel time. Furthermore, it provides an immediate view of the saturation state of the ED and takes into account the case mix present in the ED (i.e., the different triage codes), as different severity codes correspond to different service throughput times. Moreover, a predictive approach is more reliable for TTP estimation than a retrospective one. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
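
A schematic sketch of a minimum-TTP selection rule of the kind evaluated above (not the AnyLogic model itself): each candidate ED is scored by travel time plus forecast waiting time, and the request is assigned to the smallest total; all numbers are placeholders:

```python
# Minimum-TTP hospital selection: travel time + forecast door-to-doctor wait.
from dataclasses import dataclass

@dataclass
class EmergencyDepartment:
    name: str
    travel_min: float          # estimated ambulance travel time
    forecast_wait_min: float   # forecast waiting time before clinical evaluation

def select_by_ttp(eds):
    return min(eds, key=lambda ed: ed.travel_min + ed.forecast_wait_min)

eds = [
    EmergencyDepartment("ED-A", travel_min=8.0,  forecast_wait_min=45.0),
    EmergencyDepartment("ED-B", travel_min=14.0, forecast_wait_min=20.0),  # chosen
    EmergencyDepartment("ED-C", travel_min=6.0,  forecast_wait_min=60.0),
]
best = select_by_ttp(eds)
print(f"assign to {best.name}, expected TTP = {best.travel_min + best.forecast_wait_min} min")
```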

Keywords: Emergency medical services, hospital selection, discrete event simulation, forecast model.

25 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion are reviewed. This investigation leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behavior, chemical and electrochemical behavior, or geometric aspects of the RC members during the corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected through a rigorous literature study. The program covers both one-dimensional chloride diffusion, using RC square slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using RC square column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm; 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All the samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results showed that the selected model had accuracies of ±20% and ±10% for one-dimensional and two-dimensional chloride diffusion, respectively, compared with the experimental output. Half-cell potential readings were also used to assess the corrosion probability, and the experimental results showed that the mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis was carried out to determine the most influential factor affecting the time to corrode the reinforcement in the concrete due to chloride diffusion; the factors considered were w/c, bar diameter, and cover depth. The analysis, performed with Minitab statistical software, showed that cover depth has a more significant effect on the time to crack the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess in advance the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
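
As an illustration of the Fickian chloride-ingress estimate that underlies many such prediction models, the sketch below inverts C(x,t) = Cs(1 − erf(x/(2√(Dt)))) to obtain the time for the chloride content at the bar to reach a critical threshold; the diffusion coefficient and chloride contents are assumed values, not the study's inputs:

```python
# Time to corrosion initiation from Fick's second law (semi-infinite solution).
from scipy.special import erfinv

D = 1.0e-12          # apparent chloride diffusion coefficient, m^2/s (assumed)
cover = 0.050        # cover depth, m
Cs = 0.60            # surface chloride content, % by mass of binder (assumed)
Ccrit = 0.05         # critical chloride threshold at the bar (assumed)

beta = erfinv(1.0 - Ccrit / Cs)                 # from Ccrit = Cs * (1 - erf(beta))
t_seconds = cover**2 / (4.0 * D * beta**2)
print(f"time to corrosion initiation ~ {t_seconds / (365.25 * 24 * 3600):.1f} years")
```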

Keywords: Accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion.

24 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China

Authors: Feng Yue, Fei Dai

Abstract:

With the rapid urbanization process in China, the pressure on environmental protection is being severely tested. Analyzing and optimizing the landscape pattern is therefore an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and a quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using the Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied with Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost-distance analysis of ArcGIS was applied to simulate wildlife migration, thereby indirectly measuring the improvement in ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively, being mainly converted into construction land. At the landscape level, all landscape indices showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the number of patches, the degree of aggregation, and the landscape connectivity all declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the core area distribution statistic (CORE_AM) decreased; for farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI declined, while patch fragmentation and CORE_AM increased. At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline; the three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is clear that urbanization greatly influenced the landscape evolution: the ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is proposed for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
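
As a toy illustration of one of the landscape metrics mentioned above (the number of patches, NP), the sketch below labels connected components of a small invented land-cover raster with SciPy, as a stand-in for the Fragstats workflow:

```python
# Counting patches of one land-cover class via connected-component labelling.
import numpy as np
from scipy import ndimage

forest = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 0],
], dtype=int)                      # 1 = forest cell, 0 = any other class (invented)

labels, num_patches = ndimage.label(forest)    # 4-connectivity by default
patch_sizes = ndimage.sum(forest, labels, index=range(1, num_patches + 1))
print(f"NP = {num_patches}, patch sizes (cells) = {list(patch_sizes)}")
```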

Keywords: Landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture.

23 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Their assimilation into the built environment has lacked its own handover documentation: how should DTs be implemented in a project, and what responsibilities should each project stakeholder hold in the realisation of a DT vision? What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the project timeline is established by referencing the Royal Institute of British Architects (RIBA) Plan of Work 2020, providing a foundation for delineating project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are used in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is no single mainstream software resource that leverages DT abilities, and this ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is needed. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline; it therefore plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research presents a review of relevant literature, acknowledging the inherent constraint of the limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings, ultimately highlighting the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, and AEC stakeholders have a fundamental role in understanding this from the earliest stages of a project.

Keywords: Digital twins, decision making, design, net-zero, built environment.

22 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions

Authors: Guneet Saini, Shahrukh, Sunil Sharma

Abstract:

Traffic congestion is the most critical issue faced by the transportation profession today. Over the past few years, roundabouts have been recognized globally as a measure to promote efficiency at intersections. In developing countries like India, this type of intersection still faces issues such as bottlenecks, long queues, and increased waiting times due to increasing traffic, which in turn affect the performance of the entire urban network. This research is a case study of a roundabout that is non-conventional in terms of geometric design, located in a small town in India. Such roundabouts should be analyzed for their functionality in the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool to analyze traffic conditions and to estimate measures of operational performance of intersections such as capacity, vehicle delay, queue length, and Level of Service (LOS) of an urban roadway network. This study analyzes an unsymmetrical, non-circular, 6-legged roundabout known as “Kala Aam Chauraha” in Bulandshahr, a small town in Uttar Pradesh, India, using the VISSIM simulation package, the most widely used software for microscopic traffic simulation. For coding in VISSIM, data are collected at the site during the morning and evening peak hours of a weekday and then analyzed for building the base model. The model is calibrated on driving behavior and vehicle parameters to obtain an optimal set of calibrated parameters, followed by validation to obtain a base model that replicates the real field conditions. This calibrated and validated model is then used to analyze the prevailing operational traffic performance of the roundabout, which is compared with a proposed alternative intended to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. The results show that the proposed alternative is an improvement over the present roundabout, as it considerably reduces congestion, vehicle delay, and queue length, and hence improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in developing countries. It can be concluded that the current geometry of such roundabouts needs to be improved to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection, and the proposed design may therefore be considered a suitable solution.
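
Calibration and validation of microsimulation models of this kind are often checked by comparing simulated and observed flows; one commonly used measure is the GEH statistic, although the abstract does not state which criterion the authors applied. The sketch below, with made-up approach counts, shows how such a check could be scripted.

```python
# Illustrative GEH check between observed and simulated hourly flows.
# The counts are hypothetical; GEH < 5 on most approaches is a common
# acceptance guideline in microsimulation calibration (not necessarily
# the criterion used in this study).
import math

observed  = {"leg1": 820, "leg2": 640, "leg3": 455, "leg4": 910, "leg5": 380, "leg6": 270}
simulated = {"leg1": 795, "leg2": 668, "leg3": 430, "leg4": 948, "leg5": 401, "leg6": 255}

def geh(m: float, c: float) -> float:
    """GEH statistic between a simulated flow m and an observed flow c (veh/h)."""
    return math.sqrt(2 * (m - c) ** 2 / (m + c))

for leg in observed:
    g = geh(simulated[leg], observed[leg])
    print(f"{leg}: GEH = {g:.2f} {'OK' if g < 5 else 'recalibrate'}")
```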

Keywords: Operational performance, roundabout, simulation, VISSIM, traffic.

21 Outcomes of Pregnancy in Women with TPO Positive Status after Appropriate Dose Adjustments of Thyroxin: A Prospective Cohort Study

Authors: Revathi S. Rajan, Pratibha Malik, Nupur Garg, Smitha Avula, Kamini A. Rao

Abstract:

This study aimed to analyse pregnancy outcomes in patients with TPO positivity after appropriate L-Thyroxin supplementation with close surveillance. All pregnant women attending the antenatal clinic at Milann, The Fertility Center, Bangalore, India, from Aug 2013 to Oct 2014 whose booking TSH was more than 2.5 mIU/L were included, along with pregnant women with prior hypothyroidism who were TPO positive. Those with TPO positive status were vigorously managed with appropriate thyroxin supplementation, and the doses were readjusted every 3 to 4 weeks until delivery. Women with recurrent pregnancy loss were also tested for TPO positivity and, if positive, were monitored serially with TSH and fT4 levels every 3 to 4 weeks and appropriately supplemented with thyroxin when the levels fluctuated. Testing was done after informed consent in all these women. The statistical software packages SAS 9.2, SPSS 15.0, Stata 10.1, MedCalc 9.0.1, Systat 12.0 and the R environment ver. 2.11.1 were used for the analysis of the data. 460 pregnant women were screened for thyroid dysfunction at booking, of whom 52% were hypothyroid; the majority (31.08%) were subclinically hypothyroid and the remainder were overtly hypothyroid. 25% of the patients screened were TPO positive. The pregnancy complications observed in the TPO positive women were gestational glucose intolerance (60%), threatened abortion (21%), midtrimester abortion (4.3%), premature rupture of membranes (4.3%), cervical funneling (4.3%) and fetal growth restriction (3.5%). 95.6% of the patients who were followed up to the end delivered beyond 30 weeks. 42.6% of these patients had a previous history of recurrent abortions or adverse obstetric outcomes, and 21.7% of the delivered babies required NICU admission. Obstetric outcomes in our study, in terms of midtrimester abortions, placental abruption, and preterm delivery, improved after close monitoring of the thyroid hormone (TSH and fT4) levels every 3 to 4 weeks with appropriate dose adjustment throughout pregnancy. The euthyroid women with TPO positive status enrolled in the study incidentally were those with recurrent abortions/infertility, and they required thyroxin supplements due to elevated thyroid hormone (TSH, fT4) levels during the course of their pregnancy. Significant associations were found with age >30 years and hyperhomocysteinemia (p=0.017), recurrent pregnancy loss or previous adverse obstetric outcomes (p=0.067) and APLA (p=0.029). TPO antibody levels >600 IU/ml were significantly associated with the development of gestational hypertension (p=0.041) and fetal growth restriction (p=0.082). Euthyroid women with TPO positivity were also screened periodically so that fluctuations of the thyroid hormone levels could be countered with appropriate thyroxin supplementation. Thus, early identification and aggressive management of thyroid dysfunction, and stratification of these patients based on their TPO status with appropriate thyroxin supplementation beginning in the first trimester, will aid risk modulation and help avert complications.
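
The associations reported above (for example between TPO antibody levels >600 IU/ml and gestational hypertension) are typically tested with a chi-square or Fisher's exact test on a 2x2 table. The sketch below uses a hypothetical contingency table, not the study data, to show the mechanics of such a test.

```python
# Illustrative 2x2 association test (hypothetical counts, not the study data):
# rows = TPO antibody level (>600 IU/ml vs <=600 IU/ml),
# columns = gestational hypertension (yes / no).
from scipy.stats import fisher_exact

table = [[6, 14],    # >600 IU/ml: 6 with gestational hypertension, 14 without
         [5, 90]]    # <=600 IU/ml: 5 with, 90 without

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```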

Keywords: Antinuclear antibody, Subclinical hypothyroidism, Thyroxin, TPO antibody.

20 An Empirical Quest for Linkages between HPWS and Employee Behaviors – a Perspective from the Non Managerial Employees in Japanese Organizations

Authors: Kaushik Chaudhuri

Abstract:

High Performance Work Systems (HPWS) are generally held to have positive impacts on employees by increasing their commitment in the workplace, while others have argued that they can have considerable negative impacts on employees by imposing strains caused by the stress and intensity of such workplaces. Do stressful workplaces hamper employee commitment? The author has tried to answer this question by exploring the linkages between HPWS practices and their impact on employees in Japanese organizations, and how negative outcomes such as job intensity and workplace and job stressors can influence different forms of employee commitment, which in turn can hinder performance. Design: A closed-ended questionnaire survey was conducted in 16 large, medium and small Japanese companies from diverse industries around Chiba, Saitama, and Ibaraki Prefectures and in Tokyo from October 2008 to February 2009. The questionnaires addressed non-managerial employees' perceptions of HPWS practices, their behavior, and their working life experiences in their workplaces. A total of 227 responses are used for the analysis. Methods: Correlations, MANCOVA, and SEM path analysis using the AMOS software are used for data analysis. Findings: The average non-managerial perception of HPWS adoption is significantly but negatively correlated with both workplace stressors and continuous commitment, and positively correlated with job intensity and with affective, occupational, and normative commitments in Japanese workplaces. The SEM path analysis shows a significant indirect relationship between stressors and employees' affective and normative organizational commitments, and intensity also has a significant indirect effect on occupational commitment. HPWS has an additive effect on all the outcome variables. Limitations: The sample cannot be representative of the entire population of non-managerial employees in Japan; there were no respondents from the automobile, pharmaceutical, or finance industries, and the survey coincided with a period when Japan, like most other countries, was undergoing a recession. Biases cannot be ruled out completely, caution is needed in interpreting the results as they cannot be generalized, and the path analysis cannot establish the complete causality of the linkages between the variables used in the study. Originality: There have been limited studies of the linkages between HPWS adoption and employees' behaviors and commitments in Japanese workplaces; this study may provide ingredients for further research on HRM policies and practices and their linkages to different forms of employee commitment.
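
The indirect effects reported above come from SEM path analysis in AMOS; outside AMOS, a simple two-equation path (HPWS -> stressors -> commitment) can be sketched with ordinary least squares, with the indirect effect taken as the product of the two path coefficients. The example below uses simulated data and hypothetical variable names, purely to illustrate the idea, not the study's model.

```python
# Minimal sketch of an indirect (mediated) path: HPWS -> stressors -> affective commitment.
# Simulated data and variable names are hypothetical; this is not the study's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 227
hpws = rng.normal(size=n)
stressors = -0.3 * hpws + rng.normal(scale=0.9, size=n)        # path a
affective = 0.4 * hpws - 0.5 * stressors + rng.normal(size=n)  # paths c' and b
df = pd.DataFrame({"hpws": hpws, "stressors": stressors, "affective": affective})

a = smf.ols("stressors ~ hpws", df).fit().params["hpws"]       # HPWS -> stressors
m = smf.ols("affective ~ hpws + stressors", df).fit()
b, direct = m.params["stressors"], m.params["hpws"]            # stressors -> commitment, direct path
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")
```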

Keywords: HPWS, Job Intensity, Job and workplace Stressors, Continuous commitment, Affective commitment, Occupational commitment, Japan.

19 Study on Metabolic and Mineral Balance, Oxidative Stress and Cardiovascular Risk Factors in Type 2 Diabetic Patients on Different Therapy

Authors: E. Nemes-Nagy, E. Fogarasi, M. Croitoru, A. Nyárádi, K. Komlódi, S. Pál, A. Kovács, O. Kopácsy, R. Tripon, Z. Fazakas, C. Uzun, Z. Simon-Szabó, V. Balogh-Sămărghițan, E. Ernő Nagy, M. Szabó, M. Tilinca

Abstract:

Intense oxidative stress, increased glycated hemoglobin and mineral imbalance represent risk factors for complications in diabetic patients. Cardiovascular complications are the most common in these patients, including nephropathy. This study was conducted in 2015 at the Procardia Laboratory in Tîrgu Mureș, Romania, on 40 type 2 diabetic adults. Routine biochemical tests were performed on the Konleab 20XTi analyzer (serum glucose, total cholesterol, LDL and HDL cholesterol, triglyceride, creatinine, urea). We also measured serum uric acid, magnesium and calcium concentrations by photometric procedures, potassium, sodium and chloride by ion-selective electrode, and chromium by atomic absorption spectrometry in a group of patients. Glycated hemoglobin (HbA1c) was measured by reflectometry. Urine analysis was performed using the HandUReader equipment. The level of oxidative stress was assessed by measuring serum malondialdehyde (MDA) with the thiobarbituric acid reactive substances method. The MDRD (Modification of Diet in Renal Disease) formula was applied to calculate the creatinine-derived glomerular filtration rate. GraphPad InStat software was used for statistical analysis of the data. The diabetic subjects included in the study presented high MDA concentrations, showing intense oxidative stress. Calcium was deficient in 5% of the patients, and chromium deficiency was present in 28%. The atherogenic cholesterol fraction was elevated in 13% of the patients. A positive correlation was found between creatinine and MDRD-creatinine values (p<0.0001), and 68% of the patients presented increased creatinine values. The majority of the diabetic patients had good control of their diabetes, having optimal HbA1c values; 35% of them presented fasting serum glucose over 120 mg/dl and 18% had glucosuria. Intense oxidative stress and mineral deficiencies can increase the risk of cardiovascular complications in diabetic patients in spite of their good metabolic balance. More than two-thirds of the patients present biochemical signs of nephropathy; cystatin C measurement and microalbuminuria could reveal the kidney disorder better, but glomerular filtration rate calculation formulas are also useful for the evaluation of renal function.
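
The renal function estimate mentioned above uses the MDRD formula; the commonly cited 4-variable (IDMS-traceable) version is sketched below. The example patient values are hypothetical, and the exact MDRD variant used by the authors is not stated in the abstract.

```python
# Sketch of the 4-variable MDRD eGFR equation (IDMS-traceable form).
# Patient values are hypothetical; the abstract does not state which MDRD variant was used.
def mdrd_egfr(serum_creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: 62-year-old woman with serum creatinine 1.4 mg/dL (made-up values)
print(f"eGFR = {mdrd_egfr(1.4, 62, female=True, black=False):.1f} mL/min/1.73 m^2")
```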

Keywords: Cardiovascular risk, malondialdehyde, metabolic balance, minerals, type 2 diabetes.

18 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras; here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, prompting the operator only for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action; other recommendations include a selection of road closures (i.e., soft interventions) or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach in order to motivate a machine learning approach, based on reinforcement learning, that relaxes some of the current limiting assumptions.
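
The posterior update over candidate targets can be illustrated with a small directed graph: each new detection re-weights the targets according to how consistent the observed movement is with a near-shortest path toward that target. The sketch below uses a toy graph and a simple travel-time likelihood; the node names, weights, and likelihood form are illustrative assumptions, not the authors' exact model.

```python
# Toy Bayesian target inference on a weighted directed road graph.
# Graph, weights, and the exponential "path-consistency" likelihood are
# illustrative assumptions, not the model described in the abstract.
import math
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 3), ("B", "D", 6),
    ("C", "T1", 5), ("D", "T2", 2), ("C", "D", 4), ("A", "C", 9),
])
targets = ["T1", "T2"]
posterior = {t: 1.0 / len(targets) for t in targets}   # uniform prior

def likelihood(prev_node: str, node: str, target: str, beta: float = 0.5) -> float:
    """Higher when moving prev->node shortens the remaining travel time to target."""
    before = nx.shortest_path_length(G, prev_node, target, weight="weight")
    after = nx.shortest_path_length(G, node, target, weight="weight")
    step = G[prev_node][node]["weight"]
    detour = (after + step) - before          # 0 if the move lies on a shortest path
    return math.exp(-beta * detour)

# Suspect detected moving A -> B, then B -> C: update the posterior after each detection.
for prev, cur in [("A", "B"), ("B", "C")]:
    for t in targets:
        posterior[t] *= likelihood(prev, cur, t)
    z = sum(posterior.values())
    posterior = {t: p / z for t, p in posterior.items()}
    print(cur, {t: round(p, 3) for t, p in posterior.items()})
```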

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

17 Use of Curcumin in Radiochemotherapy Induced Oral Mucositis Patients: A Control Trial Study

Authors: Shivayogi Charantimath

Abstract:

Radiotherapy and chemotherapy are effective for treating malignancies but are associated with side effects like oral mucositis. Chlorhexidine gluconate is one of the most commonly used mouthwashes for the prevention of the signs and symptoms of mucositis. Evidence shows that chlorhexidine gluconate has drawbacks in terms of bacterial colonization, bad breath and weaker healing properties. Thus, it is essential to find a suitable alternative therapy which is more effective and has minimal side effects. Curcumin, an extract of turmeric, is increasingly being studied for its wide-ranging therapeutic properties, such as its antioxidant, analgesic, anti-inflammatory, antitumor, antimicrobial, antiseptic, chemosensitizing and radiosensitizing properties. The present study was conducted to evaluate the efficacy and safety of topical curcumin gel for radio-chemotherapy induced oral mucositis in cancer patients, and to compare it with chlorhexidine. The study was conducted in the K.L.E. Society's Belgaum cancer hospital. 40 oral cancer patients undergoing radio-chemotherapy and presenting with oral mucositis were selected and randomly divided into two groups of 20 each: the study group A (20 patients) was advised Cure next gel for 2 weeks, and the control group B (20 patients) was advised chlorhexidine gel for 2 weeks. The NRS, the Oral Mucositis Assessment Scale and the WHO mucositis scale were used to determine the grading. The results were analyzed using SPSS 20 software; between-group comparisons of grading were made with the Mann-Whitney U test, and within-group (baseline to follow-up) comparisons with the Wilcoxon matched pairs test. The NRS scores from baseline to the 1st and 2nd week follow-ups showed a significant difference in both groups. The percentage change in erythema in Group A was 63.3% in the first week and 100.0% in the second week (p = 0.0003), whereas the change in Group B was 34.6% in the first week and 57.7% in the second week; the within-group comparisons were significant, with p values of 0.0048 and 0.0006 for Group A and Group B, respectively. The ulcer size score changed by 35.5% (p = 0.0010) in Group A in the first week and showed total reduction, i.e., 103.4% (p = 0.0001), by the second week, while Group B showed a 24.7% change from baseline to the 1st week and 53.6% at the 2nd week follow-up; the within-group comparison with the Wilcoxon matched pairs test was significant (p = 0.0001) in Group A. For the WHO mucositis score, Group A showed a 29.6% change (p = 0.0004) in the first week and a 75.0% change (p = 0.0180) in the second week, which is highly significant in comparison with Group B, which showed minimal changes of 20.1% in the 1st week and 33.3% in the 2nd week; the Wilcoxon p value in Group A was significant at 0.0025 for the 1st week follow-up and 0.000 for the 2nd week follow-up. Curcumin gel thus appears to be an effective and safer alternative to chlorhexidine gel in the treatment of oral mucositis.
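
The nonparametric comparisons described above can be reproduced with standard library routines; the snippet below shows the two tests on made-up score vectors, not the trial data.

```python
# Illustrative nonparametric comparisons on hypothetical mucositis scores
# (not the trial data): Mann-Whitney U between groups, Wilcoxon matched
# pairs within a group (baseline vs. follow-up).
from scipy.stats import mannwhitneyu, wilcoxon

group_a_week2 = [1, 0, 1, 2, 0, 1, 0, 1, 2, 1]   # e.g. WHO mucositis grades, made up
group_b_week2 = [2, 3, 2, 2, 3, 1, 2, 3, 2, 2]

u_stat, p_between = mannwhitneyu(group_a_week2, group_b_week2, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_between:.4f}")

group_a_baseline = [3, 2, 3, 3, 2, 3, 2, 3, 3, 2]
w_stat, p_within = wilcoxon(group_a_baseline, group_a_week2)
print(f"Wilcoxon W = {w_stat:.1f}, p = {p_within:.4f}")
```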

Keywords: Curcumin, chemotherapy, mucositis, radiotherapy.

16 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate

Authors: Tomas Makaras, Gintaras Svecevičius

Abstract:

Landfill waste is a common problem, as it has economic and environmental impacts even after a landfill is closed. Landfill waste contains a high density of various persistent compounds such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of the Kairiai landfill (WGS: 55°55'46.74", 23°23'28.4") leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using the original system package developed in our laboratory for automated monitoring, recording and analysis of aquatic organisms' activity, and to determine patterns of the fish behavioral response to sublethal effects of leachate. Four concentrations of leachate were chosen: 0.125, 0.25, 0.5 and 1.0 mL/L (0.0025, 0.005, 0.01 and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10 and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration was found to be 2.8-fold lower than the concentration generally assumed to be “safe” for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of the test fish to sublethal levels of pollutants. The ability of the rainbow trout to detect and avoid contaminants appeared after 5 minutes of exposure; the intensity of locomotor activity reached a peak within 10 minutes and evidently decreased after 30 minutes, which could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It has been established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration. Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that can be used, in combination with the system for fish behavior monitoring, registration and analysis, to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements aimed at including more parameters of aquatic organisms' behavior and on investigating the most rapid and appropriate behavioral responses in different species. In practice, this study could be the basis for the development of biological early-warning systems (BEWS).
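
The reported fractions follow directly from dividing each test concentration by the 96-hour LC50 implied by the abstract (0.18 mL/L corresponding to 0.0036 of the LC50, i.e., an LC50 of about 50 mL/L); a small sketch of this arithmetic is given below. Note that the LC50 value is inferred here, not stated explicitly in the abstract.

```python
# Express test concentrations as fractions of the 96-h LC50.
# LC50 ~ 50 mL/L is inferred from the abstract (0.18 mL/L = 0.0036 * LC50),
# not stated explicitly.
lc50_ml_per_l = 0.18 / 0.0036                    # ~= 50.0 mL/L
test_concentrations = [0.125, 0.25, 0.5, 1.0]    # mL/L

for c in test_concentrations:
    print(f"{c:>5} mL/L -> {c / lc50_ml_per_l:.4f} of the 96-h LC50")
# Output fractions: 0.0025, 0.0050, 0.0100, 0.0200
```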

Keywords: Fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects.

15 Research of the Factors Affecting the Administrative Capacity of Enterprises in the Logistic Sector of Bulgaria

Authors: R. Kenova, K. Anguelov, R. Nikolova

Abstract:

The human factor plays a major role in boosting the competitive capacity of enterprises, and this is of particular importance for logistic companies. On the one hand, they have to be strictly compliant with legislation; on the other hand, they have to be competitive in terms of pricing and delivery timelines, and their policies should allow them to be as flexible as possible. All of these circumstances pose serious challenges for the qualification, motivation and experience of the human resources working in logistic companies or in the logistic departments of trade and industrial enterprises. Bulgaria's geographic position gives it some specific competitive advantages in the transport of goods between Europe and Asia, and a number of logistic companies operate in this sphere in Bulgaria. In the current paper, the authors aim to establish the condition of the administrative capacity and human resources in the logistic companies and in the logistic departments of trade and industrial companies in Bulgaria, in order to propose some guidelines for improving their effectiveness. Through independent empirical research conducted in Bulgarian logistic, trade and industrial enterprises, the authors investigate both the degree of impact and the interdependence of various factors that characterize administrative capacity. The study is conducted with a prepared questionnaire in the format of direct interviews with the respondents. The survey covers 50 respondents: general managers of industrial or trade enterprises; logistic managers of industrial or trade enterprises; general managers of forwarding companies, either with their own or with hired transport; experts from the Bulgarian Association of Logistics; logistic lobbyists; and scientists in the relevant area. The data are gathered over 3 months, then arranged with a specialized software program and analyzed against preset criteria. Based on the results of this methodological toolbox, it can be claimed that there is a correlation between the individual criteria, and a relationship between administrative capacity and the other factors that determine the competitiveness of the studied companies is established. In this paper, the authors present the results of the empirical research concerning the number of staff and the workload in the logistic departments of the enterprises, and comment on the experience related to the management of logistic processes and the competence of the human resources. Moreover, the overload level of the logistic specialists is analyzed as one of the main threats of making mistakes and losing clients. The paper argues that forming an effective and efficient administrative capacity, based on the number, qualification, experience and motivation of the staff in the logistic companies, is indispensable. The paper ends with recommendations about the qualification and experience of the specialists in logistic departments, the provision of effective and efficient administrative capacity in the logistic departments, and the interdependence of the human factor and the other factors that influence enterprise competitiveness.
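
The correlation between the individual survey criteria mentioned above is the kind of analysis that, on ordinal questionnaire data, is commonly performed with a rank correlation. The snippet below uses invented Likert-scale responses and hypothetical criterion names to show one way such a check could be scripted; it is not the study's data or software.

```python
# Rank correlation between two questionnaire criteria on a 1-5 Likert scale.
# Responses and criterion names are invented for illustration (not the survey data).
import pandas as pd
from scipy.stats import spearmanr

responses = pd.DataFrame({
    "staff_qualification": [4, 5, 3, 2, 4, 5, 3, 4, 2, 5, 4, 3],
    "process_management":  [4, 4, 3, 2, 5, 5, 2, 4, 3, 5, 4, 2],
})

rho, p = spearmanr(responses["staff_qualification"], responses["process_management"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```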

Keywords: Administrative capacity, human resources, logistic competitiveness, staff qualification.

14 Parental Attitudes as a Predictor of Cyber Bullying among Primary School Children

Authors: Bülent Dilmaç, Didem Aydoğan

Abstract:

Problem Statement: The rapid technological developments of the 21st century have advanced our daily lives in various ways. Particularly in education, students frequently utilize technological resources to aid their homework and to access information, as well as to listen to the radio or watch television (26.9%) and use e-mail (34.2%) [26]. Not surprisingly, the increase in the use of technologies has also resulted in an increase in the use of e-mail, instant messaging, chat rooms, mobile phones, mobile phone cameras and web sites by adolescents to bully peers. As cyber bullying occurs in cyber space, lesser access to technologies would mean lesser cyber-harm; therefore, the frequency of technology use is a significant predictor of cyber bullying and cyber victimization. Cyber bullies try to harm the victim using various media. These tools include sending derogatory texts via mobile phones, sending threatening e-mails and forwarding confidential e-mails to everyone on the contacts list. Another form of cyber bullying is to set up a humiliating web site and invite others to post comments. In other words, cyber bullies use e-mail, chat rooms, instant messaging, pagers, mobile texts and online voting tools to humiliate and frighten others and to create a sense of helplessness. No matter what type of bullying it is, it negatively affects its victims. Children who bully exhibit more emotional inhibition and attribute more negative self-statements to themselves compared to non-bullies. Students whose families are not sympathetic and who receive lower emotional support are more prone to bully their peers; bullies tend to have authoritarian families and do not get along well with them. The family is the place where children's physical, social and psychological needs are satisfied and where their personalities develop. As the use of the internet became prevalent, so did parents' restrictions on their children's internet use; however, parents are often unaware of the real harm. Studies that explain the relationship between parental attitudes and cyber bullying are scarce in the literature. Thus, this study aims to investigate the relationship between cyber bullying and parental attitudes in primary school. Purpose of Study: This study aimed to investigate the relationship between cyber bullying and parental attitudes. A second aim was to determine whether parental attitudes could predict cyber bullying and, if so, which variables predict it significantly. Methods: The study had a cross-sectional and relational survey model. A demographic information form, questions about cyber bullying and a Parental Attitudes Inventory were administered to a total of 346 students (189 females and 157 males) registered at various primary schools. Data were analysed by multiple regression analysis using the software package SPSS 16.
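
The multiple regression reported in the Methods section (predicting cyber bullying from parental attitude measures) could be set up as sketched below; the subscale names and the simulated responses are hypothetical, not the study's inventory or data.

```python
# Sketch of a multiple regression predicting a cyber-bullying score from
# parental attitude subscales. Variable names and simulated data are
# hypothetical, not the study's inventory or responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 346
df = pd.DataFrame({
    "democratic":    rng.normal(3.5, 0.8, n),   # hypothetical subscale scores
    "authoritarian": rng.normal(2.8, 0.9, n),
    "protective":    rng.normal(3.1, 0.7, n),
})
df["cyberbullying"] = (0.6 * df["authoritarian"] - 0.4 * df["democratic"]
                       + rng.normal(0, 1, n))

model = smf.ols("cyberbullying ~ democratic + authoritarian + protective", data=df).fit()
print(model.summary().tables[1])   # coefficients, t statistics and p-values per predictor
```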

Keywords: Cyber bullying, cyber victim, parental attitudes, primary school students.

13 Life Cycle Datasets for the Ornamental Stone Sector

Authors: Isabella Bianco, Gian Andrea Blengini

Abstract:

The environmental impact related to ornamental stones (such as marbles and granites) is widely debated. Since the industrial revolution, continuous improvements in machinery have led to higher exploitation of this natural resource and to more international interaction between markets; as a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural and recyclable. From the scientific point of view, studies on the life cycle sustainability of stone have been carried out, but these are often partial or not very significant because of the high proportion of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain; for example, the databases do not contain information about diamond wires, chains or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data for specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and to the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles and the surface finishing. Primary data have been collected in Italian quarries and transformation plants that use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). In particular, data on energy, materials and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. The data were then processed with appropriate software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and to encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the production of accurate Life Cycle Inventory data aims to make ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector available to researchers and stone experts.
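
The "free parameters" mentioned above can be pictured as a small parameterized inventory: inputs per functional unit are expressed as functions of site-specific parameters so the same model can be adapted to a given quarry or plant. The sketch below is a plain-Python illustration with invented parameter names and values, not the datasets produced by the project.

```python
# Plain-Python sketch of a parameterized cradle-to-gate inventory for 1 m^2
# of finished slab. Parameter names and numbers are invented placeholders,
# not the project's ILCD datasets.
from dataclasses import dataclass

@dataclass
class StoneParameters:
    block_yield: float            # m^2 of slab obtained per m^3 of extracted block
    diamond_wire_m_per_m3: float  # metres of diamond wire consumed per m^3 cut
    electricity_kwh_per_m2: float # sawing + finishing electricity per m^2 of slab
    water_l_per_m2: float         # process water per m^2 of slab

def inventory_per_m2(p: StoneParameters) -> dict:
    """Return a simple input inventory per 1 m^2 of finished slab."""
    block_volume = 1.0 / p.block_yield                    # m^3 of block needed
    return {
        "block_m3": block_volume,
        "diamond_wire_m": p.diamond_wire_m_per_m3 * block_volume,
        "electricity_kWh": p.electricity_kwh_per_m2,
        "water_L": p.water_l_per_m2,
    }

# Adapting the same model to two hypothetical productions (soft vs. hard stone):
marble = StoneParameters(block_yield=28.0, diamond_wire_m_per_m3=0.04,
                         electricity_kwh_per_m2=9.0, water_l_per_m2=60.0)
gneiss = StoneParameters(block_yield=22.0, diamond_wire_m_per_m3=0.09,
                         electricity_kwh_per_m2=14.0, water_l_per_m2=75.0)
print("marble:", inventory_per_m2(marble))
print("gneiss:", inventory_per_m2(gneiss))
```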

Keywords: LCA datasets, life cycle assessment, ornamental stone, stone environmental impact.
