Search results for: time intervals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18573

1473 Co-Seismic Deformation Using InSAR Sentinel-1A: Case Study of the 6.5 Mw Pidie Jaya, Aceh, Earthquake

Authors: Jefriza, Habibah Lateh, Saumi Syahreza

Abstract:

The 2016 Mw 6.5 Pidie Jaya earthquake is one of the biggest disasters to have occurred in Aceh within the last five years. This earthquake caused severe damage to many infrastructures such as schools, hospitals, mosques, and houses in the district of Pidie Jaya and surrounding areas. Earthquakes commonly occur in Aceh Province because the Aceh-Sumatra region lies on the convergent boundary where the Indo-Australian Plate is subducted beneath the Sunda Plate. This convergence is responsible for the intensification of seismicity in the region. The plates converge at a rate of about 63 mm per year, and the right-lateral component is accommodated by strike-slip faulting within Sumatra, mainly along the Great Sumatran Fault. This paper presents preliminary findings of an InSAR study aimed at investigating the co-seismic surface deformation pattern in Pidie Jaya, Aceh, Indonesia. Co-seismic surface deformation is the rapid displacement that occurs at the time of an earthquake, and mapping it is required to study the behavior of seismic faults. InSAR is a powerful tool for measuring Earth-surface deformation to a precision of a few centimetres. In this study, two radar images of the same area acquired at two different times are required to detect changes in the Earth's surface. Ascending and descending Sentinel-1A (S1A) synthetic aperture radar (SAR) data and the Sentinels Application Platform (SNAP) toolbox were used to generate the SAR interferogram. To visualize the interferometric result, the S1A master (26 Nov 2016) and slave (26 Dec 2016) data sets were used as the main data source for mapping the co-seismic surface deformation. The results show that fringes of phase difference appeared in the border region as a result of movement detected with the interferometric technique. 
The dominant fringe pattern also appears near the coastal area, which is consistent with the field investigations carried out two days after the earthquake. However, the study also has limitations related to resolution and atmospheric artefacts in the SAR interferograms. The atmospheric artefacts are caused by changes in the refractive index of the atmosphere, which limit the coherence of the resulting image; low coherence degrades the fringes from which movement is detected. In addition, the spatial resolution of the Sentinel satellite was not sufficient for studying land-surface deformation in this area. Further studies will therefore use both ALOS and TerraSAR-X data, which offer higher spatial resolution.
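The relation between interferometric fringes and ground motion can be sketched numerically. The wavelength constant below is an assumption on my part (Sentinel-1 operates in C-band at roughly 5.55 cm); the abstract itself reports no numbers:

```python
import math

# Assumed Sentinel-1 C-band wavelength in metres (~5.55 cm); not stated
# in the abstract.
WAVELENGTH_M = 0.05546

def phase_to_los_displacement(unwrapped_phase_rad: float) -> float:
    """Convert an unwrapped interferometric phase (radians) into
    line-of-sight displacement in metres: d = phi * lambda / (4 * pi).
    One full fringe (2*pi of phase) thus equals half a wavelength."""
    return unwrapped_phase_rad * WAVELENGTH_M / (4.0 * math.pi)

# One fringe of phase difference corresponds to ~2.8 cm of LOS motion
one_fringe_m = phase_to_los_displacement(2.0 * math.pi)
```

This is why C-band interferograms resolve deformation at the few-centimetre level: each visible fringe maps directly to half a wavelength of line-of-sight motion.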

Keywords: earthquake, InSAR, interferometric, Sentinel-1A

Procedia PDF Downloads 197
1472 A Review of How COVID-19 Has Created an Insider Fraud Pandemic and How to Stop It

Authors: Claire Norman-Maillet

Abstract:

Insider fraud, also known as occupational, employee, or internal fraud, is a major financial crime threat whereby an employee defrauds (or attempts to defraud) their current, prospective, or past employer. ‘Employee’ covers anyone employed by the company, including contractors, directors, and part-time staff; they may be a solo bad actor or working in collusion with others, whether internal or external. Insider fraud is even more of a concern given the impacts of the Coronavirus pandemic, which has generated multiple opportunities to commit insider fraud. Insider fraud is not generally thought of as a significant financial crime threat; the focus of most academics and practitioners has historically been on ‘external fraud’ against businesses or entities by individuals or groups with no professional ties to them. Without the face-to-face, ‘over the shoulder’ ability of staff to keep an eye on their colleagues, there is a heightened reliance on trust and transparency, and with this naturally comes an increased risk of insider fraud perpetration. The objective of the research is to better understand how companies are impacted by insider fraud and, therefore, how to stop it. This research makes an original contribution and should stimulate debate within the financial crime field. The financial crime landscape is never static: criminals are always creating new ways to perpetrate financial crime, new legislation and regulations are implemented to strengthen controls, and businesses do what they can internally to detect and prevent it. By focusing specifically on insider fraud, the research is more targeted and of greater use to those in the field. 
To achieve the aims of the research, semi-structured interviews were conducted with 22 individuals who either work in financial services and deal with insider fraud or work on insider fraud in a recruitment or advisory capacity. This enabled the sourcing of information from a wide range of individuals in a setting where they were able to elaborate on their answers. The principal recruitment strategy was engaging with the researcher’s network on LinkedIn. The interviews were then transcribed and analysed thematically. The main findings suggest that insider fraud has been ignored owing to a refusal to accept the possibility that colleagues would defraud their employer. Whilst Coronavirus has led to a significant rise in insider fraud, this type of crime has been a major risk to businesses since their inception; however, it has never been given the financial or strategic backing required to mitigate it until it is too late. Furthermore, Coronavirus should have led companies to tighten their access rights, controls, and policies to mitigate the insider fraud risk, yet in most cases this has not happened. The research concludes that insider fraud needs to be given a platform upon which to be recognised as a threat to any company and given the same level of weighting and attention by Executive Committees and Boards as other types of economic crime.

Keywords: fraud, insider fraud, economic crime, coronavirus, Covid-19

Procedia PDF Downloads 70
1471 Motivation of Doctors and its Impact on the Quality of Working Life

Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin

Abstract:

At the present stage of societal progress, health care is an integral part of both the economic and social systems, and in the latter case medicine is a major component of a number of basic and necessary social programs. Since highly qualified health professionals are the foundation of the health system, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. A doctor's professionalism is a collection of many components, in which an essential role is played by such personal-psychological factors as honesty, willingness and desire to help people, and motivation. A number of researchers consider motivation to be an expression of basic human needs that have passed through the “filter” of the worldview and values learned by the individual during socialization, leading to actions designed to achieve an expected result. From this point of view, several researchers propose the following classification of a highly skilled employee's needs: (1) the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); (2) the need for independence (the ability to make one's own choices in contentious situations arising while carrying out specialist functions); (3) the need for belonging (in the case of health care workers, to the profession and, accordingly, to the high public status of the doctor). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is to maximize their own profit. In the case of health professionals, this dual motivational structure creates an additional tension, since in the public mind the image of the ideal physician is usually that of an altruistically minded person who thinks not primarily of their own benefit but of assisting others. 
In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 captured the opinions of more than 200 medical students beginning their courses: the primary motivation for choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the respondents' opinions may shift toward mercantile motives. Thus, it is logical to assume that a well-designed system for motivating doctors' labor should be based on motivational foundations laid during training in higher education.

Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure

Procedia PDF Downloads 360
1470 Cloud Based Supply Chain Traceability

Authors: Kedar J. Mahadeshwar

Abstract:

Concept introduction: This paper describes an innovative cloud-based, analytics-enabled solution that could address a major industry challenge approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification, and serialization, phasing in starting Jan 1, 2015 for manufacturers, repackagers, wholesalers, and pharmacies/clinics. Similarly, we are seeing pressure building in Europe, China, and many other countries that would require absolute traceability of every drug and device end to end. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can lead in developing a global solution that brings innovation to this industry. Problem definition and timing: The problem of the counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, concern over the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains high, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, it is far from having the capability to trace a lot and serial number beyond the enterprise and to make this information easily available in real time. 
Solution: The proposed solution involves a service provider that allows all subscribers to take advantage of the service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution covering millions of distribution transactions that capture the lot of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists given the huge investment in Dubai Healthcare City, together with the technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators and companies to run and host such a cloud-based solution and become a global hub for traceability.
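The core of such a traceability service is an append-only store of custody events keyed by serial number. A minimal in-memory sketch follows; all party names, serials, and lot numbers are hypothetical illustrations, not from the paper:

```python
from collections import defaultdict
from typing import NamedTuple

class TraceEvent(NamedTuple):
    serial: str   # unit-level serial number
    lot: str      # manufacturing lot number
    holder: str   # party holding the unit at this step
    action: str   # e.g. "manufactured", "received", "dispensed"

# Minimal in-memory stand-in for the cloud-hosted transaction store
ledger = defaultdict(list)

def record(event: TraceEvent) -> None:
    """Append one custody event to the serial number's history."""
    ledger[event.serial].append(event)

def trace(serial: str) -> list:
    """Return the end-to-end chain of custody for one serial number."""
    return [f"{e.holder}:{e.action}" for e in ledger[serial]]

# Hypothetical chain: manufacturer -> distributor -> hospital
record(TraceEvent("SN001", "LOT42", "AcmePharma", "manufactured"))
record(TraceEvent("SN001", "LOT42", "GulfDistCo", "received"))
record(TraceEvent("SN001", "LOT42", "CityHospital", "dispensed"))
```

A production system would of course persist these events durably and index them for the analytics layer; the point here is only the data shape of serialized traceability.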

Keywords: cloud, pharmaceutical, supply chain, tracking

Procedia PDF Downloads 529
1469 A 20 Year Comparison of Australian Childhood Bicycle Injuries – Have We Made a Difference?

Authors: Bronwyn Griffin, Caroline Acton, Tona Gillen, Roy Kimble

Abstract:

Background: Bicycle riding is a common recreational activity enjoyed by many children throughout Australia and is associated with the usual suite of benefits related to exercise and recreation. Given that Australia was the first country in the world to introduce cyclist helmet laws in 1991, surprisingly few publications have reviewed paediatric cycling injuries (fatal or non-fatal) since. Objectives: To identify trends in children (0-16 years) who required admission for greater than 24 hours following a bicycle-related injury (fatal and non-fatal) in Queensland; further, to discuss changes in paediatric cycling injury trends in Queensland since a prominent local study/publication in 1995. This paper aims to establish evidence to inform interventions promoting safer riding to parents, children, and communities. Methods: Data on paediatric (0-16 years) cycling injuries in Queensland resulting in hospital admission of more than 24 hours across three tertiary paediatric hospitals in Brisbane between November 2008 and June 2015 were compiled by the Paediatric Trauma Data Registry for non-fatal injuries. The Child Death Review Team at the Queensland Families and Childhood Commission provided data on fatalities in children <17 years from June 2004 to June 2015. Trends were compared to the local study published in 1995. Results: Between 2008 and 2015 there were 197 patients admitted for greater than 24 hours following a cycling injury. The median age was 11 years, with males more frequently involved (n=139, 87%) than females. Mean length of stay was three days, 47 (28%) children were admitted to PICU, and the location of injury was most often the street (n=63, 37%). 
Between 2004 and 2015 there were 15 fatalities (incidence rate 0.25/100,000); all were male, 14/15 occurred on the street, eight were stated not to have been wearing a helmet, and 11/15 children came from the least advantaged socio-economic group (SEIFA). By comparison, the 1995 local publication found 94 fatalities between 1981 and 1992. Conclusions: There has been a notable decrease in the incidence of fatalities between the two time periods, with incidence rates dropping from 1.75 to 0.25/100,000. Further analysis is needed to ascertain whether this is a true reduction or perhaps a decrease in children riding bicycles. Injuries that occur on the street and involve contact with a car remain of serious concern. The purpose of this paper is not to discourage bicycle riding among child and adolescent populations but rather to inform parents and the wider community about the risks associated with cycling in order to reduce injuries associated with this sport, whilst promoting safe cycling.
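The incidence rates quoted above are crude rates per 100,000 person-years. A minimal sketch of the arithmetic, using a hypothetical child-population denominator (the abstract does not report one):

```python
def incidence_per_100k(cases: int, population: int, years: float) -> float:
    """Crude incidence rate per 100,000 person-years:
    cases / (population * years) * 100,000."""
    return cases / (population * years) * 100_000

# Illustrative only: 15 fatalities over 11 years in an assumed child
# population of ~545,000 reproduces the reported 0.25/100,000.
rate = round(incidence_per_100k(15, 545_000, 11), 2)
```

Comparing such rates across periods requires the matching population denominators for each era, which is exactly why the authors caution that the drop may partly reflect fewer children cycling.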

Keywords: paediatric, cycling, trauma, prevention, emergency

Procedia PDF Downloads 250
1468 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and removal of heat from the refrigerator cabinets is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a uniform temperature distribution throughout the refrigerator domain. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. To study heat transfer inside the refrigerator, forced-convection theory is applied to a closed rectangular cavity representing the refrigerating compartment. The cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy equations (the Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in good agreement with the experimental airflow measurements using the SPIV technique. 
After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) on energy consumption, pull-down time, and temperature distribution in the cabinet are studied. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting temperature distribution inside the cabinet and energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
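A two-level full factorial design such as the one feeding the DOE step can be enumerated mechanically. The factor levels below are illustrative placeholders of my own, not the study's actual settings:

```python
from itertools import product

# Two-level settings for the three factors studied (illustrative values;
# the abstract does not report the actual low/high levels).
factors = {
    "compressor_capacity": ("low", "high"),
    "fan_speed_rpm": (1500, 2500),
    "shelf_type": ("glass", "wire"),
}

# Full factorial design: every combination of factor levels (2^3 = 8 runs)
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Each run would be simulated (or measured), and the responses, energy consumption, pull-down time, and shelf-to-shelf temperature spread, regressed against the factor levels to locate the best configuration.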

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 111
1467 Molecular Characterization of Listeria monocytogenes from Fresh Fish and Fish Products

Authors: Beata Lachtara, Renata Szewczyk, Katarzyna Bielinska, Kinga Wieczorek, Jacek Osek

Abstract:

Listeria monocytogenes is an important human and animal pathogen that causes foodborne outbreaks. The bacteria may be present in different types of food: cheese, raw vegetables, sliced meat products and vacuum-packed sausages, poultry, meat, and fish. The most common method used to investigate the genetic diversity of L. monocytogenes is PFGE. This technique is reliable and reproducible, and it is established as the gold standard for typing L. monocytogenes. The aim of the study was the characterization, by molecular serotyping and PFGE analysis, of L. monocytogenes strains isolated from fresh fish and fish products in Poland. A total of 301 samples, including fresh fish (n = 129) and fish products (n = 172), were collected between January 2014 and March 2016. The bacteria were detected using the ISO 11290-1 standard method. Molecular serotyping was performed with PCR. The isolates were typed by PFGE according to the protocol developed by the European Union Reference Laboratory for L. monocytogenes, with some modifications. Based on the PFGE profiles, two dendrograms were generated for strains digested separately with two restriction enzymes: AscI and ApaI. Analysis of the fingerprint profiles was performed using Bionumerics software version 6.6 (Applied Maths, Belgium). A 95% similarity cutoff was applied to differentiate the PFGE pulsotypes. The study revealed that 57 of 301 (18.9%) samples were positive for L. monocytogenes. The bacteria were identified in 29 (50.9%) ready-to-eat fish products and in 28 (49.1%) fresh fish. It was found that 40 (70.2%) strains were of serotype 1/2a, 14 (24.6%) of 1/2b, two (3.5%) of 4b, and one (1.8%) of 1/2c. Serotypes 1/2a, 1/2b, and 4b were present at similar frequencies in both categories of food, whereas serotype 1/2c was detected only in fresh fish. The PFGE analysis with AscI demonstrated 43 different pulsotypes; among them, 33 (76.7%) were represented by only one strain. 
The remaining 10 profiles contained more than one isolate: eight pulsotypes comprised two L. monocytogenes isolates, one profile contained three isolates, and one restriction type contained five strains. In the case of ApaI typing, the PFGE analysis showed 27 different pulsotypes, including 17 (63.0%) types represented by only one strain. Ten (37.0%) clusters contained more than one strain: four profiles covered two strains, three had three isolates, one had five strains, one had eight strains, and one had ten isolates. It was observed that isolates assigned to the same PFGE type were usually of the same serotype (1/2a or 1/2b). The majority of the clusters contained strains from both sources (fresh fish and fish products) isolated at different times. Most of the strains grouped in one cluster by AscI restriction were assigned to the same groups in the ApaI analysis. In conclusion, the PFGE used in this study showed high genetic diversity among L. monocytogenes. The strains grouped into varied clonal clusters, which may suggest different sources of contamination. The results demonstrated that serotype 1/2a was the most common among isolates from fresh fish and fish products in Poland.
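Pulsotype assignment at a 95% similarity cutoff is typically based on a band-matching coefficient such as Dice; the abstract does not name the coefficient, so both the coefficient and the fragment sizes below are assumptions for illustration:

```python
def dice_similarity(bands_a: set, bands_b: set) -> float:
    """Dice band-matching coefficient commonly used for PFGE profiles:
    2 * shared bands / (bands in A + bands in B)."""
    if not bands_a and not bands_b:
        return 1.0
    return 2 * len(bands_a & bands_b) / (len(bands_a) + len(bands_b))

# Hypothetical restriction-fragment sizes (kb) for two AscI profiles
profile_1 = {20, 40, 90, 140, 250, 400}
profile_2 = {20, 40, 90, 140, 250, 380}

# One band differs out of six, so similarity falls below the 95% cutoff
same_pulsotype = dice_similarity(profile_1, profile_2) >= 0.95
```

In practice the pairwise similarity matrix is fed into UPGMA clustering to draw the dendrogram, and the 95% line is where distinct pulsotypes are cut.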

Keywords: Listeria monocytogenes, molecular characteristic, PFGE, serotyping

Procedia PDF Downloads 290
1466 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: calculating the impact of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date, but the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change arises. 
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities: productivity declines about 25 percent faster for 30-job projects than for 120-job projects. The moment of occurrence of a change also has a significant impact on productivity; indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.
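The qualitative effect of change timing can be illustrated with a toy model of my own, not the authors' simulator: activities still remaining when the change occurs are re-executed in overtime at an assumed reduced efficiency of 0.75:

```python
import random

def simulate_productivity(n_activities: int, change_fraction: float,
                          overtime_efficiency: float = 0.75,
                          seed: int = 7) -> float:
    """Toy reactive-scheduling sketch: activities after the change point
    are executed in overtime at reduced efficiency. Returns overall
    productivity = planned work-hours / hours actually worked."""
    rng = random.Random(seed)  # fixed seed keeps scenarios comparable
    durations = [rng.uniform(1.0, 5.0) for _ in range(n_activities)]
    change_index = int(change_fraction * n_activities)
    normal_hours = sum(durations[:change_index])
    overtime_hours = sum(d / overtime_efficiency
                         for d in durations[change_index:])
    return sum(durations) / (normal_hours + overtime_hours)

# An earlier change pushes more work into overtime, lowering productivity
early = simulate_productivity(120, change_fraction=0.2)
late = simulate_productivity(120, change_fraction=0.8)
```

This reproduces only the directional finding (earlier changes hurt productivity more); the paper's model additionally handles resource constraints and rework, which this sketch omits.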

Keywords: engineering changes, indirect costs overtime, productivity, scheduling, simulation

Procedia PDF Downloads 239
1465 Lead Removal From Ex- Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies

Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof

Abstract:

Exposure of galena (PbS), teallite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead as high as 69.46 mg/L, and lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results indicated that a maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm2, an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and an electrode arrangement of BP-S. These experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design with Response Surface Methodology (RSM). The four factors optimized were current density, electrode spacing, electrode arrangement, and liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at the 0.01% level, the results showed that increases in current density and the use of LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.96% at an electrode spacing of 0.38 cm, alongside the other settings. Applying the predicted parameters, a lead removal efficiency of 100% was achieved. 
The electrode and energy consumptions were 0.192 kg/m3 and 2.56 kWh/m3, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption thermodynamics indicated a spontaneous, endothermic process with increased randomness; higher process temperatures enhance the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and showing good adsorption efficiency of the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead, using the electrocoagulation method.
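The pseudo-second-order model is usually fitted in its linearised form, t/qt = 1/(k2*qe^2) + t/qe, so that a straight-line fit of t/qt against t yields qe and k2. A sketch with synthetic data (the rate constants below are invented for illustration, not the study's values):

```python
def fit_pseudo_second_order(times, qt):
    """Least-squares fit of the linearised pseudo-second-order model
    t/qt = 1/(k2*qe^2) + t/qe: slope = 1/qe, intercept = 1/(k2*qe^2)."""
    y = [t / q for t, q in zip(times, qt)]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(y) / n
    slope = (sum((t - mean_t) * (v - mean_y) for t, v in zip(times, y))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_y - slope * mean_t
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe ** 2)
    return qe, k2

# Synthetic uptake data generated from the model's integrated form
# q(t) = k2*qe^2*t / (1 + k2*qe*t), with qe = 2.0 mg/g, k2 = 0.1 g/(mg*min)
qe_true, k2_true = 2.0, 0.1
times = [5, 10, 20, 40, 60]
qt = [k2_true * qe_true**2 * t / (1 + k2_true * qe_true * t) for t in times]
qe_fit, k2_fit = fit_pseudo_second_order(times, qt)
```

Because the synthetic data follow the model exactly, the fit recovers the generating parameters; with real uptake data the linearity of t/qt versus t is itself the diagnostic for pseudo-second-order kinetics.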

Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics

Procedia PDF Downloads 149
1464 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes

Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão

Abstract:

The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures such as electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and the magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles can also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles. 
Combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density ratio of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be set accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.
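The conductivity-to-density ratio mentioned above is the classic first-order criterion for how strongly a material responds in an eddy current separator. A small sketch ranking common non-ferrous metals by this ratio; the handbook values are approximate and are my own inputs, not data from the paper:

```python
# Separability in ECS scales roughly with sigma/rho (electrical
# conductivity over mass density). Approximate handbook values.
materials = {
    "aluminium": (3.5e7, 2700),    # (S/m, kg/m^3)
    "copper":    (5.96e7, 8960),
    "zinc":      (1.69e7, 7140),
    "lead":      (4.55e6, 11340),
}

# Rank materials from strongest to weakest expected eddy current response
ranked = sorted(materials,
                key=lambda m: materials[m][0] / materials[m][1],
                reverse=True)
```

Aluminium's high sigma/rho is why it is the easiest non-ferrous metal to throw from the burden, while lead, despite being a metal, barely responds and typically reports with the non-metal fraction.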

Keywords: eddy current separation, particle size, numerical simulation, metal recovery

Procedia PDF Downloads 91
1463 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging

Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui

Abstract:

Argania spinosa is a tree endemic to the southwest of Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. This tree can grow in extremely arid regions of Morocco where annual rainfall ranges between 100 and 300 mm and where no other tree species can live. The area has been designated a UNESCO Biosphere Reserve since 1998. The argan tree is of great importance for human and animal feeding among the rural population, as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate the temporal variation in root-zone moisture dynamics in response to variations in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates between resistive woody roots, dry soil, and moist soil. Time-dependent measurements (from April to July) of resistivity sections were performed along a 94 m surface transect with a fixed electrode spacing of 2 m. The transect included eight argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can be used to accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: on top, a moderately resistive layer corresponding to relatively dry soil (a calcareous formation with intercalation of marly strata), interspersed with very resistive zones corresponding to woody roots; a conductive (moist) layer in the middle; and, below the conductive layer, another moderately resistive layer. 
We note that throughout the experiment there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing δ18O in xylem water and in the range of possible water sources, we argued that rain is the main water source in winter and spring but not in summer; rather than exploiting deep water from the aquifer, as popularly assumed, the trees use soil water at a depth of a few meters. The results of the present work confirm the idea that the roots of Argania spinosa do not grow very deep.
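The resistivity sections described above are inverted from apparent-resistivity readings taken along the transect. As a minimal sketch of the underlying measurement, assuming a Wenner electrode configuration (the abstract specifies only the 2 m spacing, not the array type), the apparent resistivity for one reading is:

```python
# Hedged sketch: apparent resistivity for a Wenner array, the raw quantity
# an ERI survey inverts into a resistivity section. The array type and the
# voltage/current values are illustrative assumptions, not the study's data.
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner array:
    rho_a = 2 * pi * a * (dV / I), with electrode spacing a."""
    return 2 * math.pi * spacing_m * delta_v / current_a

# Illustrative reading at the transect's 2 m spacing: dV = 0.5 V, I = 0.1 A
rho = wenner_apparent_resistivity(2.0, 0.5, 0.1)
print(round(rho, 1))  # 62.8 (ohm*m)
```

Repeating such readings for many electrode combinations along the 94 m transect yields the 2D section in which conductive (moist) and resistive (dry soil, woody root) layers are distinguished.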

Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture

Procedia PDF Downloads 329
1462 The Potential of Edaphic Algae for Bioremediation of the Diesel-Contaminated Soil

Authors: C. J. Tien, C. S. Chen, S. F. Huang, Z. X. Wang

Abstract:

Algae in soil ecosystems produce organic matter and oxygen by photosynthesis. Heterocyst-forming cyanobacteria can fix nitrogen to increase soil nitrogen contents, and the secretion of mucilage by some algae increases soil water content and soil aggregation. These actions improve soil quality and fertility and further increase the abundance and diversity of soil microorganisms. In addition, some mixotrophic and heterotrophic algae are able to degrade petroleum hydrocarbons. Therefore, the objectives of this study were to analyze the effects of algal addition on the degradation of total petroleum hydrocarbons (TPH) and on the diversity and activity of bacteria and algae in diesel-contaminated soil under different nutrient contents and frequencies of plowing and irrigation, in order to assess a potential bioremediation technique using edaphic algae. A known amount of diesel was added to farmland soil. This diesel-contaminated soil was subjected to five treatments: experiment-1 with algal addition and plowing and irrigation every two weeks, experiment-2 with algal addition and plowing and irrigation every four weeks, experiment-3 with algal and nutrient addition and plowing and irrigation every two weeks, experiment-4 with algal and nutrient addition and plowing and irrigation every four weeks, and a control without algal addition. Soil samples were taken every two weeks to analyze TPH concentrations, the diversity of bacteria and algae, and catabolic genes encoding functional degrading enzymes. The results show that the TPH removal rates of the five treatments after the two-month experimental period were in the order: experiment-2 > experiment-4 > experiment-3 > experiment-1 > control. This indicated that algal addition enhanced the degradation of TPH in the diesel-contaminated soil, whereas nutrient addition did not. Plowing and irrigation every four weeks resulted in more TPH removal than every two weeks.
The banding patterns of denaturing gradient gel electrophoresis (DGGE) revealed an increase in the diversity of bacteria and algae after algal addition. Three petroleum hydrocarbon-degrading algae (Anabaena sp., Oscillatoria sp. and Nostoc sp.) and the two added algal strains (Leptolyngbya sp. and Synechococcus sp.) were sequenced from prominent DGGE bands. Four hydrocarbon-degrading bacteria, Gordonia sp., Mycobacterium sp., Rhodococcus sp. and Alcanivorax sp., were abundant in the treated soils. These results suggested that the growth of indigenous bacteria and algae improved after adding edaphic algae. Real-time polymerase chain reaction results showed that four catabolic genes, encoding catechol 2,3-dioxygenase, toluene monooxygenase, xylene monooxygenase and phenol monooxygenase, were present and expressed in the treated soil. The addition of algae increased the expression of these genes at the end of the experiments, promoting the biodegradation of petroleum hydrocarbons. This study demonstrated that edaphic algae are suitable biomaterials for bioremediating diesel-contaminated soils when combined with plowing and irrigation every four weeks.
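Relative expression of catabolic genes from real-time PCR is conventionally computed with the 2^-ΔΔCt method. As a hedged sketch of that calculation (the Ct values and the choice of reference gene below are hypothetical, not taken from the study):

```python
# Hedged sketch of the Livak 2^-ddCt relative-quantification step behind
# real-time PCR expression results. All Ct values here are illustrative.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene in treated vs. control samples,
    normalised to a reference gene: 2 ** -((dCt_treated) - (dCt_control))."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# e.g. a catabolic gene amplifying 3 cycles earlier after algal addition
print(fold_change(24.0, 18.0, 27.0, 18.0))  # 8.0
```

A fold change above 1 indicates up-regulation in the treated soil relative to the control, which is the direction of effect the abstract reports after algal addition.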

Keywords: catabolic gene, diesel, diversity, edaphic algae

Procedia PDF Downloads 280
1461 Hybrid Manufacturing System to Produce 3D Structures for Osteochondral Tissue Regeneration

Authors: Pedro G. Morouço

Abstract:

One utmost challenge in Tissue Engineering is the production of 3D constructs capable of mimicking the functional hierarchy of native tissues. This is well established for osteochondral tissue, due to its complex mechanical functional unit based on the junction of articular cartilage and bone. Thus, the aim of the present study was to develop a new additive manufacturing system coupling micro-extrusion with hydrogel printing. An integrated system was developed with 2 main features: (i) the printing of up to three distinct hydrogels; (ii) in coordination, the printing of a thermoplastic structural support. The hydrogel printing module was designed as a ‘revolver-like’ system, in which the hydrogel is selected by a rotating mechanism and its deposition is controlled by pressurized air input. Specific components approved for medical use were incorporated in the material dispensing system (Nordson EFD Optimum® fluid dispensing system). The thermoplastic extrusion module enabled control of the required extrusion temperature through electric resistances in the polymer reservoir and the extrusion system. After testing and upgrades, a hydrogel module was obtained with 3 syringes (3 cm³ capacity each), a pressure range of 0-2.5 bar, a rotational speed of 0-5 rpm, and needles from 200-800 µm. This module was successfully coupled to the extrusion system, which provided temperatures up to 300˚C, a pressure range of 0-12 bar, and nozzles from 200-500 µm; the applied motor could provide a velocity range of 0-2000 mm/min. Although hydrogels and polymers have distinct printing requirements, the novel system could produce hybrid scaffolds combining the 2 modules. The morphological analysis showed high agreement (n=5) between the theoretical and obtained filament and pore sizes of the polymer (350 µm and 300 µm vs. 342±4 µm and 302±3 µm, p>0.05, respectively), and multi-material 3D constructs were successfully obtained.
Human tissues present very distinct and complex structures regarding their mechanical properties, organization, composition and dimensions. For osteochondral regenerative medicine, a multiphasic scaffold is required, as the subchondral bone and overlying cartilage must regenerate at the same time. Thus, a scaffold with 3 layers (bone, intermediate and cartilage parts) is a promising approach, and the developed system may provide a suitable solution for constructing such hybrid scaffolds with enhanced properties. The present novel system is a step forward for osteochondral tissue engineering due to its ability to generate layered, mechanically stable implants through the double printing of hydrogels with thermoplastics.
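The morphological validation above compares designed against printed feature sizes. A minimal sketch of that kind of dimensional-fidelity check, assuming a simple relative-deviation tolerance (the 5% threshold is an assumption, not the authors' criterion):

```python
# Hedged sketch: dimensional-fidelity check of printed vs. designed feature
# sizes. The tolerance fraction is an illustrative assumption.
def within_tolerance(theoretical_um, measured_um, tol_fraction=0.05):
    """True if the printed feature deviates from the design
    by at most tol_fraction (relative deviation)."""
    return abs(measured_um - theoretical_um) / theoretical_um <= tol_fraction

# Mean measured sizes reported in the abstract vs. their design values
print(within_tolerance(350, 342))  # True  (|342-350|/350 ~ 2.3%)
print(within_tolerance(300, 302))  # True  (|302-300|/300 ~ 0.7%)
```

Both mean measured sizes fall well inside this tolerance, consistent with the close agreement the abstract reports.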

Keywords: 3D bioprinting, bone regeneration, cartilage regeneration, regenerative medicine, tissue engineering

Procedia PDF Downloads 167
1460 The Social Structuring of Mate Selection: Assortative Marriage Patterns in the Israeli Jewish Population

Authors: Naava Dihi, Jon Anson

Abstract:

Love, so it appears, is not socially blind. We show that partner selection is socially constrained, and the freedom to choose is limited by at least two major factors or capitals: on the one hand, material resources and education, locating the partners on a scale of personal achievement and economic independence; on the other, the partners' ascriptive belonging to particular ethnic, or origin, groups, differentiated by the groups' social prestige, as well as by their culture, history and even physical characteristics. However, the relative importance of achievement and ascriptive factors, as well as the overlap between them, varies from society to society, depending on the society's structure and the factors shaping it. Israeli social structure has been shaped by the waves of new immigrants who arrived over the years. The timing of their arrival, their patterns of physical settlement and their occupational inclusion or exclusion have together created a mosaic of social groups whose principal common feature has been the country of origin from which they arrived. The analysis of marriage patterns helps illuminate the social meanings of the groups and their borders. To the extent that ethnic group membership has meaning for individuals and influences their life choices, the ascriptive factor will gain in importance relative to the achievement factor in their choice of marriage partner. In this research, we examine Jewish Israeli marriage patterns by looking at the marriage choices of 5,041 women aged 15 to 49 who were single at the census in 1983 and married at the time of the 1995 census, 12 years later. The database for this study was a file linking respondents from the 1983 and the 1995 censuses. In both censuses, 5 percent of households were randomly chosen, so that our sample includes about 4 percent of women in Israel in 1983.
We present three basic analyses: (1) who was still single in 1983, using personal and household data from the 1983 census (binomial model); (2) who married between 1983 and 1995, using personal and household data from the 1983 census (binomial model); (3) what were the personal characteristics of the women's partners in 1995, using data from the 1995 census (loglinear model). We show (i) that material and cultural capital both operate to delay marriage and to increase the probability of remaining single; and (ii) that while there is a clear association between ethnic group membership and education, endogamy and homogamy both operate as separate forces which constrain (but do not determine) the choice of marriage partner, and thus both serve to reproduce the current pattern of relationships, as well as identifying patterns of proximity and distance between the different groups.

Keywords: Israel, nuptiality, ascription, achievement

Procedia PDF Downloads 117
1459 Design, Construction, Validation And Use Of A Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment was designed to: be easily portable and able to run on battery or mains electricity; be calibratable at the test site; quantify CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; log data independently; switch automatically between 7 bubblers; withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be controllable remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests was conducted to assess the validity of the analyser's measurements and the data logging abilities of the apparatus. PMMA and PA 6.6 were used to assess the validity of the analyser's measurements. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test.
The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. It operated without manual interference and successfully recorded data for all 12 tests conducted in the ISO room. At the end of each test, the analyser created a data file (formatted as .csv) containing the gas concentrations measured throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. It is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators.
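Two of the analyser's functions, the pre-set timed switchover of up to seven bubblers and the .csv data logging, can be sketched in a few lines. This is a hedged illustration of the scheduling and logging logic only; the timings, gas columns and values are assumptions, not the instrument's firmware:

```python
# Hedged sketch: sequential bubbler scheduling from pre-set durations,
# plus a CSV log row of the kind the analyser produces. Illustrative only.
import csv
import io

def bubbler_schedule(durations_s):
    """Map each bubbler's pre-set duration to its (start, end) sampling
    window in seconds, switching sequentially with no gaps."""
    windows, t = [], 0
    for d in durations_s:
        windows.append((t, t + d))
        t += d
    return windows

print(bubbler_schedule([60, 60, 120]))  # [(0, 60), (60, 120), (120, 240)]

# Log one reading row as a .csv record (hypothetical concentrations, ppm)
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["time_s", "CO", "CO2", "O2", "HCN"])
writer.writerow([30, 410, 12000, 198000, 15])
```

The point of the windowed schedule is that each bubbler captures a defined interval of the fire's development, so the logged file can be read back against test time without specialist tools.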

Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment

Procedia PDF Downloads 79
1458 Association of TNF-α and Its Receptor TNFRSF1B Polymorphisms with Pulmonary Tuberculosis in Tomsk, Russian Federation

Authors: K. A. Gladkova, N. P. Babushkina, E. Y. Bragina

Abstract:

Purpose: Tuberculosis (TB), caused by Mycobacterium tuberculosis, is one of the major public health problems worldwide. The immune response to M. tuberculosis infection is a balance between inflammatory and anti-inflammatory responses, in which tumour necrosis factor-α (TNF-α) plays a key role as a pro-inflammatory cytokine. TNF-α is involved in various cellular immune responses via binding to its two types of membrane-bound receptors, TNFRSF1A and TNFRSF1B. Importantly, some variants of the TNFRSF1B gene have been considered possible markers of host susceptibility to TB. However, the possible impact of TNF-α and its receptor gene polymorphisms on TB in Tomsk has not been studied. Thus, the purpose of our study was to investigate polymorphisms of the TNF-α (rs1800629) and TNFRSF1B (rs652625 and rs525891) genes in the population of Tomsk and to evaluate their possible association with the development of pulmonary TB. Materials and Methods: The population distribution of these polymorphisms was investigated, and a case-control study was carried out on a group of people from Tomsk. Blood was collected during routine patient examinations at the Tomsk Regional TB Dispensary. Altogether, 234 TB-positive patients (80 women, 154 men, average age 28 years) and 205 healthy controls (153 women, 52 men, average age 47 years) were investigated. DNA was extracted from blood plasma by the phenol-chloroform method. Genotyping was carried out by a single-nucleotide-specific real-time PCR assay. Results: First, an interpopulation comparison was carried out between healthy individuals from Tomsk and available data from the 1000 Genomes project. For polymorphism rs1800629, the Tomsk population differed significantly from the Japanese (P = 0.0007) but was similar to the following European subpopulations: Italians (P = 0.052), Finns (P = 0.124) and British (P = 0.910).
For polymorphism rs525891, the Tomsk group differed significantly from the South African population (P = 0.019), while rs652625 showed significant differences from Asian populations: Chinese (P = 0.03) and Japanese (P = 0.004). Next, we compared healthy individuals with TB patients. No association was detected between the rs1800629 and rs652625 polymorphisms and TB. Importantly, the AT genotype of polymorphism rs525891 was significantly associated with resistance to TB (odds ratio (OR) = 0.61; 95% confidence interval (CI): 0.41-0.9; P < 0.05). Conclusion: To the best of our knowledge, this is the first report that the TNFRSF1B polymorphism rs525891 is associated with TB in the Tomsk population, with the AT genotype being protective (OR = 0.61). In contrast, no significant correlation was detected between the TNF-α (rs1800629) and TNFRSF1B (rs652625) polymorphisms and pulmonary TB cases in the Tomsk population. In conclusion, our data expand the molecular particularities associated with TB. The study was supported by a grant of the Russian Foundation for Basic Research #15-04-05852.
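The reported protective effect (OR = 0.61; 95% CI: 0.41-0.9) comes from a standard 2x2-table odds-ratio calculation with a Wald confidence interval. A minimal sketch of that calculation follows; the genotype counts below are hypothetical, chosen only to illustrate the formula, and are not the study's data:

```python
# Hedged sketch: odds ratio with a Wald 95% CI from a 2x2 table.
# The counts are illustrative, not the study's genotype frequencies.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: AT-genotype cases/controls vs. other genotypes
or_, lo, hi = odds_ratio_ci(60, 80, 100, 80)
print(round(or_, 2))  # 0.6
```

An OR below 1 whose confidence interval excludes 1 is interpreted, as in the abstract, as a protective association of the genotype with the disease.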

Keywords: polymorphism, tuberculosis, TNF-α, TNFRSF1B gene

Procedia PDF Downloads 181
1457 Combined Cultivation of Endemic Strains of Lactic Acid Bacteria and Yeast with Antimicrobial Properties

Authors: A. M. Isakhanyan, F. N. Tkhruni, N. N. Yakimovich, Z. I. Kuvaeva, T. V. Khachatryan

Abstract:

Introduction: At present, symbiotics based on different genera and species of lactic acid bacteria (LAB) and yeasts are used. One of the basic properties of probiotics is antimicrobial activity, and therefore the selection of LAB and yeast strains for co-cultivation with the aim of increasing this activity is topical. Since probiotic yeasts and bacteria have different mechanisms of action, natural synergies between species, higher viability and increased antimicrobial activity might be expected from mixing both types of probiotics. The endemic LAB strains Enterococcus faecium БТK-64, Lactobacillus plantarum БТK-66, Pediococcus pentosus БТK-28 and Lactobacillus rhamnosus БТK-109, and the yeast strains Kluyveromyces lactis БТX-412 and Saccharomycopsis sp. БТX-151, with probiotic properties and high antimicrobial activity, were selected. The strains are deposited in the "Microbial Depository Center" (MDC) of SPC "Armbiotechnology". Methods: LAB and yeast strains were isolated from different dairy products from rural households of Armenia. Genotyping by 16S rRNA sequencing for LAB and 26S rRNA sequencing for yeast was used. Combined cultivation of LAB and yeast strains was carried out in nutrient media based on milk whey, under anaerobic conditions (without shaking, in a thermostat at 37°C for 48 hours). The complex preparations were obtained by purification of the cell-free culture (CFC) broth by a combination of ion-exchange chromatography and gel filtration. The spot-on-lawn method was applied to determine antimicrobial activity, expressed in arbitrary units (AU/ml). Results: The obtained data showed that during the combined growth of bacteria and yeasts, the cultivation conditions (medium composition, growth time, genera of LAB and yeasts) affected the antimicrobial activity.
Purification of the CFC broth yielded a partially purified antimicrobial complex preparation containing metabiotics from both bacteria and yeast. The complex preparation inhibited the growth of pathogenic and conditionally pathogenic bacteria, isolated from various internal organs of diseased animals and poultry, with greater efficiency than preparations derived from yeast or LAB strains alone. Discussion: Thus, our data show the prospects for a new class of antimicrobial preparations based on the combined cultivation of endemic strains of LAB and yeast. The results suggest the potential use of the partially purified complex preparations instead of antibiotics in agriculture and for food safety. Acknowledgments: This work was supported by the RA MES State Committee of Science and the Belarus National Foundation for Basic Research in the frame of the joint Armenian-Belarusian research project 13РБ-064.

Keywords: co-cultivation, antimicrobial activity, biosafety, metabiotics, lactic acid bacteria, yeast

Procedia PDF Downloads 341
1456 Improving Health Workers’ Well-Being in Cittadella Hospital (Province of Padua), Italy

Authors: Emanuela Zilli, Suana Tikvina, Davide Bonaldo, Monica Varotto, Scilla Rizzardi, Barbara Ruzzante, Raffaele Napolitano, Stefano Bevilacqua, Antonella Ruffatto

Abstract:

A healthy workplace increases productivity and creativity and decreases absenteeism and turnover. It also contributes to creating a more secure work environment with fewer risks of violence. In the past 3 years, the healthcare system has suffered the psychological, economic and social consequences of the COVID-19 pandemic. At the same time, healthcare staff reductions determine high levels of work-related stress that are often unsustainable. The Hospital of Cittadella (in the province of Padua) has 400 beds and serves a territory of 300,000 inhabitants; the hospital itself counts 1,250 healthcare professionals. This year, the Medical Board of Directors requested additional staff; however, the economic situation in Italy cannot sustain additional hires. We have therefore initiated projects that aim to increase well-being, decrease stress and encourage activities that promote self-care. One of the projects the hospital has organized is psychomotor practice, held by therapists and trainers who operate according to the traditional method. In line with the literature, the psychomotor practice was specifically intended for staff operating in the Intensive Care Unit, Emergency Department and Pneumology Ward. The project consisted of one 45-minute session a week for 3 months. This method focuses on controlled breathing, posture, muscle work and movement to help manage stress and fatigue, creating a more mindful and sustainable lifestyle. In addition, a Qigong course was held every two weeks for 5 months; Qigong is an ancient Chinese practice designed to optimize the energy within the body, reducing stress levels and increasing general well-being. Finally, Tibetan singing crystal bowl sessions, held by a music therapist, consisted of monthly guided meditation sessions using the sounds of the crystal bowls.
Sound therapy uses the vibrations created by the crystal bowls to balance the vibrations within the body and promote relaxation. In conclusion, well-being and organizational performance are closely related. It is crucial for any organization to encourage and maintain better physical and mental health of the healthcare staff, as it directly affects productivity and, consequently, user satisfaction with the services provided.

Keywords: health promotion, healthcare workers management, well-being and organizational performance, psychomotor practice

Procedia PDF Downloads 70
1455 Flexible Design Solutions for Complex Free-Form Geometries Aimed to Optimize Performance and Resource Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), pressing problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the design process stages and a shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected.
The endless generative capacity of the codes and algorithms used in digital design offers varied conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
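Among the generative procedures the abstract lists, Lindenmayer systems are the simplest to show concretely: a form is grown by repeatedly rewriting symbols according to production rules, exactly the "apply the generating logic repeatedly to an input" pattern described above. A minimal sketch, using a textbook axiom and rule set rather than the authors' grammar:

```python
# Hedged sketch of a Lindenmayer-system (L-system) rewriting step, one of
# the generative procedures named in the abstract. The axiom and rules are
# illustrative, not the shape grammar the paper develops.
def l_system(axiom, rules, iterations):
    """Repeatedly rewrite each symbol via its production rule
    (symbols without a rule are kept unchanged)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's classic algae grammar: A -> AB, B -> A
print(l_system("A", {"A": "AB", "B": "A"}, 3))  # ABAAB
```

In an architectural pipeline, each symbol of the resulting string would be interpreted geometrically (e.g. as a segment, branch or panel operation), and varying the rules or iteration count yields the family of candidate forms from which a technical or aesthetic criterion selects the final one.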

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 378
1454 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. It is therefore important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve data quality. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, with data mesh at its core, can increase data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can also contribute to a new form of organization that is more agile and adaptable.
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better insights into the business through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by feedback from AEKIDEN's experience. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
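Decentralized monitoring in a data mesh typically means each domain team runs quality rules against its own data product. As a minimal sketch of one such rule, a completeness check against a quality gate, where the field names, records and 90% threshold are all assumptions for illustration:

```python
# Hedged sketch: a domain-owned data-quality rule of the kind a data-mesh
# team would run on its own data product. Field names, sample records and
# the 90% gate are illustrative assumptions.
def completeness(records, field):
    """Fraction of records with a non-null value for `field`."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return filled / len(records)

# Hypothetical domain data product with one missing value
orders = [{"id": 1, "amount": 10.0},
          {"id": 2, "amount": None},
          {"id": 3, "amount": 5.5}]

score = completeness(orders, "amount")
print(score >= 0.9)  # False: only 2/3 complete, below a 90% quality gate
```

Because the rule lives with the domain team rather than a central platform group, a failed gate surfaces immediately to the people who own, and can fix, the data.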

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 137
1453 Benefits of Shaping a Balance on Environmental and Economic Sustainability for Population Health

Authors: Edna Negron-Martinez

Abstract:

Our time's global challenges and trends, like those associated with climate change, demographic displacements, growing health inequalities, and the increasing burden of disease, have complex connections to the determinants of health. Information on the causes and prevention of the burden of disease is fundamental for public health actions, like preparedness and responses for disasters, and for recovery resources after the event. For instance, there is an increasing consensus about key findings on the effects and connections of the global burden of disease: it generates substantial healthcare costs, consumes essential resources and prevents the attainment of optimal health and well-being. The goal of this research endeavor is to promote a comprehensive understanding of the connections between social, environmental, and economic influences on health. These connections are illustrated by drawing clearly on the core curricula of multidisciplinary areas, such as urban design, energy, housing, and economics, as well as on the health system itself. A systematic review of primary and secondary data included a variety of issues, such as global health, natural disasters, and critical pollution impacts on people's health and the ecosystems. Environmental health is challenged by unsustainable consumption patterns and the resulting contaminants that abound in many cities and urban settings around the world. Poverty, inadequate housing, and poor health are usually linked. The house is a primary environmental health context for any individual and especially for more vulnerable groups, such as children, older adults and those who are sick. Nevertheless, very few countries show strong decoupling of environmental degradation from economic growth, as indicated by a recent 2017 report of the World Bank.
Notably, regarding the environmental fraction of the global burden of disease, a 2016 World Health Organization (WHO) report estimated that 12.6 million deaths globally, accounting for 23% (95% CI: 13-34%) of all deaths, were attributable to the environment. Environmental contaminants include heavy metals, noise pollution, light pollution, and urban sprawl. These key findings underscore the urgency of adopting, on a global scale, the United Nations post-2015 Sustainable Development Goals (SDGs). The SDGs address the social, environmental, and economic factors that influence health and health inequalities, showing how these sectors, in turn, benefit from a healthy population. Consequently, more action is necessary, from an inter-sectoral and systemic paradigm, to enforce an integrated sustainability policy aimed at the environmental, social, and economic determinants of health.

Keywords: building capacity for workforce development, ecological and environmental health effects of pollution, public health education, sustainability

Procedia PDF Downloads 109
1452 Incidence and Risk Factors of Traumatic Lumbar Puncture in Newborns in a Tertiary Care Hospital

Authors: Heena Dabas, Anju Paul, Suman Chaurasia, Ramesh Agarwal, M. Jeeva Sankar, Anurag Bajpai, Manju Saksena

Abstract:

Background: Traumatic lumbar puncture (LP) is a common occurrence and causes substantial diagnostic ambiguity. There is a paucity of data regarding its epidemiology. Objective: To assess the incidence and risk factors of traumatic LP in newborns. Design/Methods: In a prospective cohort study, all inborn neonates admitted to the NICU and planned to undergo LP for a clinical indication of sepsis were included. Neonates with diagnosed intraventricular hemorrhage (IVH) of grade III or IV were excluded. The LP was done by an operator, often a fellow or resident, assisted by a bedside nurse. The unit has a policy of not routinely using any sedation/analgesia during the procedure. LP is performed with a 26 G, 0.5-inch hypodermic needle inserted into the third or fourth lumbar interspace while the infant is in the lateral position. The infants were monitored clinically and by continuous measurement of vital parameters using a multipara monitor during the procedure. The occurrence of a traumatic tap, along with CSF parameters and other operator and assistant characteristics, was recorded at the time of the procedure. A traumatic tap was defined as the presence of visible blood or more than 500 red blood cells on microscopic examination; microscopic trauma was defined as CSF without visible blood but with numerous RBCs. The institutional ethics committee approved the study protocol. Written informed consent was obtained from the parents and the health care providers involved. Neonates were followed up till discharge/death, and a final diagnosis was assigned together with the treating team. Results: A total of 362 (21%) neonates out of 1726 born at the hospital were admitted during the study period (July 2016 to January 2017). Among these neonates, 97 (26.7%) were suspected of sepsis. A total of 54 neonates who met the eligibility criteria and whose parents consented to participate were enrolled. The mean (SD) birthweight was 1536 (732) grams and gestational age 32.0 (4.0) weeks. 
All LPs were indicated for late-onset sepsis at a median (IQR) age of 12 (5-39) days. Traumatic LP occurred in 19 neonates (35.1%; 95% CI: 22.6% to 49.3%). Frank blood was observed in 7 (36.8%), and in the remaining 12 (63.2%) the CSF showed microscopic trauma. The preliminary risk factor analysis, including birth weight, gestational age, and operator/assistant and other characteristics, did not identify clinically relevant predictors. Conclusion: Neonates requiring lumbar puncture in our study had a high incidence of traumatic tap. We were not able to identify modifiable risk factors. There is a need to understand the reasons and reduce this problem further to improve management in NICUs.
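The incidence figure above (19 traumatic taps out of 54 neonates, 95% CI 22.6% to 49.3%) is a standard binomial proportion with a confidence interval. The abstract does not state which interval method was used (the reported bounds suggest an exact Clopper-Pearson interval); as a hedged illustration, a Wilson score interval for the same counts can be computed as follows:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Incidence of traumatic tap: 19 of 54 enrolled neonates
lo, hi = wilson_ci(19, 54)
print(f"incidence = {19/54:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The Wilson bounds differ slightly from the exact bounds in the abstract, as expected for different interval methods at this sample size.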

Keywords: incidence, newborn, traumatic, lumbar puncture

Procedia PDF Downloads 299
1451 The Concept of Customary International Law: Redefining the Formation Requirements of Customary International Law Based on the Rules-and-Principles Model of Robert Alexy

Authors: Marlene Anzenberger

Abstract:

The emergence of customary international law has always been controversial. Even with the two fundamental elements of formation, longa consuetudo and opinio iuris, the process of emergence remains highly unclear. It is uncertain how much time must pass, how many subjects must act, how many actions must be taken, and how strong the opinio iuris must be for customary international law to emerge. The most appropriate solutions are based on sliding scales, where every aspect of the emergence of customary international law is up for debate depending on the specific circumstances. The approach taken here is to rationalise this process by constructing an internal line of justification for all the arguments developed in the literature and used in the external justification process. This requires defining the elements of the justification process as formal principles. Such an approach is a milestone, considering that formal principles are highly questioned nowadays and, if they are accepted at all, are mostly used in relation to competences. Furthermore, the application of formal principles needs to be scrutinised and extended. In the national context (e.g., fundamental rights), principles have so far only been able to collide. However, their optimisation character also allows for other applications, for example cooperation instead of collision. Taking these aspects into account, a rational formation scheme is to be developed that is based on Robert Alexy's weight formula. First, one has to examine the individual components of the two fundamental elements of emergence and establish whether these are all-or-nothing requirements (rules) or partially fulfillable parameters (principles), and to what extent the gradually fulfillable parameters are strictly necessary in every case. 
Second, one has to look at the previous research on formal principles, which is based in particular on Matthias Klatt's theory that formal principles are equivalent to competences and occur only in this context. However, the outcome of the paper will not merely show that this identity theory is too narrowly conceived, but also that the application of principles to date represents only a subset of their possible applications. The context of fundamental rights review has suggested to representatives such as Robert Alexy that it is purely the nature of principles to collide with each other and that the task of the practitioner is purely to resolve this collision by means of a proportionality test. However, applying this framework to the formation process of customary international law shows that a complementary application of principles is equally possible. The much-praised optimisation requirement is merely attributable to the specific circumstances and rests instead on a general possibility of optimisation. The result is twofold. On the one hand, it is an internal justification scheme that rationalises the formation process of customary international law in the sense of an internal justification, depicting cooperative behaviour between the sub-parameters within the formation elements. On the other hand, it is a fully developed test to identify the emergence of customary international law in practice.
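For context, Alexy's weight formula, on which the proposed scheme is based, is commonly rendered in the English-language literature roughly as follows (notation varies between editions; this is a sketch, not the paper's own formalisation):

```latex
% Concrete weight of principle P_i relative to a colliding principle P_j:
%   I = intensity of interference, W = abstract weight,
%   R = reliability of the underlying empirical assumptions
W_{i,j} \;=\; \frac{I_i \cdot W_i \cdot R_i}{I_j \cdot W_j \cdot R_j}
```

P_i prevails when W_{i,j} > 1; the factors are typically assigned values from a discrete scale (light, moderate, serious).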

Keywords: balancing, consuetudo, customary international law, formal principles, opinio iuris, proportionality, weight formula

Procedia PDF Downloads 8
1450 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic

Authors: Hanan Al-Jabri

Abstract:

Simultaneous interpreting is deemed the most challenging mode of interpreting by many scholars. The special constraints involved in this task, including time pressure, different linguistic systems, and stress, pose a great challenge to most interpreters. These constraints are likely to intensify when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs and is mostly asked to interpret high-profile events, which raises stress levels and further complicates the task. Under these constraints, which require fast and efficient performance, the TV interpreters of four channels were asked to render Trump's victory speech into Arabic. They also had to deal with the burden of rendering the English emotive overtones employed by the speaker into an entirely different linguistic system. The current study aims to investigate the way TV interpreters, working in the simultaneous mode, handled this task; it explores and evaluates the TV interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded, or abandoned in their renditions. It also explores the difficulties and challenges that emerged during this process and might have influenced the interpreters' linguistic choices. To achieve these aims, the study analysed Trump's victory speech delivered on November 6, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as of the speaker and his political background, to help understand the linguistic choices made in the speech; the latter investigates the linguistic tools employed by the speaker to stir people's emotions. 
These tools were investigated based on Shamaa's (1978) classification of emotive meaning according to linguistic level: the phonological, morphological, syntactic, and semantic/lexical levels. The micro framework also investigates the patterns of rendition detected in the Arabic deliveries. The results of the study identified different rendition patterns in the Arabic deliveries, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially since little is known about interpreting practices in the field of TV, particularly between Arabic and English.

Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting

Procedia PDF Downloads 162
1449 Translation and Validation of the Pain Resilience Scale in a French Population Suffering from Chronic Pain

Authors: Angeliki Gkiouzeli, Christine Rotonda, Elise Eby, Claire Touchet, Marie-Jo Brennstuhl, Cyril Tarquinio

Abstract:

Resilience is a psychological concept of possible relevance to the development and maintenance of chronic pain (CP). It refers to the ability of individuals to maintain reasonably healthy levels of physical and psychological functioning when exposed to an isolated and potentially highly disruptive event. Extensive research in recent years has supported the importance of this concept in the CP literature: increased levels of resilience were associated with lower levels of perceived pain intensity and better mental health outcomes in adults with persistent pain. The ongoing project seeks to introduce the concept of pain-specific resilience into the French literature in order to provide more appropriate measures for assessing and understanding the complexities of CP in the near future. To the best of our knowledge, there is currently no validated version of the pain-specific resilience measure, the Pain Resilience Scale (PRS), for French-speaking populations. The present work therefore aims to address this gap, firstly by performing a linguistic and cultural translation of the scale into French and secondly by studying the internal validity and reliability of the PRS for French CP populations. The forward-translation/back-translation methodology was used to achieve as faithful a cultural and linguistic translation as possible, following the recommendations of the COSMIN (Consensus-based Standards for the selection of health Measurement Instruments) group, and an online survey is currently being conducted among a representative sample of the French population suffering from CP. To date, the survey has involved one hundred respondents, with a total target of around three hundred participants at its completion. We further seek to study the metric properties of the French version of the PRS, 'L'Echelle de Résilience à la Douleur spécifique pour les Douleurs Chroniques' (ERD-DC), in French patients suffering from CP, assessing the level of pain resilience in the context of CP. 
Finally, we will explore the relationship between the level of pain resilience in the context of CP and other variables of interest commonly assessed in pain research and treatment (i.e., general resilience, self-efficacy, pain catastrophising, and quality of life). This study will provide an overview of the methodology used to address our research objectives. We will also present for the first time the main findings and further discuss the validity of the scale in the field of CP research and pain management. We hope that this tool will provide a better understanding of how CP-specific resilience processes can influence the development and maintenance of this disease. This could ultimately result in better treatment strategies specifically tailored to individual needs, thus leading to reduced healthcare costs and improved patient well-being.
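Internal consistency in questionnaire validation studies of this kind is typically quantified with Cronbach's alpha; the abstract does not name the statistic, so this is an assumption, and the item scores below are invented for illustration, not ERD-DC data:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from per-item score columns.

    item_scores: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(item_scores)
    # Sum of the individual item variances
    item_var_sum = sum(pvariance(item) for item in item_scores)
    # Variance of each respondent's total score
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 4-item scale answered by 5 respondents
items = [
    [3, 4, 4, 2, 5],
    [3, 5, 4, 2, 4],
    [2, 4, 5, 3, 5],
    [3, 4, 4, 2, 4],
]
print(round(cronbach_alpha(items), 3))
```

Values above roughly 0.7-0.8 are conventionally read as acceptable internal consistency for a research instrument.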

Keywords: chronic pain, pain measure, pain resilience, questionnaire adaptation

Procedia PDF Downloads 90
1448 Redesigning Clinical and Nursing Informatics Capstones

Authors: Sue S. Feldman

Abstract:

As clinical and nursing informatics mature, an area that has received a lot of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: as a single course over a single semester or multiple courses over multiple semesters; as a targeted demonstration of skills or a synthesis of prior knowledge and skills; mentored by a single person or by various people; submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and as a mechanism for the application of knowledge and demonstration of skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing clinical and nursing informatics discipline. While the program had offered a capstone course for many years, it lacked the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing informatics field. Since the program is online, all capstones had always been in the online environment. While this modality did not change, other aspects of instruction did. Pre-2016, the instruction modality was self-guided: students checked in with a single instructor, who monitored progress across all capstones toward a PowerPoint and written-paper deliverable. At the time, enrollment was low, and the discipline's maturity had not yet pushed hard enough. 
By 2017, doubling enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would gain and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a three-course sequence (meaning it lasted about 10 months instead of 14 weeks), deliverables were divided into many chunks, and each faculty member advised a cadre of about five students through the capstone process. The literature suggests that chunking, i.e., breaking up complex projects (such as a capstone in one summer) into smaller, more manageable pieces (such as portions of the capstone across three semesters), can increase and sustain learning while allowing for increased rigor. By doing this, the teaching responsibility was shared across faculty, with each semester's course taught by a different faculty member. This change facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty not only shared the load but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.

Keywords: capstones, clinical informatics, health informatics, informatics

Procedia PDF Downloads 133
1447 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk

Authors: Moses Jenkins

Abstract:

Much of the focus when improving energy efficiency in buildings falls on raising standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to the building fabric through surface or interstitial condensation. The paper analyzes the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper presents the range of interventions which have been made. These include insulating mass masonry walls both internally and externally, warm- and cold-roof insulation, and improvements to floors. The second part of the paper presents the results of monitoring work which has taken place in these buildings after retrofit. 
This is in terms of both thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and, crucially, the results of moisture monitoring both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish whether there are any problems with interstitial condensation. The monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of wall, roof, and floor insulation materials that are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe.
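The U-value quoted above is the reciprocal of the wall build-up's total thermal resistance. A minimal sketch of the arithmetic follows; the layer thicknesses and conductivities are assumed for illustration, not taken from the paper, while the surface resistances are the standard wall values from BS EN ISO 6946:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """U-value (W/m²K) of a wall build-up.

    layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    r_si / r_se: internal/external surface resistances (m²K/W),
    standard values for walls from BS EN ISO 6946.
    """
    r_total = r_si + r_se + sum(d / k for d, k in layers)
    return 1 / r_total

# Illustrative build-up: 600 mm solid masonry (λ ≈ 1.5 W/mK)
# plus 100 mm vapor-permeable insulation (λ ≈ 0.04 W/mK)
print(round(u_value([(0.6, 1.5), (0.1, 0.04)]), 2))
```

This shows why a relatively thin layer of low-conductivity insulation dominates the result: the insulation contributes most of the total resistance despite being a sixth of the wall's thickness.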

Keywords: insulation, condensation, masonry, historic

Procedia PDF Downloads 174
1446 Influence of the Nature of Plants on Drainage, Purification Performance and Quality of Biosolids on Faecal Sludge Planted Drying Beds in Sub-Saharan Climate Conditions

Authors: El Hadji Mamadou Sonko, Mbaye Mbéguéré, Cheikh Diop, Linda Strande

Abstract:

In the new approaches being developed for the treatment of sludge, the valorization of by-products is increasingly encouraged. In this perspective, Echinochloa pyramidalis has been successfully tested in Cameroon. Echinochloa pyramidalis is a forage plant that is efficient in the treatment of faecal sludge, providing high removal rates and biosolids of high agronomic value. Thus, before recommending this plant for planted drying beds in Senegal, it must be compared with the plants that have long been used in the field. That is the aim of this study, which examines the influence of the nature of the plants on drainage, purification performance, and the quality of the biosolids. Echinochloa pyramidalis, Typha australis, and Phragmites australis are the three macrophytes used in this study. The drainage properties of the beds were monitored through the frequency of clogging, the percentage of recovered leachate, and the dryness of the accumulated sludge. The development of the plants was followed through measurement of their density. The purification performances were evaluated from the incoming raw sludge flows and the outgoing leachate flows for parameters such as Total Solids (TS), Total Suspended Solids (TSS), Total Volatile Solids (TVS), Chemical Oxygen Demand (COD), Total Kjeldahl Nitrogen (TKN), Ammonia (NH₄⁺), Nitrate (NO₃⁻), Total Phosphorus (TP), Orthophosphorus (PO₄³⁻), and Ascaris eggs. The quality of the biosolids accumulated on the beds was measured after 3 months of maturation for parameters such as dryness, C/N ratio, NH₄⁺/NO₃⁻ ratio, ammonia, and Ascaris eggs. The results showed that the recovered leachate volume was about 40.4%, 45.6%, and 47.3%; the dryness about 41.7%, 38.7%, and 28.7%; and the clogging frequencies about 6.7%, 8.2%, and 14.2% on average for the beds planted with Echinochloa pyramidalis, Typha australis, and Phragmites australis respectively. 
The plants of Echinochloa pyramidalis (198.6 plants/m²) and Phragmites australis (138 plants/m²) have higher densities than Typha australis (90.3 plants/m²). The nature of the plants has no influence on purification performance, with reduction percentages of around 80% or more for all the parameters monitored. However, the concentrations of these various pollutants in the leachate are above the limit values of the Senegalese standard NS 05-061 for release into the environment. The biosolids harvested after 3 months of maturation are all mature, with C/N ratios around 10 for all the macrophytes. The NH₄⁺/NO₃⁻ ratio is lower than 1, except for the biosolids originating from the Echinochloa pyramidalis beds. The ammonia content is also less than 0.4 g/kg, except for biosolids from the Typha australis beds. The biosolids are also rich in mineral elements. Their concentrations of Ascaris eggs are higher than the WHO recommendations, despite an inactivation percentage of around 80%, so these biosolids must be stored for an additional time or composted. From these results, the use of Echinochloa pyramidalis as the main macrophyte can be recommended for planted drying beds in sub-Saharan climate conditions.
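The reduction percentages above follow the usual definition: the relative drop in concentration between the raw sludge inflow and the leachate outflow. A one-line sketch with hypothetical COD values (not figures from the study):

```python
def removal_pct(c_in, c_out):
    """Percentage reduction of a pollutant between raw sludge
    inflow concentration and leachate outflow concentration."""
    return (c_in - c_out) / c_in * 100

# Hypothetical COD concentrations in mg/L, for illustration only
print(removal_pct(20000, 3500))
```

Note that a high removal percentage and a leachate that still exceeds discharge limits are compatible, which is exactly the situation the study reports against standard NS 05-061.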

Keywords: faecal sludge, nature of plants, quality of biosolids, treatment performances

Procedia PDF Downloads 171
1445 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer

Authors: A. J. Cobley, L. Krishnan

Abstract:

The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are widely utilized. The fibre reinforcement is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to distribute the load applied to the fibres and to protect them from harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite usually begins with mixing the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, will lead to voids in the subsequently cured polymer. Therefore, degassing is normally carried out after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage; if a mixing method could be found that simultaneously degassed the resin mixture, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study, the effect of four different methods of mixing and degassing the pre-polymer with hardener and accelerator was investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. 
The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with the ImageJ analysis software, to study morphological changes, void content, and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing the 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than with magnetic-stirrer mixing followed by vacuum degassing. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low-frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
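Void content from micrographs is typically estimated by thresholding: pixels darker than a cutoff are counted as voids and expressed as a fraction of the image area. A toy sketch of the idea behind such an ImageJ-style analysis (the pixel grid and threshold are invented for illustration, not the study's data):

```python
def void_fraction(pixels, threshold=128):
    """Estimate void content as the fraction of dark pixels in a
    grayscale micrograph, mimicking a simple threshold analysis.

    pixels: 2-D list of 0-255 grayscale values; values below
    `threshold` are counted as voids.
    """
    flat = [p for row in pixels for p in row]
    return sum(p < threshold for p in flat) / len(flat)

# Toy 4x4 "micrograph": 3 dark (void) pixels out of 16
img = [
    [200, 210, 40, 220],
    [205, 30, 215, 225],
    [210, 220, 230, 35],
    [215, 225, 235, 240],
]
print(f"{void_fraction(img):.1%}")
```

In practice the threshold is chosen per image set (e.g., by Otsu's method), and results are averaged over several micrographs per sample.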

Keywords: degassing, low frequency ultrasound, polymer composites, voids

Procedia PDF Downloads 296
1444 Act Local, Think Global: Superior Institute of Engineering of Porto Campaign for a Sustainable Campus

Authors: R. F. Mesquita Brandão

Abstract:

Act Local, Think Global is the name of a campaign implemented at the Superior Institute of Engineering of Porto (ISEP), one of the schools of the Polytechnic of Porto, with the main objective of increasing the sustainability of the campus. ISEP has a campus of 52,000 m² and more than 7,000 students. The campaign started in 2019 and the results are very clear: in 2019 only 16% of the waste created on the campus was correctly separated for recycling, and now almost 50% of waste goes to the correct container. Actions to reduce energy consumption were implemented with significant results. One of the major problems on the campus is water leaks. To address this, a methodology was implemented for monitoring water consumption during the night, a period when consumption is normally low. If water consumption in this period exceeds a predetermined value, it may indicate a water leak, and an alarm is raised for the maintenance teams. In terms of energy, measures were implemented to achieve savings in energy consumption and in equivalent CO₂ produced. To reduce the use of plastics on the campus, the sale of 33 cl plastic water bottles was prohibited, and, in collaboration with the students' association, all meals served in the restaurants replaced the plastic water bottle with a glass that can be refilled at the water dispensers. These measures eliminated the use of more than 75,000 plastic bottles per year. In parallel, the ISEP glass water bottle was introduced for use in all scientific meetings and events. As a way of involving the whole community in sustainability issues, a vertical garden in an aquaponic system was developed and implemented. In 2019, the first vertical garden without soil was installed inside a large campus building. The system occupies the entire exterior façade (3 floors) of the entrance to ISEP's G building. On each of these floors there is a planter with 42 positions available for plants. 
Lettuces, strawberries, and peppers are examples of the vegetables produced, which can be collected by the entire community. A monitoring system was developed alongside the vertical garden, through which several parameters of the system are tracked. This project is still under development, as it will eventually run on stand-alone energy, using photovoltaic panels to meet its energy needs. The entire system was, and continues to be, developed by students and teachers and is used in class projects on several ISEP courses. These and other measures implemented on the campus are developed further in the full paper, along with all the results obtained. They allowed ISEP to become the first Portuguese higher education school to obtain the 'Coração Verde' (Green Heart) certification, awarded by LIPOR, a Portuguese company whose mission is to transform waste into new resources through the implementation of innovative and circular practices, generating and sharing value.
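The night-time leak check described in the abstract can be sketched in a few lines. This is a hypothetical implementation, not the campaign's actual system: the reading window, units, and threshold are all assumptions. The key idea is that on a healthy network, consumption during the quiet night window should drop near zero at some point; a floor that never falls below a baseline suggests a continuous leak.

```python
def night_leak_alarm(night_readings_m3, threshold_m3=0.5):
    """Flag a possible leak when night-time water consumption
    never drops below a baseline threshold.

    night_readings_m3: hourly consumption readings (m³) taken
    during the low-use night window.
    threshold_m3: assumed site-specific baseline (hypothetical).
    """
    return min(night_readings_m3) > threshold_m3

# Hypothetical readings over a four-hour night window
assert night_leak_alarm([0.9, 1.1, 0.8, 1.0]) is True   # constant flow: alarm
assert night_leak_alarm([0.2, 0.1, 0.3, 0.2]) is False  # normal quiet night
```

Using the minimum rather than the total makes the check robust to a single legitimate burst of use (e.g., a cleaning crew) during the window.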

Keywords: aquaponics, energy efficiency, recycling, sustainability, waste separation

Procedia PDF Downloads 96