Search results for: estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1848

108 Study of the Effect of Liquefaction on Buried Pipelines during Earthquakes

Authors: Mohsen Hababalahi, Morteza Bastami

Abstract:

Buried pipeline damage correlations are a critical part of the loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed in several previous earthquakes, and numerous comprehensive reports document the phenomenon. One of the main causes of buried pipeline damage during earthquakes is liquefaction, whose necessary conditions are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipelines differ substantially from other structures (being long and light in mass), comparison of their performance in previous earthquakes with that of other structures shows that the liquefaction risk to buried pipelines is not high unless the governing factors, such as earthquake intensity and loose, non-dense soil, are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations after actual earthquakes. Damage statistics from past severe earthquakes reveal that the damage ratio of pipelines (number/km) is much larger in liquefied ground than in shaken ground without liquefaction, and that damage to joints and to pipelines connected to manholes is notable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, covered by light sandy soil with a burial depth of 2.5 meters from the surface.
Since the finite element method has been applied relatively successfully to geotechnical problems, it was adopted for the numerical analysis. Evaluating this case requires geotechnical information, a classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between soil and pipelines. The results indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration, with a clear increase in the percentage of damage as liquefaction severity increases. They also indicate that although damage in this form of analysis is always associated with a particular pipe material, the nominally defined “failures” are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than by material failures. Finally, retrofit suggestions are offered to decrease the liquefaction risk to buried pipelines.
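The damage-ratio metric discussed above (repairs per km of pipeline) can be sketched in a few lines; the repair counts and segment lengths below are hypothetical, not figures from the Dashti case study:

```python
# Illustrative (hypothetical numbers): comparing pipeline damage ratios
# (repairs per km) between liquefied and non-liquefied zones, the contrast
# the damage statistics cited above describe.

def damage_ratio(n_repairs, length_km):
    """Damage ratio in repairs/km for a pipeline segment."""
    return n_repairs / length_km

liquefied = damage_ratio(n_repairs=18, length_km=6.0)      # 3.0 repairs/km
non_liquefied = damage_ratio(n_repairs=4, length_km=10.0)  # 0.4 repairs/km

amplification = liquefied / non_liquefied  # 7.5x higher in liquefied ground
```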

Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method

Procedia PDF Downloads 488
107 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol, design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors. Since perception influences purchase decisions, building that perception is critical. The FMCG industry is a low-margin business in which volumes hold the key to success, so the industry places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing them; because brand loyalty is fickle, companies work relentlessly at brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It is hypothesized that after liberalization Indian companies took up the challenge of globalization, and some now give stiff competition to MNCs; that MNCs hold a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time-series data are available from 1996-2014 for the eight selected companies, chosen on the basis of their large market share, brand equity, and market prominence. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand values of the selected FMCG companies are estimated as the excess of market capitalization over the net worth of a company, and brand value indices are calculated.
The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies, with ITC the outstanding leader, excel in market capitalization and brand values; Dabur and Tata Global Beverages Ltd are competing equally well on these measures. Advertisement expenditures are highest for HUL, followed by ITC, Colgate, and Dabur, showing that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors also affect it. Brand values of FMCG companies in India are also decreasing over the years, indicating intense competition with aggressive price wars and brand clutter. The implication for Indian companies is that they must put consistent, proactive, and relentless effort into their brand-building process: brands need focus and consistency, and brand longevity without innovation earns brand respect but does not create brand value.
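The brand-value estimate described above (market capitalization minus net worth) and a trend growth rate from regression can be sketched as follows; the company figures are hypothetical, not data from the study:

```python
import numpy as np

# Illustrative sketch (hypothetical values): brand value as the excess of
# market capitalization over net worth, and its trend growth rate from a
# log-linear regression ln(V_t) = a + b*t, so growth = exp(b) - 1.

market_cap = np.array([120.0, 140.0, 165.0, 200.0, 230.0])  # per year
net_worth  = np.array([ 40.0,  45.0,  50.0,  60.0,  65.0])

brand_value = market_cap - net_worth          # excess over net worth
t = np.arange(len(brand_value))
b, a = np.polyfit(t, np.log(brand_value), 1)  # slope of ln(V) on time
trend_growth = np.exp(b) - 1.0                # compound annual growth rate
```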

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 332
106 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a large threat to people’s health and the environment and is an issue of great concern in Beijing, brought to the government’s attention by the media. Both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard; the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends that may inform air-quality policy. The data comprise 66,650 valid hours over 2687 valid days. Lognormal, gamma, and Weibull distributions were fit to the data through parameter estimation, and the chi-squared test was employed to compare the actual data with the fitted distributions. The data were also used to uncover trends, patterns, and improvements in PM2.5 concentration over the valid period, with specific periods that received large amounts of media attention analyzed to better understand the causes of air pollution. The data clearly indicate that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 valid hours. No single distribution fit the entire dataset of 2687 days well, but each of the three distribution types was optimal in at least one of the yearly data sets, with the lognormal distribution fitting recent years better. An improvement in air quality beginning in 2014 was discovered: the first five months of 2016 reported an average PM2.5 concentration 23.8% lower than the average for the same period across all years, perhaps the result of various new pollution-control policies.
It was also found that the winter and fall months contained more days in both the good and the extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the daytime prohibition of trucks in the city and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, analysis of special intervals that attracted media attention for either unnaturally good or bad air quality shows that the government’s temporary pollution-control measures, such as more intensive road-space rationing and factory closures, are effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but still needs improvement. The analysis will be updated as new data become available.
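The distribution-fitting step can be sketched with a lognormal fit obtained from the moments of ln(x); the concentrations below are simulated, and the 75 µg/m³ threshold is an assumed example rather than an official standard:

```python
import numpy as np
from math import erf, sqrt, log

# Illustrative sketch: fit a lognormal distribution to (simulated) hourly
# PM2.5 concentrations by matching the mean and std of ln(x), then estimate
# the probability of exceeding an assumed 75 ug/m3 threshold.

rng = np.random.default_rng(42)
true_mu, true_sigma = 4.2, 0.8                 # ln-scale parameters
pm25 = rng.lognormal(true_mu, true_sigma, 20000)

mu_hat = np.log(pm25).mean()                   # method of moments on logs
sigma_hat = np.log(pm25).std()

# P(X > 75) under the fitted lognormal, via the standard normal CDF.
z = (log(75.0) - mu_hat) / sigma_hat
p_exceed = 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

In practice the fitted curve would then be compared against the observed histogram with a chi-squared test, as the abstract describes.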

Keywords: Beijing, distribution, patterns, pm2.5, trends

Procedia PDF Downloads 221
105 A Bayesian Approach for Health Workforce Planning in Portugal

Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro

Abstract:

Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, planning of the health workforce is particularly important: it ensures a proper balance between the supply of and demand for these professionals and plays a central role in the Health 2020 policy. In the past 40 years, health workforce planning in Portugal has been conducted reactively, lacking a prospective vision based on an integrated, comprehensive, and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients, and it is all the more critical given the expected future shortage of the health workforce. Furthermore, Portugal faces the aging of some professional groups (physicians and nurses): in 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60. This phenomenon, together with increasing emigration of young health professionals and changes in citizens’ illness profiles and expectations, must be considered when planning healthcare resources. The prospect of sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project, which aims to estimate the ‘Future needs of human health resources in Portugal till 2040’, the present study takes a comprehensive dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by quinquennium, up to 2040; (ii) identifying the medium- and long-term training needs of physicians and nurses up to 2040; and (iii) estimating the number of students that must be admitted into medical and nursing training each year, considering the different categories of specialties. Such an approach is all the more critical in a context of limited budget resources and changing healthcare needs. The study presents the drivers of the evolution of healthcare needs (such as demographic and technological change and the future expectations of health system users) and proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results for different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system for use by policymakers, with the potential to measure the impact of health policies at both the regional and the national level.
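A minimal sketch of the Bayesian-updating idea for a single workforce quantity, assuming a conjugate Gamma-Poisson model (the project's actual model is considerably richer, and all numbers are hypothetical):

```python
# Illustrative sketch (hypothetical numbers, not the project's model):
# a Gamma(alpha, beta) prior on the annual physician retirement rate,
# updated with observed yearly counts under a Poisson likelihood.
# Conjugacy gives posterior Gamma(alpha + sum(counts), beta + n_years),
# blending expert opinion (the prior) with registry data.

alpha, beta = 50.0, 1.0          # prior: mean 50 retirements/year
observed = [62, 58, 65]          # observed annual retirement counts

alpha_post = alpha + sum(observed)
beta_post = beta + len(observed)
posterior_mean = alpha_post / beta_post   # (50 + 185) / 4 = 58.75
```

The posterior mean sits between the prior mean and the data average, which is exactly the "best available data combined with expert opinion" behaviour the abstract describes.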

Keywords: bayesian estimation, health economics, health workforce planning, human health resources planning

Procedia PDF Downloads 225
104 Estimating Evapotranspiration of Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs

Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza

Abstract:

Multispectral and thermal infrared imagery from satellite sensors, coupled with climate and soil datasets, were used to estimate evapotranspiration and biomass in center pivots planted to maize in Brazil during the 2016 season. The hybrid remote-sensing-based model named the Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information (maximum and minimum temperature, precipitation, and relative humidity) for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early-season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness, and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also model inputs. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State, Brazil (approximate latitude -16.630434, longitude -47.192876). The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements, and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing-degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and producing daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured in the experimental fields.
Results indicate that the SETMI modeling approach with data assimilation produced reliable daily ET and crop water requirements for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model and its Nebraska-derived relationships for estimating ET and water requirements in Brazil under tropical conditions.
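The growing-degree-day interpolation idea can be sketched as follows, assuming a maize base temperature of 10 °C (a common convention, not stated in the abstract) and hypothetical crop-coefficient retrievals:

```python
import numpy as np

# Illustrative sketch of the gap-filling scheme: basal crop coefficients
# (Kcb) retrieved from reflectance on satellite overpass dates are
# interpolated to daily values on a cumulative growing-degree-day (GDD)
# axis rather than a calendar axis. All numbers are hypothetical.

t_base = 10.0  # assumed maize base temperature, deg C
tmax = np.array([30., 31., 29., 32., 33., 31., 30., 29.])
tmin = np.array([18., 19., 17., 20., 21., 19., 18., 17.])
gdd = np.cumsum(np.maximum((tmax + tmin) / 2.0 - t_base, 0.0))

# Kcb retrieved on three image dates, indexed by cumulative GDD.
gdd_images = np.array([gdd[0], gdd[4], gdd[7]])
kcb_images = np.array([0.35, 0.80, 1.10])

kcb_daily = np.interp(gdd, gdd_images, kcb_images)  # daily Kcb series
```

Daily ET would then follow as Kcb times the reference evapotranspiration computed from the weather records.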

Keywords: basal crop coefficient, irrigation, remote sensing, SETMI

Procedia PDF Downloads 120
103 Slope Stability and Landslide Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant, direct financial and safety effects. Slope stability hazard analysis is performed under static and/or dynamic loading conditions. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution corresponding to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine the various methods on the basis of their hypotheses, factor-of-safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard: the safety level is defined by comparing the shear stress with the shear strength, and the slope is considered stable when the forces resisting movement are greater than those driving it, i.e., when the factor of safety (the ratio of resisting to driving forces) is greater than 1.00.
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves identifying the types of landslide hazards; assessing the level of slope stability hazard; developing a sophisticated and practical hazard analysis method; linking the failure type of specific landslide conditions to the appropriate solution; and applying an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. The study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
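The limit-equilibrium factor of safety discussed above can be illustrated with the classical infinite-slope expression; the parameter values are hypothetical, and FS > 1.00 marks resisting forces exceeding driving forces:

```python
from math import radians, sin, cos, tan

# A minimal limit-equilibrium sketch: the infinite-slope factor of safety,
#   FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')]
#        / [gamma*z*sin(beta)*cos(beta)]
# with cohesion c' (kPa), unit weight gamma (kN/m3), failure depth z (m),
# slope angle beta, friction angle phi', and pore pressure u (kPa).
# All parameter values below are hypothetical.

def infinite_slope_fs(c, gamma, z, beta_deg, phi_deg, u=0.0):
    beta, phi = radians(beta_deg), radians(phi_deg)
    resisting = c + (gamma * z * cos(beta) ** 2 - u) * tan(phi)
    driving = gamma * z * sin(beta) * cos(beta)
    return resisting / driving

fs_dry = infinite_slope_fs(c=5.0, gamma=18.0, z=3.0, beta_deg=20.0,
                           phi_deg=30.0)  # FS ~ 1.87: stable when dry
```

Raising the pore pressure u (e.g., after heavy rain) shrinks the effective normal stress and drives FS toward failure, which is precisely the regime where the abstract argues limit equilibrium alone becomes inadequate.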

Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard

Procedia PDF Downloads 73
102 Ecosystem Approach in Aquaculture: From Experimental Recirculating Multi-Trophic Aquaculture to Operational System in Marsh Ponds

Authors: R. Simide, T. Miard

Abstract:

Integrated multi-trophic aquaculture (IMTA) is used to reduce waste from aquaculture and to increase productivity through co-cultured species. In this study, we designed a recirculating multi-trophic aquaculture system with low energy consumption, low water renewal, and easy maintenance. European seabass (Dicentrarchus labrax) were raised with co-cultured sea urchins (Paracentrotus lividus), detritivorous polychaetes fed on settled particulate matter, mussels (Mytilus galloprovincialis) used to extract suspended matter, macroalgae (Ulva sp.) used to take up dissolved nutrients, and gastropods (Phorcus turbinatus) used to clean the series of four tanks of fouling. The experiment was performed in triplicate during one month in autumn in an experimental greenhouse at the Institute Océanographique Paul Ricard (IOPR). Thanks to the absence of a physical filter, no pump was needed to pressurize the water: flow was driven by a single air-lift followed by gravity flow. Total suspended solids (TSS), biochemical oxygen demand (BOD5), turbidity, phytoplankton estimates, and dissolved nutrients (ammonium NH₄, nitrite NO₂⁻, nitrate NO₃⁻ and phosphorus PO₄³⁻) were measured weekly, while dissolved oxygen and pH were continuously recorded. Dissolved nutrients stayed below the detection threshold throughout the experiment. BOD5 decreased between the fish and macroalgae tanks. TSS increased sharply after two weeks and then decreased by the end of the experiment. These results show that bioremediation can maintain optimum growing conditions in an aquaculture system. Fish were the only species fed an external product (commercial fish pellets); the other (extractive) species fed on waste streams from the tank above or, for the sea urchins, on Ulva produced by the system. In this way, relative to fish aquaculture alone, adding the extractive species increased biomass productivity by a factor of 5.7.
In other words, the food conversion ratio dropped from 1.08 with fish only to 0.189 including all species. This experimental recirculating multi-trophic aquaculture system was thus efficient enough to reduce waste and increase productivity. The technology was subsequently reproduced at commercial scale: the IOPR, in collaboration with the Les 4 Marais company, ran a recirculating IMTA for six months in 8000 m² of water distributed among four marsh ponds. A similar air-lift and gravity recirculating system was designed, and a single fed species of shrimp (Palaemon sp.) was grown alongside three extractive species. Thanks to this joint work at the laboratory and commercial scales, we will be able to evaluate the IMTA system and discuss this sustainable aquaculture technology.
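The reported figures are internally consistent: the food conversion ratio (FCR) is feed input divided by biomass gain, so dividing the fish-only FCR by the all-species FCR recovers the 5.7 productivity factor. A minimal check (the feed amount is an arbitrary placeholder):

```python
# Checking the abstract's own figures: FCR = feed input / biomass gain,
# so adding extractive species raises harvested biomass with no extra feed.

def fcr(feed_kg, biomass_gain_kg):
    return feed_kg / biomass_gain_kg

feed = 100.0                   # arbitrary feed input, kg
fish_gain = feed / 1.08        # biomass implied by FCR = 1.08 (fish only)
total_gain = feed / 0.189      # biomass implied by FCR = 0.189 (all species)

productivity_factor = total_gain / fish_gain  # ~5.7, matching the abstract
```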

Keywords: bioremediation, integrated multi-trophic aquaculture (IMTA), laboratory and commercial scales, recirculating aquaculture, sustainable

Procedia PDF Downloads 132
101 An Integrated Framework for Wind-Wave Study in Lakes

Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung

Abstract:

Wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases of coastal structures such as marinas. This analysis aims to quantify: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., a WSA section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need was identified for an efficient and reliable wave analysis method suitable for smaller-scale marina projects. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper describes the TT approach and highlights the key advantages of using this integrated framework in lake marina projects. The core of the methodology integrates wind, water level, bathymetry, and structure geometry data; to respond to the needs of specific projects, several add-on modules extend this core. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g.
bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
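For contrast, the simplified fetch approach the framework improves upon fits in one line: a deep-water, fetch-limited estimate of significant wave height. The 0.0016 dimensionless growth coefficient is the commonly quoted value and is an assumption here, not taken from the paper:

```python
from math import sqrt

# The simplified fetch-limited baseline (deep water), sketched for contrast:
#   Hs = 0.0016 * sqrt(g * F / U^2) * U^2 / g
# with wind speed U (m/s), fetch F (m), and g = 9.81 m/s^2. The 0.0016
# coefficient is the commonly quoted empirical value (an assumption here).

def fetch_limited_hs(wind_speed, fetch, g=9.81):
    return 0.0016 * sqrt(g * fetch / wind_speed ** 2) * wind_speed ** 2 / g

hs = fetch_limited_hs(wind_speed=10.0, fetch=5000.0)  # ~0.36 m for 5 km fetch
```

This single-number estimate ignores lake geometry, random-wave spectra, and lakebed interaction, which is exactly the gap the integrated framework addresses.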

Keywords: wave modelling, wind-wave, extreme value analysis, marina

Procedia PDF Downloads 54
100 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) is a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with its fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high-temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains was developed in this study to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline-subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images, and the chain can in principle process an unlimited number of images. The evolution of surface displacements can be detected because the chain retains temporarily coherent pixels that are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, so displacement maps can be produced without waiting for the entire dataset. The other chain measures deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images within a single campaign to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence.
The temporally averaged images are then processed by a dedicated interferometry procedure integrating advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments were conducted using both synthetic and real-world GBSAR data. Sub-millimetre-level displacement time series were achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
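The coherence estimation mentioned above rests on the standard windowed coherence magnitude between two co-registered complex (SLC) images; a minimal sketch on synthetic data:

```python
import numpy as np

# A minimal sketch of windowed coherence estimation between two
# co-registered complex SAR images over a local estimation window:
#   gamma = |sum(s1 * conj(s2))| / sqrt(sum(|s1|^2) * sum(|s2|^2))
# gamma is 1 for identical signals and drops as decorrelation grows.

def coherence(s1, s2):
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
s1 = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
noise = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))

gamma_same = coherence(s1, s1)          # identical signals -> 1.0
gamma_noisy = coherence(s1, s1 + noise) # added noise lowers gamma
```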

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 134
99 Regional Disparities in Microfinance Distribution: Evidence from Indian States

Authors: Sunil Sangwan, Narayan Chandra Nayak

Abstract:

Over the last few decades, the Indian banking system has achieved remarkable growth in its credit volume. However, one of the most disturbing facts about this growth is the uneven distribution of financial services across regions. After the limited success of earlier financial inclusion efforts targeting the rural poor and the underprivileged, the provision of microfinance has lately emerged as a supplementary mechanism. There are two prominent modes of microfinance distribution in India, namely bank-SHG linkage (SBLP) and private microfinance institutions (MFIs). Ironically, these efforts too seem to have fallen short of the desired targets, as microfinance services show a skewed distribution across the states of the country. This study makes a comparative analysis of the geographical skew of SBLP and MFIs in India and examines the factors influencing their regional distribution. The results indicate that microfinance services are largely concentrated in the southern region, which accounts for about 50% of all microfinance clients and 49% of all microfinance loan portfolios. This is distantly followed by the eastern region, where client outreach is close to 25%. The north-eastern, northern, central, and western regions lag far behind, accounting for only 4%, 4%, 10%, and 7% of client outreach, respectively. The penetration of SHGs is equally skewed, with the southern region accounting for 46% of client outreach and 70% of loan portfolios, followed by the eastern region with 21% of client outreach and 13% of the loan portfolio. The north-eastern, northern, central, and western regions account for 5%, 5%, 10%, and 13% of client outreach and 3%, 3%, 7%, and 4% of loan portfolios, respectively.
The study examines the impact of literacy rate, rural poverty, population density, primary sector share, non-farm activities, loan default behavior, and bank penetration on microfinance penetration. It covers 17 major states of the country over the period 2008-2014. The results of the GMM estimation indicate a significant positive impact of literacy rate, non-farm activities, and population density on microfinance penetration across the states, while rising loan default deters it. Rural poverty has a significant negative impact on the spread of SBLP but a positive impact on MFI penetration, indicating a policy of exclusion by the formal financial system, especially towards the poor; MFIs appear to work as substitutes for banks in filling this gap. The findings point towards enhancing financial literacy, non-farm activities, and rural bank penetration, and containing loan default, to achieve greater microfinance prevalence.
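The study's estimator is GMM for a dynamic panel; as a simpler illustration of the panel logic, the sketch below runs a within (fixed-effects) regression on synthetic state-year data, which removes time-invariant state effects before relating a driver to penetration:

```python
import numpy as np

# Simplified illustration (the study itself uses GMM): a within-
# transformation (fixed-effects) estimator on synthetic state-year data.
# Demeaning each state's series removes its time-invariant effect, so the
# pooled slope recovers the driver's coefficient. All data are synthetic.

rng = np.random.default_rng(1)
n_states, n_years = 17, 7
state_effect = rng.normal(0, 2, n_states)          # unobserved state traits
literacy = rng.uniform(50, 90, (n_states, n_years))
beta_true = 0.5
penetration = (beta_true * literacy
               + state_effect[:, None]
               + rng.normal(0, 0.1, (n_states, n_years)))

# Demean within each state, pool, and run OLS on the single regressor.
x = (literacy - literacy.mean(axis=1, keepdims=True)).ravel()
y = (penetration - penetration.mean(axis=1, keepdims=True)).ravel()
beta_hat = (x @ y) / (x @ x)                       # recovers ~0.5
```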

Keywords: bank penetration, literacy rate, microfinance, primary sector share, rural non-farm activities, rural poverty

Procedia PDF Downloads 211
98 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has established A. aegypti as the primary vector, but A. albopictus is now aggravating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift with rising temperatures have significantly expanded the geographic range of these species. These versatile vectors also host the chikungunya, Zika, and yellow fever viruses. The biggest challenge endemic countries now face is the upsurge in reported co-infections with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate disease burden, knowledge is needed on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the species: A. albopictus was a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was performed by dengue NS1 ELISA, RT-PCR, and quantitative PCR, and an indirect immunofluorescence assay was performed to observe viral dissemination in different mosquito tissues. Results showed that both vectors were initially infected with all dengue serotypes (1-4) and with the co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). For DENV-2 there was a significant difference in the peak titer observed at 16 days post infection. When exposed to dual infections, A. aegypti supported all virus combinations, whereas A. albopictus maintained only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to controls (PANOVA < 0.001).
In dengue-2-infected mosquitoes, fecundity in the parent generation was significantly higher (p(Bonferroni) < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti becomes infected with multiple serotypes frequently, even at low viral titres, compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it can be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, given the absence of viral positivity by the third generation. Also, Dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 178
97 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybeans and their derivatives are important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the value of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition depends strongly on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PlanetScope (PS) satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed compositions were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed.
GRU worked well for oil (R-squared: 0.53) and protein (R-squared: 0.36), whereas SVR and PLSR showed the best results for sucrose (R-squared: 0.74) and ash (R-squared: 0.60), respectively. Although RFR and GBM provided comparable performance, those models tended to overfit severely. Among the features, vegetative features were found to be more important variables than texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be taken when designing the plot size in the experiments to avoid mixed-pixel issues.

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 105
96 Religious Capital and Entrepreneurial Behavior in Small Businesses: The Importance of Entrepreneurial Creativity

Authors: Waleed Omri

Abstract:

With the growth of the small business sector in emerging markets, developing a better understanding of what drives 'day-to-day' entrepreneurial activities has become an important issue for academics and practitioners. Innovation, as an entrepreneurial behavior, revolves around individuals who creatively engage in new organizational efforts. In a similar vein, innovation behaviors and processes at the level of individual organizational members are central to any corporate entrepreneurship strategy. Despite the broadly acknowledged importance of entrepreneurship and innovation at the individual level in the establishment of successful ventures, the literature lacks evidence on how entrepreneurs can effectively harness their skills and knowledge in the workplace. The existing literature illustrates that religion can impact the day-to-day work behavior of entrepreneurs, managers, and employees. Religious beliefs and practices can affect daily entrepreneurial activities by fostering mental abilities and traits such as creativity, intelligence, and self-efficacy. In the present study, we define religious capital as a set of personal and intangible resources, skills, and competencies that emanate from an individual’s religious values, beliefs, practices, and experiences and may be used to increase the quality of economic activities. Religious beliefs and practices give individuals religious satisfaction, which can lead them to perform better in the workplace. In addition, religious ethics and practices have been linked to various positive employee outcomes in terms of organizational change, job satisfaction, and entrepreneurial intensity. As investigations of their consequences beyond direct task performance are still scarce, we explore whether religious capital plays a role in entrepreneurs’ innovative behavior.
In sum, this study explores the determinants of individual entrepreneurial behavior by investigating the relationship between religious capital and entrepreneurs’ innovative behavior in the context of small businesses. To further explain and clarify the religious capital-innovative behavior link, the present study proposes a model to examine the mediating role of entrepreneurial creativity. We use both Islamic work ethics (IWE) and Islamic religious practices (IRP) to measure Islamic religious capital. We use structural equation modeling with robust maximum likelihood estimation to analyze data gathered from 289 Tunisian small businesses and to explore the relationships among the above-described variables. In line with the theory of planned behavior, only religious work ethics are found to increase the innovative behavior of small businesses’ owner-managers. Our findings also clearly demonstrate that the connection between religious capital-related variables and innovative behavior is better understood when the influence of entrepreneurial creativity, as a mediator of that relationship, is taken into account. By incorporating both religious capital and entrepreneurial creativity into the analysis of innovative behavior, this study provides several important practical implications for promoting the innovation process in small businesses.
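The mediation structure described above (religious capital → entrepreneurial creativity → innovative behavior) can be illustrated with simple OLS path regressions on simulated data. The coefficients and noise levels below are invented for the example, and the study itself used structural equation modeling rather than this simplified path decomposition.

```python
# Illustrative mediation sketch on simulated data: the indirect effect is the
# product of the X->M path (a) and the M->Y path (b). Not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n = 289                                  # sample size reported in the abstract
capital = rng.normal(size=n)             # religious capital (X), simulated
creativity = 0.6 * capital + rng.normal(scale=0.8, size=n)        # mediator M
behavior = 0.5 * creativity + 0.1 * capital + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

a = slope(capital, creativity)                        # X -> M path
A2 = np.column_stack([np.ones(n), creativity, capital])
b, c_prime = np.linalg.lstsq(A2, behavior, rcond=None)[0][1:]
indirect = a * b                                      # mediated effect
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

A nonzero indirect effect alongside a small direct effect is the pattern that, in the full SEM analysis, corresponds to creativity mediating the religious capital-innovation link.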

Keywords: entrepreneurial behavior, small business, religion, creativity

Procedia PDF Downloads 211
95 Effect of Vitrification on Embryos Euploidy Obtained from Thawed Oocytes

Authors: Natalia Buderatskaya, Igor Ilyin, Julia Gontar, Sergey Lavrynenko, Olga Parnitskaya, Ekaterina Ilyina, Eduard Kapustin, Yana Lakhno

Abstract:

Introduction: It is known that cryopreservation of oocytes has peculiar features due to the complex structure of the oocyte. One of the most important features is that mature oocytes contain the meiotic division spindle, which is very sensitive to even the slightest variation in temperature. Thus, the main objective of this study is to analyse the euploid embryos obtained from thawed oocytes in comparison with preimplantation genetic screening (PGS) data from fresh embryo cycles. Material and Methods: The study was conducted at 'Medical Centre IGR' from January to July 2016. Data were analysed for 908 donor oocytes obtained in 67 cycles of assisted reproductive technologies (ART), of which 693 oocytes were used in the 51 'fresh' cycles (group A) and 215 oocytes in 16 ART programs with vitrification of female gametes (group B). The average ages of donors in the groups were 27.3±2.9 and 27.8±6.6 years, respectively. Stimulation of superovulation was conducted in the standard way. Vitrification was performed 1-2 hours after transvaginal puncture, and thawing of oocytes was carried out in accordance with the standard Cryotech protocol (Japan). ICSI was performed 4-5 hours after transvaginal follicle puncture for fresh oocytes, or after thawing for vitrified female gametes. For PGS, an embryonic biopsy was done on the third or fifth day after fertilization. Diagnostic procedures were performed using fluorescence in situ hybridization for chromosomes 13, 16, 18, 21, 22, X, and Y. Only morphologically high-quality blastocysts, graded according to the Gardner criteria, were used for transfer. Statistical hypotheses were tested using the t and χ² criteria at significance levels p<0.05, p<0.01, and p<0.001. Results: The mean number of mature oocytes per cycle was 13.58±6.65 in group A and 13.44±6.68 in group B.
The survival rate of oocytes after thawing was 95.3% (n=205), which indicates the high quality of the performed vitrification. The proportion of zygotes was 91.1% (n=631) in group A and 80.5% (n=165) in group B, a statistically significant difference between the groups (p<0.001) explained by the elimination of non-viable oocytes after vitrification. This is confirmed by the fact that on the fifth day of embryo development there was no statistically significant difference in the number of blastocysts (p>0.05), which constituted 61.6% (n=389) and 63.0% (n=104) in the respective groups. For PGS, 250 embryos were analyzed in group A and 72 embryos in group B. The results showed that 40.0% (n=100) of embryos in group A and 41.7% (n=30) in group B were euploid for the studied chromosomes, with no statistically significant difference (p>0.05). Clinical pregnancy rates in the groups were 64.7% (22 pregnancies per 34 embryo transfers) and 61.5% (8 pregnancies per 13 embryo transfers), respectively, again with no significant difference between the groups (p>0.05). Conclusions: The results showed that vitrification does not affect the rate of euploid embryos obtained in assisted reproductive technologies and is not reflected in their morphological characteristics in ART programs.
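The kind of chi-squared comparison reported above can be reproduced from the published fertilization counts (631/693 zygotes in group A vs 165/215 in group B). This sketch assumes SciPy is available; the contingency table is built directly from the abstract's figures.

```python
# Chi-squared test on the reported fertilization proportions:
# group A (fresh): 631 zygotes of 693 oocytes; group B (vitrified): 165 of 215.
from scipy.stats import chi2_contingency

table = [[631, 693 - 631],   # group A: fertilized, not fertilized
         [165, 215 - 165]]   # group B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2e}")   # p < 0.001, as reported
```

The same test applied to the blastocyst and euploidy proportions would reproduce the non-significant (p>0.05) comparisons quoted in the results.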

Keywords: euploid embryos, preimplantation genetic screening, thawing oocytes, vitrification

Procedia PDF Downloads 301
94 Correlation Between Different Radiological Findings and Histopathological Diagnosis of Breast Diseases: Retrospective Review Conducted Over Six Years in King Fahad University Hospital in Eastern Province, Saudi Arabia

Authors: Sadeem Aljamaan, Reem Hariri, Rahaf Alghamdi, Batool Alotaibi, Batool Alsenan, Lama Althunayyan, Areej Alnemer

Abstract:

The aim of this study is to correlate radiological findings with histopathological results in regard to breast imaging-reporting and data system (BI-RADS) scores, size of breast masses, molecular subtypes, and suspicious radiological features, as well as to assess the concordance in histological grade between core biopsy and surgical excision among breast cancer patients, and to analyze the change in concordance in relation to neoadjuvant chemotherapy in a Saudi population. A retrospective review was conducted over a 6-year period (2017-2022) on all breast core biopsies of women preceded by radiological investigation. The chi-squared test (χ²) was performed on qualitative data, the Mann-Whitney test on quantitative non-parametric variables, and the kappa test for grade agreement. A total of 641 cases were included. Ultrasound, mammography, and magnetic resonance imaging demonstrated diagnostic accuracies of 85%, 77.9%, and 86.9%, respectively. Magnetic resonance imaging showed the highest sensitivity (72.2%), and ultrasound the lowest (61%). Concordance in tumor size with final excisions was best for magnetic resonance imaging, while mammography demonstrated a higher tendency toward overestimation (41.9%) and ultrasound the highest underestimation (67.7%). The association between basal-like molecular subtypes and a BI-RADS score of 5 was statistically significant only for magnetic resonance imaging (p=0.04). Luminal subtypes demonstrated a significantly higher percentage of spiculation on mammography. BI-RADS score 4 encompassed a substantial number of benign pathologies in all three modalities. A fair concordance rate (κ = 0.212 and 0.379) was demonstrated between excision and the preceding core biopsy grading with and without neoadjuvant therapy, respectively. The results demonstrated down-grading in cases after neoadjuvant therapy.
In cases that did not receive neoadjuvant therapy, underestimation of tumor grade in the biopsy was evident. In summary, magnetic resonance imaging had the highest sensitivity, specificity, positive predictive value, and accuracy for both diagnosis and estimation of tumor size. Mammography demonstrated better sensitivity than ultrasound and had the highest negative predictive value, but ultrasound had better specificity, positive predictive value, and accuracy. Therefore, a combination of different modalities is advantageous. The concordance rate of core biopsy grading with excision was not impacted by neoadjuvant therapy.
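The diagnostic metrics compared above are all derived from a radiology-vs-histopathology confusion matrix. The sketch below shows the standard definitions; the counts are hypothetical, chosen only so they total the study's 641 cases, and are not the actual cross-tabulation.

```python
# Standard diagnostic metrics from a 2x2 confusion matrix
# (radiological call vs histopathological ground truth).
# The counts are hypothetical illustrations, not the study's data.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),           # true positive rate
        "specificity": tn / (tn + fp),           # true negative rate
        "ppv": tp / (tp + fp),                   # positive predictive value
        "npv": tn / (tn + fn),                   # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = diagnostic_metrics(tp=130, fp=30, fn=50, tn=431)  # totals 641 cases
print(m)
```

Each modality's sensitivity, specificity, PPV, NPV, and accuracy quoted in the abstract is this calculation applied to that modality's own 2x2 table.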

Keywords: breast cancer, mammography, MRI, neoadjuvant, pathology, US

Procedia PDF Downloads 55
93 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse, or a cluster of pulses, to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
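A minimal simulation of the simplest DSPP of the kind described, a two-state Markov-modulated Poisson process generating synthetic bucket-tip times, might look as follows. The rate and switching values are invented for the sketch; the paper fits such models by maximum likelihood rather than simulating them with fixed parameters.

```python
# Two-state Markov-modulated Poisson process: the tip rate switches between a
# "dry" and a "wet" state according to a continuous-time Markov chain.
# All parameter values are illustrative, not fitted values from the paper.
import numpy as np

rng = np.random.default_rng(42)
rates = np.array([0.2, 5.0])     # bucket tips per hour in each hidden state
switch = np.array([0.05, 0.5])   # rate of leaving each state (per hour)

t, state, horizon, tips = 0.0, 0, 500.0, []
while t < horizon:
    dwell = rng.exponential(1.0 / switch[state])   # sojourn time in state
    n = rng.poisson(rates[state] * dwell)          # tips during this sojourn
    tips.extend(np.sort(t + rng.uniform(0, dwell, n)))
    t += dwell
    state = 1 - state                              # jump to the other state

print(f"{len(tips)} simulated bucket tips over {horizon} hours")
```

Fitting reverses this direction: given observed tip times N(t), the likelihood is evaluated by conditioning on the unobserved state process X(t), as described above.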

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 253
92 Experimental Investigation of Absorbent Regeneration Techniques to Lower the Cost of Combined CO₂ and SO₂ Capture Process

Authors: Bharti Garg, Ashleigh Cousins, Pauline Pearson, Vincent Verheyen, Paul Feron

Abstract:

The presence of SO₂ in power plant flue gases makes flue gas desulfurization (FGD) an essential requirement prior to post-combustion CO₂ capture (PCC) facilities. Although most power plants worldwide deploy FGD in order to comply with environmental regulations, the achieved SO₂ levels are generally not low enough for the flue gases to enter the PCC unit. The SO₂ level in the flue gases needs to be less than 10 ppm to operate the PCC installation effectively. The existing FGD units alone cannot bring the SO₂ levels down to or below the 10 ppm required for CO₂ capture; an additional scrubber alongside the existing FGD unit might be required to reach the desired levels. The absence of FGD units in Australian power plants brings an additional challenge. SO₂ concentrations in Australian power station flue gas emissions are in the range of 100-600 ppm. This imposes a serious barrier to the implementation of standard PCC technologies in Australia. CSIRO’s CS-Cap process is a unique solution that captures SO₂ and CO₂ in a single column with a single absorbent, which can potentially make the commercial deployment of carbon capture in Australia cost-effective by removing the need for FGD. The estimated savings of removing SO₂ through a process similar to CS-Cap are around USD 200 million for a 500 MW Australian power plant. Pilot plant trials conducted to generate proof of concept resulted in 100% removal of SO₂ from flue gas without utilising standard limestone-based FGD. In this work, the removal of absorbed sulfur from the aqueous amine absorbents generated in the pilot plant trials has been investigated by reactive crystallisation and thermal reclamation. More than 95% of the aqueous amines can be reclaimed from the sulfur-loaded absorbent via reactive crystallisation. However, the recovery of amines through thermal reclamation is limited and depends on the sulfur loading of the spent absorbent.
The initial experimental work revealed that reactive crystallisation is a better fit for CS-Cap’s sulfur-rich absorbent, especially as it is also capable of generating K₂SO₄ crystals of highly saleable quality (~99%). Initial cost estimation carried out for both technologies resulted in almost identical capital expenditure; however, the operating cost is considerably higher for the thermal reclaimer than for the crystalliser. The experimental data generated in the laboratory from both regeneration techniques have been used to build a simulation model in Aspen Plus. The simulation model illustrates the economic benefits that could be gained by removing flue gas desulfurization prior to a standard PCC unit and replacing it with a CS-Cap absorber column co-capturing CO₂ and SO₂, together with its absorbent regeneration system, which would be either reactive crystallisation or thermal reclamation.

Keywords: combined capture, cost analysis, crystallisation, CS-Cap, flue gas desulfurisation, regeneration, sulfur, thermal reclamation

Procedia PDF Downloads 100
91 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP), which suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that system information will increase, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1,000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point.
Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
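The entropy calculation described above can be illustrated with Shannon entropy over the ants' spatial distribution: entropy is maximal when the ants are spread uniformly over the grid and falls as they concentrate on a path. The two distributions below are illustrative, not output from the NetLogo model.

```python
# Shannon entropy of ant counts over grid cells, the quantity whose decrease
# is used above to measure self-organization. Distributions are illustrative.
import numpy as np

def spatial_entropy(counts):
    """Shannon entropy (in nats) of a distribution of ant counts over cells."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                 # drop empty cells, normalize
    return float(-(p * np.log(p)).sum())

uniform = np.ones(100)                     # ants spread evenly: max entropy
on_path = np.zeros(100)
on_path[:5] = 20                           # ants concentrated on a short path

h_uniform = spatial_entropy(uniform)       # = ln(100)
h_path = spatial_entropy(on_path)          # = ln(5), much lower
print(h_uniform, h_path)
```

Tracking this quantity over simulation time gives the entropy-decrease curve whose inflection point and slope are analyzed above.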

Keywords: complexity, self-organization, agent-based modelling, efficiency

Procedia PDF Downloads 43
90 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India

Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha

Abstract:

Landslides are among the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic imaging, the geometry, thickness, and depth of the failure zone of a landslide can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, second only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks, and intense weathering during Precambrian time deformed the rocks to depths of up to 45 m. Landslides are dominant in the southern and eastern parts of the plateau, where the drainage basins are comparatively smaller than in the north; their low drainage density and coarse texture permit greater infiltration of rainwater. The northern part of the plateau, with its high drainage density and fine texture, has less infiltration than runoff and is less susceptible to landslides. To get comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the Factor of Safety, the infinite slope model of Brunsden and Prior is used. The Factor of Safety (FS) is the ratio of resisting forces to disturbing forces: if FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values for litho-units measured from the 2D ERT image of the landslide zone. The relationship between friction angle and various soil properties is established by simple regression analysis of the apparent resistivity data. An increase in water content in the slide zone reduces the effective shearing resistance and increases the sliding movement. Time-lapse resistivity changes leading up to slope failure are tracked through a geophysical Factor of Safety, which depends on resistivity and site topography.
The ERT technique infers soil properties at variable depths over wide areas. This approach retrieves soil properties while overcoming the limitation of the point information provided by rain gauges and porous probes. Monitoring slope stability through the ERT technique is non-invasive, low-cost, and does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with permanent electrode networks should be installed to monitor the hydraulic precursors of landslide movement.
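The infinite-slope Factor of Safety invoked above can be sketched as follows. The formula is the standard infinite-slope form (cohesion plus frictional resistance over the downslope driving stress, with saturation reducing the effective normal stress); the soil parameter values are illustrative, not measurements from the Ooty site.

```python
# Infinite-slope Factor of Safety: resisting forces / disturbing forces.
# FS < 1 indicates possible failure. Parameter values are illustrative only.
import math

def factor_of_safety(c, phi_deg, beta_deg, gamma, z, m, gamma_w=9.81):
    """c: effective cohesion (kPa); phi_deg: friction angle (deg);
    beta_deg: slope angle (deg); gamma: soil unit weight (kN/m^3);
    z: depth of failure plane (m); m: saturated fraction of z (0..1)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    disturbing = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / disturbing

fs_dry = factor_of_safety(c=10, phi_deg=30, beta_deg=25, gamma=18, z=3, m=0.0)
fs_wet = factor_of_safety(c=10, phi_deg=30, beta_deg=25, gamma=18, z=3, m=1.0)
print(f"FS dry={fs_dry:.2f}, FS saturated={fs_wet:.2f}")  # saturation lowers FS
```

This is the mechanism the abstract describes: rising water content (detected via resistivity changes) raises m, lowers the resisting term, and drives FS toward 1.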

Keywords: 2D ERT, landslide, safety factor, slope stability

Procedia PDF Downloads 287
89 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of a product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers aiming at high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners seeking to improve system performance by adopting suitable maintenance strategies. Failure and repair time probability distributions of each component in the system must be known for conventional availability analysis. Generally, however, companies do not have statistics or quality control departments to store such a large amount of data, and real events or situations are described deterministically instead of stochastically. Fuzzy set theory is an alternative theory used to analyze uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers.
To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners in industry to apply to any repairable production system, and it allows reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners in analyzing system qualifications and finding better results for their working conditions. In this way, much more detailed information about system characteristics is obtained.
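A minimal sketch of the fuzzy availability idea: MTBF and MTTR are represented as triangular fuzzy numbers built from a chosen percentage spread, and combined through A = MTBF / (MTBF + MTTR) using the interval endpoints (worst case pairs low MTBF with high MTTR, best case the reverse). The MTBF/MTTR values below are illustrative, not the factory's data.

```python
# Fuzzy availability from triangular fuzzy MTBF and MTTR.
# A = MTBF / (MTBF + MTTR), evaluated at the support endpoints and the mode.
# MTBF/MTTR centers and the 15% spread are illustrative values only.
def tfn(center, spread):
    """Triangular fuzzy number (low, mode, high) from a fractional spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    lo = mtbf[0] / (mtbf[0] + mttr[2])    # worst case: low MTBF, high MTTR
    mid = mtbf[1] / (mtbf[1] + mttr[1])   # mode: the crisp availability
    hi = mtbf[2] / (mtbf[2] + mttr[0])    # best case: high MTBF, low MTTR
    return (lo, mid, hi)

A = fuzzy_availability(tfn(120.0, 0.15), tfn(4.0, 0.15))  # hours, 15% spread
print(A)    # availability as a (low, mode, high) triangular fuzzy number
```

Repeating the calculation with 20% and 25% spreads, as in the study, widens the interval and shows how sensitive the availability estimate is to uncertainty in MTBF and MTTR.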

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 195
88 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces

Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur

Abstract:

In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchanges between soil, plant cover, and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. The aerodynamic roughness length, on the other hand, is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat, and mass exchange between the soil surface and the atmosphere. In this respect, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, on soil evolution and on the characterization and prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or continuous plant cover, is motivated by significant research efforts in agronomy and biology; the major known problem in this area concerns crop damage by wind, which is a booming field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered a superposition of a finite number of one-dimensional Gaussian processes, each having its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of soil surface humidity in order to take into account a volume component in the radar backscattering problem.
As humidity increases, the dielectric constant of the soil-water mixture increases and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery, cannot be applied directly on areas covered with vegetation due to the vegetation backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the layer of soil surface. Therefore, the key issue of the numerical estimation of soil moisture is to separate the two contributions and calculate both scattering behaviors of the two layers by defining the scattering of the vegetation and the soil blow. This paper presents a synergistic methodology, and it is for estimating roughness and soil moisture from C-band radar measurements. The methodology adequately represents a microwave/optical model which has been used to calculate the scattering behavior of the aerodynamic vegetation-covered area by defining the scattering of the vegetation and the soil below.
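The multi-scale description rests on the Mallat cascade: each decomposition level splits a profile into a coarser approximation and a detail band at one spatial scale. A minimal one-dimensional sketch with the Haar wavelet (the synthetic profile and the choice of Haar are illustrative assumptions; the paper's 2D MLS description superposes Gaussian processes per scale):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Mallat algorithm with the Haar wavelet:
    split a profile into approximation (coarse) and detail (rough) parts."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar decomposition level."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

def multiscale_decompose(profile, levels):
    """Mallat cascade: repeatedly decompose the approximation,
    yielding one detail band per spatial scale."""
    details = []
    approx = profile
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return approx, details

# Synthetic rough profile: superposition of Gaussian processes at two scales.
rng = np.random.default_rng(0)
profile = rng.normal(0.0, 1.0, 256) + np.repeat(rng.normal(0.0, 3.0, 32), 8)
coarse, bands = multiscale_decompose(profile, 3)
```

The detail bands give the roughness energy per scale; the transform is invertible, so the original profile can be recovered exactly from `coarse` and `bands`.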

Keywords: aerodynamic, bi-dimensional, vegetation, synergistic

Procedia PDF Downloads 243
87 Screening of Osteoporosis in Aging Populations

Authors: Massimiliano Panella, Sara Bortoluzzi, Sophia Russotto, Daniele Nicolini, Carmela Rinaldi

Abstract:

Osteoporosis affects more than 200 million people worldwide. About 75% of osteoporosis cases are undiagnosed or diagnosed only when a bone fracture occurs. Since osteoporosis-related fractures are significant determinants of the burden of disease and of the health and social costs of aging populations, we believe that the early identification and treatment of high-risk patients should be a priority in current healthcare systems. Screening for osteoporosis by dual-energy x-ray absorptiometry (DEXA) is not cost-effective for the general population. An alternative is pulse-echo ultrasound (PEUS) because of its lower cost. To this end, we developed an early detection program for osteoporosis with PEUS and evaluated its possible impact and sustainability. We conducted a cross-sectional study including 1,050 people in Italy. Subjects with >1 major or >2 minor risk factors for osteoporosis were invited to a PEUS bone mass density (BMD) measurement at the proximal tibia. Based on the BMD values, subjects were classified as healthy (BMD > 0.783 g/cm²) or pathological, the latter including subjects with suspected osteopenia (0.719 g/cm² < BMD ≤ 0.783 g/cm²) or osteoporosis (BMD ≤ 0.719 g/cm²). The responder rate was 60.4% (634/1,050). According to the risk, a PEUS scan was recommended to 436 people, of whom 300 (mean age 45.2, 81% women) accepted to participate. We identified 240 (80%) healthy and 60 (20%) pathological subjects (47 osteopenic and 13 osteoporotic). We observed a significant association between high-risk people and reduced bone density (p=0.043), with increased risks for female gender, older age, and menopause (p<0.01). The yearly cost of the screening program was 8,242 euros. With current Italian fracture incidence rates in osteoporotic patients, we can reasonably expect that at least 6 fractures will occur in our sample within 20 years. If we consider that the mean cost per fracture in Italy today is 16,785 euros, we can estimate a theoretical cost of 100,710 euros.
According to the literature, we can assume that the early treatment of osteoporosis could avoid 24,170 euros of such costs. If we add the actual yearly cost of the treatments to the cost of our program and compare this final amount of 11,682 euros to the avoidable costs of fractures (24,170 euros), we can estimate a possible positive benefit/cost ratio of 2.07. As a major outcome, our study allowed us to identify early 60 people with significant bone loss who were not aware of their condition. This diagnostic anticipation constitutes an important element of value for the project, both for the patients, given the preventable negative outcomes caused by fractures, and for society in general, because of the related avoidable costs. Therefore, based on our findings, we believe that the PEUS-based screening performed could be a cost-effective approach to identifying osteoporosis early. However, our study has some major limitations. In particular, the economic analysis is based on theoretical scenarios, so specific studies are needed for a better estimation of the possible benefits and costs of our program.
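The abstract's economic figures chain together as simple arithmetic; a sketch reproducing them (all figures are taken directly from the abstract):

```python
# Figures from the abstract: expected fractures over 20 years, mean cost per
# fracture, avoidable share of fracture costs, and the combined yearly cost of
# the screening program plus treatments.
expected_fractures = 6
cost_per_fracture_eur = 16_785
theoretical_fracture_cost = expected_fractures * cost_per_fracture_eur  # 100,710 euros

avoidable_costs_eur = 24_170          # fracture costs avoided by early treatment
program_plus_treatment_eur = 11_682   # screening program + yearly treatment costs

benefit_cost_ratio = avoidable_costs_eur / program_plus_treatment_eur
print(round(benefit_cost_ratio, 2))  # → 2.07
```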

Keywords: osteoporosis, prevention, public health, screening

Procedia PDF Downloads 99
86 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs, based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables impose major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used for long-term storage. The proposed approach focuses on transferring agent learning so as to approximate the near-optimal operating cost with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time.
The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an offline RL algorithm known to be data-efficient and able to learn better policies than online RL algorithms trained on the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, so that agent learning can be performed without policy updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
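The batch-constrained idea can be illustrated in a tiny discrete setting (the states, actions, rewards, and transitions below are hypothetical, not the paper's microgrid model; the paper's BCQ operates on continuous deep-RL policies): the Q-update only bootstraps over actions actually present in the buffer for each next state, so the learned policy never relies on actions for which the batch gives no evidence.

```python
from collections import defaultdict

# Hypothetical replay buffer of (state, action, reward, next_state) transitions,
# e.g. collected by EMS agents trained in similar microgrid environments.
buffer = [
    ("low_soc", "charge", 1.0, "mid_soc"),
    ("mid_soc", "charge", 0.5, "high_soc"),
    ("mid_soc", "discharge", 2.0, "low_soc"),
    ("high_soc", "discharge", 3.0, "mid_soc"),
]

# Batch constraint: per state, only actions observed in the buffer are allowed.
allowed = defaultdict(set)
for s, a, _, _ in buffer:
    allowed[s].add(a)

gamma, alpha = 0.95, 0.1
Q = defaultdict(float)

# Offline Q-learning sweeps over the fixed buffer: the max in the target is
# restricted to in-buffer actions (the core of batch-constrained Q-learning),
# and no environment interaction happens during training.
for _ in range(500):
    for s, a, r, s2 in buffer:
        target = r + gamma * max((Q[(s2, a2)] for a2 in allowed[s2]), default=0.0)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

policy = {s: max(allowed[s], key=lambda a: Q[(s, a)]) for s in allowed}
```

By construction, the greedy policy only ever selects actions that appear in the batch for that state, which is what makes offline training stable here.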

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 94
85 The Effect of the Precursor Powder Size on the Electrical and Sensor Characteristics of Fully Stabilized Zirconia-Based Solid Electrolytes

Authors: Olga Yu Kurapova, Alexander V. Shorokhov, Vladimir G. Konakov

Abstract:

Nowadays, due to their exceptional anion conductivity at high temperatures, cubic zirconia solid solutions stabilized by rare-earth and alkaline-earth metal oxides are widely used as solid electrolyte (SE) materials in different electrochemical devices such as gas sensors, oxygen pumps, solid oxide fuel cells (SOFC), etc. Intensive studies are currently being carried out in the field of novel fully stabilized zirconia-based SE development. The use of precursor powders for SE manufacturing makes it possible to predetermine the microstructure and the electrical and sensor characteristics of the zirconia-based ceramics used as SE. Thus, the goal of the present work was to investigate the effect of the precursor powder size on the electrical and sensor characteristics of fully stabilized zirconia-based solid electrolytes with the compositions 0.08Y2O3∙0.92ZrO2 (YSZ), 0.06Ce2O3∙0.06Y2O3∙0.88ZrO2, and 0.09Ce2O3∙0.06Y2O3∙0.85ZrO2. The synthesis of precursor powders with different mean particle sizes was performed by sol-gel synthesis in the form of reversed co-precipitation from aqueous solutions. The cakes were washed until neutral pH and pan-dried at 110 °C. YSZ ceramics were also obtained by conventional solid-state synthesis, including milling in a planetary mill. The powders were then cold-pressed into pellets with a diameter of 7.2 mm and a thickness of ~4 mm at P ~16 kg/cm², and then hydrostatically pressed. The pellets were annealed at 1600 °C for 2 hours. The phase composition of the as-synthesized SE was investigated by X-ray photoelectron spectroscopy, ESCA (ESCA-5400 spectrometer, PHI), and by X-ray diffraction analysis, XRD (Shimadzu XRD-6000). The following galvanic cell, O2 (PO2(1)), Pt | SE | Pt, O2 (PO2(2) = 0.21 atm), was used to investigate the SE sensor properties. The value of PO2(1) was set by mixing O2 and N2 in defined proportions with an accuracy of ± 5%.
The temperature was measured by a Pt/Pt-10% Rh thermocouple. The cell electromotive force (EMF) was measured with ± 0.1 mV accuracy. During operation at constant temperature, the reproducibility was better than 5 mV. The asymmetric potential measured for all SE appeared to be negligible. It was shown that the resistivity of the YSZ ceramics decreases by about a factor of two as the mean agglomerate size decreases from 200-250 to 40 nm, most likely due to a decrease in both the surface and bulk resistivity of the grains. Thus, the overall decrease of the grain size in ceramic SE results in a significant decrease of the total ceramic resistivity, allowing sensor operation at lower temperatures. For the manufactured SE, the oxygen ion transfer number t_ion was estimated in the range 600-800 °C. YSZ ceramics manufactured from powders with a mean particle size of 40-140 nm show the highest values, i.e., 0.97-0.98. SE manufactured from precursors with a mean particle size of 40-140 nm show better sensor characteristics, i.e., temperature and oxygen concentration EMF dependencies, EMF deviation (E_Nernst - E_real), t_ion, and response time, than ceramics manufactured by conventional solid-state synthesis.
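The theoretical EMF of such an oxygen concentration cell follows the Nernst equation with a four-electron oxygen transfer; a minimal sketch (the temperature and test-gas pressure below are illustrative values, not measurements from the abstract):

```python
import math

R = 8.314    # J/(mol K), gas constant
F = 96485.0  # C/mol, Faraday constant

def nernst_emf(p_o2_ref, p_o2_test, temperature_k):
    """Theoretical EMF (V) of an O2(p1), Pt | SE | Pt, O2(p2) galvanic cell.
    Four electrons are transferred per O2 molecule (O2 + 4e- -> 2 O^2-)."""
    return (R * temperature_k) / (4.0 * F) * math.log(p_o2_ref / p_o2_test)

# Illustrative: air reference electrode (0.21 atm) vs a 0.01 atm test gas at 800 °C.
emf_v = nernst_emf(0.21, 0.01, 1073.15)
```

The measured deviation E_Nernst - E_real from this ideal value is one of the sensor characteristics compared in the abstract.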

Keywords: oxygen sensors, precursor powders, sol-gel synthesis, stabilized zirconia ceramics

Procedia PDF Downloads 250
84 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding impacts the accurate estimation of the actual burden of ADEs, coded data remain far more feasible for estimating ADE admissions than the other methods. Therefore, it is necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify these reasons rests on understanding the decision-making process involved in coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders from different levels of the coding job, such as trainee, intermediate, and advanced level coders, were purposively selected for the interviews. Thirteen clinical coders were recruited from two Auckland-region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews revealed that the coders have targets to meet and sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. They avoid contacting the doctors to clarify minor doubts, such as ADEs and the names of medications, because of the delay in getting a reply. They prefer to do some research themselves or seek help from their seniors and colleagues when making a decision, because this avoids a long wait for a reply from the doctors.
Coders think of an ADE as a small thing. Lack of time to search for information to confirm an ADE admission and inadequate communication with clinicians, along with the coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacists' services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary for external causes of disease may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers make informed decisions about these improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and shortcuts of clinical coders. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 96
83 Prediction of Endotracheal Tube Size in Children by Predicting Subglottic Diameter Using Ultrasonographic Measurement versus Traditional Formulas

Authors: Parul Jindal, Shubhi Singh, Priya Ramakrishnan, Shailender Raghuvanshi

Abstract:

Background: Knowledge of the influence of the age of the child on laryngeal dimensions is essential for all practitioners dealing with the pediatric airway. Choosing the correct endotracheal tube (ETT) size is a crucial step in pediatric patients because a large-sized tube may cause complications like post-extubation stridor and subglottic stenosis. On the other hand, a smaller tube increases gas flow resistance, aspiration risk, poor ventilation, and inaccuracy in the monitoring of end-tidal gases, and reintubation with a different tube size may also be required. Recent advancements in ultrasonography (USG) techniques should now allow an accurate and descriptive evaluation of the pediatric airway. Aims and objectives: This study was planned to determine the accuracy of USG in assessing the appropriate ETT size and to compare it with formulae based on physical indices. Methods: After obtaining approval from the Institute's Ethical and Research Committee, and written informed parental consent, the study was conducted on 100 subjects of either sex, between 12-60 months of age, undergoing various elective surgeries under general anesthesia requiring endotracheal intubation. The same experienced radiologist performed all ultrasonography. The transverse diameter was measured at the level of the cricoid cartilage by USG. After USG, general anesthesia was administered using the standard techniques followed at the institute. An experienced anesthesiologist, unaware of the ultrasonography findings, performed the endotracheal intubations with an uncuffed endotracheal tube (Portex Tracheal Tube, Smiths Medical India Pvt. Ltd.) with a Murphy's eye. The tracheal tube was considered the best fit if the air leak was satisfactory at 15-20 cm H₂O of airway pressure. The obtained values were compared with the endotracheal tube sizes calculated by ultrasonography, by various age-, height-, and weight-based formulas, and by the diameters of the right and left little fingers.
The correlation of endotracheal tube size across the different modalities was assessed, and Pearson's correlation coefficient was obtained. The comparison of the mean endotracheal tube size by ultrasonography and by the traditional formulas was done with Friedman's test and the Wilcoxon signed-rank test. Results: The predicted tube size was equal to the best fit and was best determined by ultrasonography (100%), followed by comparison to the left little finger (98%), the right little finger (97%), and the age-based formula (95%), then the multivariate formula (83%) and the body-length formula (81%). According to Pearson's correlation, there was a moderate correlation of the best-fit endotracheal tube with the tube size by the age-based formula (r=0.743), the body-length-based formula (r=0.683), the right-little-finger-based formula (r=0.587), the left-little-finger-based formula (r=0.587), and the multivariate formula (r=0.741). There was a strong correlation with ultrasonography (r=0.943). Ultrasonography was the most sensitive (100%) method of prediction, followed by comparison to the left (98%) and right (97%) little fingers and the age-based formula (95%); the multivariate formula had an even lower sensitivity (83%), whereas the body-length-based formula was the least sensitive, with a sensitivity of 78%. Conclusion: USG is a reliable method for estimating the subglottic diameter and for predicting ETT size in children.
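The reported agreement statistics rest on the standard Pearson product-moment correlation; a minimal sketch on hypothetical paired measurements (the data below are invented for illustration, not the study's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical best-fit ETT internal diameters (mm) and USG-predicted sizes.
best_fit = [4.0, 4.5, 4.5, 5.0, 5.0, 5.5, 6.0]
usg_pred = [4.0, 4.5, 5.0, 5.0, 5.0, 5.5, 6.0]
r = pearson_r(best_fit, usg_pred)
```

A value of r near 1, as in the study's USG result (r=0.943), indicates that the predicted sizes track the best-fit sizes almost linearly.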

Keywords: endotracheal intubation, pediatric airway, subglottic diameter, traditional formulas, ultrasonography

Procedia PDF Downloads 218
82 Mathematical Modelling of Bacterial Growth in Products of Animal Origin in Storage and Transport: Effects of Temperature, Use of Bacteriocins and pH Level

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cordova

Abstract:

Pathogen growth in animal-source foods is a common problem in the food industry, causing monetary losses due to product spoilage or food intoxication outbreaks in the community. In this sense, the quality of the product is reflected by the population of deteriorating agents present in it, which are mainly bacteria. The factors most likely associated with freshness in animal-source foods are temperature and processing, storage, and transport times. However, the level of deterioration of products depends, in turn, on the characteristics of the bacterial population causing the decomposition or spoilage, such as pH level and toxins. Knowing the growth dynamics of the agents involved in product contamination allows monitoring for more efficient processing. This means better quality and reasonable costs, along with a better estimation of the time and temperature intervals needed for transport and storage in order to preserve product quality. The objective of this project is to design a secondary model that measures the impact of temperature on bacterial growth and on the competition involving pH adequacy and the release of bacteriocins, in order to describe this phenomenon and thus estimate food product half-life with the least possible risk of deterioration or spoilage. To achieve this objective, the authors propose the analysis of a three-dimensional system of ordinary differential equations which includes: logistic bacterial growth extended by the inhibitory action of bacteriocins, including the effect of the medium pH; change in the medium pH levels through an adaptation of the Luedeking-Piret kinetic model; and bacteriocin concentration, modeled similarly to the pH levels. All three dimensions are influenced by temperature at all times.
This differential system is then expanded to take into consideration variable temperature and the concentration of pulsed bacteriocins, which represent characteristics inherent to the modeled situations, such as transport and storage, as well as the incorporation of substances that inhibit bacterial growth. The main results indicate that temperature changes in an early stage of transport increase the bacterial population significantly more than if they occur during the final stage. On the other hand, the incorporation of bacteriocins, as in other investigations, proved to be efficient in the short and medium term since, although the bacterial population decreased, once the bacteriocins were depleted or degraded over time, the bacteria eventually returned to their regular growth rate. The efficacy of the bacteriocins at low temperatures decreased slightly, consistent with the fact that their natural degradation rate also decreased. In summary, the implementation of the mathematical model allowed the simulation of a set of possible bacteria present in animal-based products, along with their properties, in various transport and storage situations, which leads us to state that the optimum for inhibiting bacterial growth is a combination of constant low temperatures and the initial use of bacteriocins.
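A minimal numerical sketch of such a three-dimensional system (all rate constants, functional forms, and the temperature dependence below are illustrative assumptions, not the authors' fitted model): logistic growth inhibited by bacteriocin, a Luedeking-Piret-type acidification term, and first-order bacteriocin decay, integrated with a forward Euler scheme.

```python
import numpy as np

def simulate(b0=1e3, ph0=6.5, c0=0.0, T=10.0, t_end=20.0, dt=0.01):
    """Forward-Euler integration of an illustrative growth/pH/bacteriocin model.
    b: bacterial count, ph: medium pH, c: bacteriocin concentration, T: temp (C)."""
    mu_ref, K = 0.25, 1e8                     # reference growth rate, carrying capacity
    mu = mu_ref * np.exp(0.08 * (T - 10.0))   # assumed temperature dependence
    k_inh = 0.5                               # bacteriocin inhibition strength
    alpha, beta = 1e-9, 1e-11                 # Luedeking-Piret growth/non-growth terms
    delta = 0.05                              # bacteriocin degradation rate
    b, ph, c = b0, ph0, c0
    for _ in range(int(t_end / dt)):
        growth = mu * b * (1.0 - b / K)       # logistic growth term
        db = growth - k_inh * c * b           # growth minus bacteriocin inhibition
        dph = -(alpha * growth + beta * b)    # acidification tied to growth and count
        dc = -delta * c                       # pulsed bacteriocin decays over time
        b, ph, c = b + db * dt, ph + dph * dt, c + dc * dt
    return b, ph, c

b_no, ph_no, _ = simulate(c0=0.0)
b_bac, _, c_end = simulate(c0=1.0)  # same conditions with an initial bacteriocin pulse
```

Consistent with the abstract's qualitative findings, the pulse suppresses growth only while the bacteriocin persists; once it degrades, the population resumes its regular growth rate.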

Keywords: bacterial growth, bacteriocins, mathematical modelling, temperature

Procedia PDF Downloads 108
81 Nephroprotective Effect of Aqueous Extract of Plectranthus amboinicus (Roxb.) Leaves in Adriamycin Induced Acute Renal Failure in Wistar Rats: A Biochemical and Histopathological Assessment

Authors: Ampe Mohottige Sachinthi Sandaruwani Amarasiri, Anoja Priyadarshani Attanayake, Kamani Ayoma Perera Wijewardana Jayatilaka, Lakmini Kumari Boralugoda Mudduwa

Abstract:

The search for alternative pharmacological therapies based on natural extracts for renal failure has become an urgent need due to the paucity of effective pharmacotherapy. The current study was undertaken to evaluate the acute nephroprotective effect of the aqueous leaf extract of Plectranthus amboinicus (Roxb.) (Family: Lamiaceae), a medicinal plant used in traditional Ayurvedic medicine for the management of renal diseases in Sri Lanka. The study was performed in adriamycin (ADR)-induced nephrotoxicity in Wistar rats. Wistar rats were randomly divided into four groups, each with six rats. A single dose of ADR (20 mg/kg body wt., ip) was used for the induction of nephrotoxicity in all groups of rats except group one. The treatments were started 24 hours after the induction of nephrotoxicity and continued for three days. Groups one and two served as healthy and nephrotoxic controls and were administered equivalent volumes of normal saline (0.9% NaCl) orally. Nephrotoxic rats in groups three and four were administered the lyophilized powder of the aqueous extract of P. amboinicus (400 mg/kg body wt.; equivalent to the human therapeutic dose) and the standard drug, fosinopril sodium (0.09 mg/kg body wt.), respectively. Urine and blood samples were collected from rats in each group at the end of the intervention period for the estimation of selected renal parameters. H and E stained sections of the kidney tissues were examined for histopathological changes. Rats treated with the plant extract showed significant improvement in biochemical parameters and histopathological changes compared to the ADR-induced nephrotoxic group. The elevated serum concentrations of creatinine and β2-microglobulin were decreased by 38% and 66%, respectively, in plant-extract-treated nephrotoxic rats (p < 0.05). In addition, serum concentrations of total protein and albumin were significantly increased, by 25% and 14% respectively, in rats treated with P. amboinicus (p < 0.05).
The results for β2-microglobulin and serum total protein demonstrated a significantly greater improvement in the elevated values in rats administered the plant extract (400 mg/kg) than in those given fosinopril (0.09 mg/kg). Protein loss in 24-hour urine samples was significantly decreased in rats treated with both fosinopril (86%) and P. amboinicus (56%) at the end of the intervention (p < 0.01). Accordingly, an attenuation of morphological destruction was observed in the H and E stained sections of the kidney with both the plant extract and fosinopril treatments. The results of the present study revealed that the aqueous leaf extract of P. amboinicus possesses significant nephroprotective activity at the equivalent therapeutic dose of 400 mg/kg against adriamycin-induced acute nephrotoxicity.

Keywords: biochemical assessment, histopathological assessment, nephroprotective activity, Plectranthus amboinicus

Procedia PDF Downloads 114
80 Motives for Reshoring from China to Europe: A Hierarchical Classification of Companies

Authors: Fabienne Fel, Eric Griette

Abstract:

Reshoring, whether back-reshoring or near-reshoring, is a fairly recent phenomenon. Despite the economic and political interest of this topic, academic research questioning the determinants of reshoring remains rare. Our paper aims to contribute to filling this gap. In order to better understand the reasons for reshoring, we conducted a study among 280 French firms during spring 2016, three-quarters of which sourced, or source, in China. 105 firms in the sample have reshored all or part of their Chinese production or supply in recent years, and we aimed to establish a typology of the motives that drove them to this decision. We asked our respondents about the history of their Chinese supplies, their current reshoring strategies, and their motivations. Statistical analysis was performed with SPSS 22 and SPAD 8. Our results show that a change in commercial and financial terms with China is the first motive explaining the current reshoring movement from this country (it applies to 54% of our respondents). A change in corporate strategy is the second motive (30% of our respondents); the reshoring decision follows a change in the company's strategy (upgrading, implementation of a CSR policy, or a 'lean management' strategy). The third motive (14% of our sample) is a mere correction of the initial offshoring decision, considered a mistake (under-estimation of hidden costs, non-quality and non-responsiveness problems). Some authors emphasize that developing a short supply chain, involving geographic proximity between design and production, gives a competitive advantage to companies wishing to offer innovative products. Admittedly, 40% of our respondents indicate that this motive could have played a part in their decision to reshore, but this reason was not sufficient for any of them and is not an intrinsic motive for leaving Chinese suppliers.
Having questioned our respondents about the importance given to the various problems leading them to reshore, we then performed a Principal Components Analysis (PCA), associated with an Ascending Hierarchical Classification (AHC) based on Ward's criterion, so as to bring out more specific motivations. Three main classes of companies can be distinguished: the 'Cost Killers' (23% of the sample), which reshore their supplies from China only because of higher procurement costs, so as to find lower costs elsewhere; the 'Realists' (50% of the sample), which give equal weight to increasing procurement costs in China and, to a large extent, to the quality of their supplies, and which tend to take advantage of this changing environment to change their procurement strategy, seeking suppliers offering better quality and responsiveness; and the 'Voluntarists' (26% of the sample), which choose to reshore their Chinese supplies regardless of higher Chinese costs, in order to obtain better quality and greater responsiveness. We emphasize that while the main driver for reshoring from China is indeed higher local costs, it should not be regarded as an exclusive motivation; 77% of the companies in the sample are also seeking, sometimes exclusively, more responsive suppliers that are attentive to quality, respect for the environment, and intellectual property.
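The PCA-plus-AHC pipeline described above can be sketched on synthetic survey data (the scores, group structure, and variable names below are invented for illustration; the study used SPSS 22 and SPAD 8 on real questionnaire responses):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical survey matrix: rows are firms, columns are importance scores
# (1-5 scale) given to reshoring problems: costs, quality, responsiveness.
rng = np.random.default_rng(42)
scores = np.vstack([
    rng.normal([5, 2, 2], 0.2, size=(10, 3)),  # cost-driven firms
    rng.normal([4, 4, 3], 0.2, size=(10, 3)),  # mixed motives
    rng.normal([2, 5, 5], 0.2, size=(10, 3)),  # quality/responsiveness-driven
])

# PCA via SVD on the centered data: project firms onto the first two components.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T

# Ascending hierarchical classification with Ward's criterion, cut at 3 classes.
tree = linkage(scores, method="ward")
classes = fcluster(tree, t=3, criterion="maxclust")
```

Cutting the Ward dendrogram at three classes mirrors the Cost Killers / Realists / Voluntarists partition; the PCA projection is typically used to visualize and interpret those classes.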

Keywords: China, procurement, reshoring, strategy, supplies

Procedia PDF Downloads 302
79 Estimation of Particle Number and Mass Doses Inhaled in a Busy Street in Lublin, Poland

Authors: Bernard Polednik, Adam Piotrowicz, Lukasz Guz, Marzenna Dudzinska

Abstract:

Transportation is considered to be responsible for the increased exposure of road users (drivers, car passengers, and pedestrians), as well as of inhabitants of houses located near roads, to pollutants emitted from vehicles. Accurate estimates are, however, difficult, as exposure depends on many factors, such as traffic intensity and fuel type, as well as the topography and the built-up area around the individual routes. The season and weather conditions are also important. In the case of inhabitants of houses located near roads, their exposure depends on the distance from the road, window tightness, and other factors that decrease pollutant infiltration. This work reports the variations of particle concentrations along a selected road in Lublin, Poland. Their impact on the exposure of road users as well as of inhabitants of houses located near the road is also presented. Mobile and fixed-site measurements were carried out at peak (around 8 a.m. and 4 p.m.) and off-peak (12 a.m., 4 a.m., and 12 p.m.) traffic times in all four seasons. Fixed-site measurements were performed at 12 measurement points along the route. The number and mass concentrations of particles were determined with a P-Trak model 8525, an OPS 3330, and a DustTrak DRX model 8533 (TSI Inc., USA), and a Grimm Aerosol Spectrometer 1.109 with Nano Sizer 1.321 (Grimm Aerosol, Germany). The obtained results indicated that the highest concentrations of traffic-related pollution were measured near 4-way traffic intersections (TIs) during peak hours in the autumn and winter. The highest average number concentration of ultrafine particles (PN0.1) and mass concentration of fine particles (PM2.5) in fixed-site measurements were obtained in the autumn and amounted to 23.6 ± 9.2×10³ pt/cm³ and 135.1 ± 11.3 µg/m³, respectively. The highest average number concentration of submicrometer particles (PN1) was measured in the winter and amounted to 68 ± 26.8×10³ pt/cm³.
The estimated doses of particles deposited in lungs within an hour near 4-way TIs during peak hours in the summer amounted to 4.3 ± 3.3×10⁹ pt/h (PN0.1) and 2.9 ± 1.4 µg/h (PM2.5) for commuters, and 3.9 ± 1.1×10⁹ pt/h (PN0.1) and 2.5 ± 0.4 µg/h (PM2.5) for pedestrians. When estimating the doses inhaled by the inhabitants of premises located near the road, one should take into account the different fractional penetration of particles from outdoors to indoors. Such doses assessed for the autumn and winter are up to twice as high as the doses inhaled by commuters and pedestrians in the summer. In the winter, traffic-related ultrafine particles account for over 70% of all ultrafine particles deposited in pedestrians' lungs. The share of traffic-related PM10 particles was estimated at approximately 33.5%. In conclusion, the results of the particle concentration measurements along a road in Lublin indicated that the concentration is mainly affected by traffic intensity and weather conditions. Further detailed research should focus on how the season and the meteorological conditions affect the concentration levels of traffic-related pollutants and the exposure of commuters and pedestrians as well as of the inhabitants of houses located near traffic routes.
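A deposited dose per hour of the kind reported above is typically the product of ambient concentration, inhalation rate, and a size-dependent deposition fraction; a minimal sketch (the inhalation rate and deposition fractions below are generic illustrative values, not the study's parameters):

```python
def deposited_dose_per_hour(concentration, inhalation_rate_m3_h, deposition_fraction):
    """Deposited dose per hour of exposure.
    concentration: ambient level (e.g. ug/m3 for PM2.5, pt/m3 for particle number).
    inhalation_rate_m3_h: volume of air breathed per hour (m3/h).
    deposition_fraction: fraction of inhaled particles retained in the lungs."""
    return concentration * inhalation_rate_m3_h * deposition_fraction

# Illustrative PM2.5 case: autumn fixed-site mean from the abstract (135.1 ug/m3),
# with an assumed light-activity inhalation rate and deposition fraction.
pm25_dose_ug_h = deposited_dose_per_hour(135.1, 1.2, 0.3)

# Illustrative ultrafine number dose: convert pt/cm3 to pt/m3 first (x 1e6);
# ultrafine particles deposit more efficiently, hence a higher assumed fraction.
pn01_dose_pt_h = deposited_dose_per_hour(23.6e3 * 1e6, 1.2, 0.5)
```

Indoor doses for nearby residents would scale the same product by an additional outdoor-to-indoor penetration factor, which is the fractional penetration the abstract refers to.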

Keywords: air quality, deposition dose, health effects, vehicle emissions

Procedia PDF Downloads 75