Search results for: budget distribution
465 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India
Authors: Alok Saini, Rajkumar Ghosh
Abstract:
Water is nature's most important resource for the survival of all living things, but freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km) will harvest about 91110 ML of rainwater by 2051 (assuming the present annual rainfall remains constant). However, only 17.71 ML of rainwater was harvested from just 53 buildings in the Gomti Nagar area in 2021. The water level will rise by 13 cm in Gomti Nagar from such groundwater recharge. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and lower annual rainfall, groundwater recharge is less than groundwater abstraction. In the current scenario, only 0.07% of rainwater recharges groundwater through RTRWHs in Gomti Nagar, but if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table in the Gomti Nagar area. Gomti Nagar is situated in 'Zone-A' (water distribution area), and groundwater is the primary source of freshwater supply. In Gomti Nagar, the difference between groundwater abstraction and recharge will reach 735570 ML in 30 years. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater through RTRWHs annually. The most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation, residential kitchens, and gardens (home-grown fruit and vegetables). According to the bylaws, RTRWH installations are required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and households harvesting rainwater from RTRWHs can be considered water self-sufficient. Rooftop Rainwater Harvesting Systems (RTRWHs) are the least expensive, most sustainable, and eco-friendly alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, but only if all buildings install RTRWHs and harvest rainwater for groundwater recharge. As a result, this study serves as an impact assessment of RTRWH implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq. km). This study suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to improve the declining water level trend and balance the groundwater table in this area. Continued over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks.
Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system, RTRWHs, water table, water level
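To make the arithmetic behind such estimates concrete, the short sketch below applies the standard rooftop-harvest and water-table-rise formulas (harvested volume = roof area × rainfall × runoff coefficient; rise = recharge volume / (recharge area × specific yield)). The runoff coefficient, rooftop area, rainfall depth, and specific yield used here are illustrative assumptions, not values reported in the study.

```python
# Illustrative sketch of rooftop rainwater-harvest and water-table-rise estimates.
# The runoff coefficient and specific yield below are assumed values for
# demonstration only; they are not taken from the Gomti Nagar study.

def harvested_volume_m3(roof_area_m2, annual_rainfall_m, runoff_coeff=0.8):
    """Annual volume captured by a rooftop system (m^3)."""
    return roof_area_m2 * annual_rainfall_m * runoff_coeff

def water_table_rise_m(recharge_m3, recharge_area_m2, specific_yield=0.12):
    """Rise of the water table produced by a given recharge volume (m)."""
    return recharge_m3 / (recharge_area_m2 * specific_yield)

if __name__ == "__main__":
    # Example: a 150 m^2 rooftop and 1.0 m of annual rainfall (assumed figures).
    v = harvested_volume_m3(150, 1.0)
    print(f"One rooftop harvests about {v:.0f} m^3 per year")

    # Recharge of 3037 ML (= 3.037e6 m^3, the study's city-wide annual estimate)
    # spread over 49.2 km^2 with an assumed specific yield of 0.12.
    rise = water_table_rise_m(3.037e6, 49.2e6)
    print(f"Estimated annual water-table rise: {rise*100:.1f} cm")
```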
Procedia PDF Downloads 97
464 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan
Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva
Abstract:
Planned preventive vaccination, which is carried out in the Republic of Kazakhstan, has promoted a permanent decrease in the incidence of measles and viral hepatitis B (VHB). In the structure of VHB patients, people of young, working age prevail. Monitoring of infectious incidence, monitoring of immunization coverage of the population, and random serological control of immunity enable well-timed identification of the spread of the pathogen, assessment of the effectiveness of the measures taken, and forecasting. Serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan to identify antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by enzyme-linked immunosorbent assay (ELISA) with the "VektoKor – IgG" test system ('Vektor-Best' JSC). Antibodies to the HBs antigen of the hepatitis B virus in blood serum were identified by ELISA with the "VektoHBsAg – antibodies" test system ('Vektor-Best' JSC). A result is considered positive if the concentration of IgG to measles virus in the studied sample is 0.18 IU/ml or more; the protective level of anti-HBsAg concentration is 10 mIU/ml. The results of the study of postvaccinal measles immunity showed that the share of seropositive people was 87.7% of the total number surveyed. The level of postvaccinal immunity to measles differs between age groups. Among people older than 56, the percentage of seropositive individuals was 95.2%. Among people aged 15-25, 87.0% were seropositive, and at the age of 36-45, 86.6%. In the age groups of 25-35 and 36-45, the share of seropositive people was approximately at the same level, 88.5% and 88.8% respectively. The share of people seronegative to the measles virus was 12.3%. The biggest share of seronegative people was found among people aged 36-45 (13.4%) and 15-25 (13.0%). The analysis of the examined people for postvaccinal immunity to viral hepatitis B showed that only 33.5% of all surveyed have the protective anti-HBsAg concentration of 10 mIU/ml or more. The biggest share of people protected from the VHB virus is observed in the 36-45 age group (60%). In the indicator group above 56, seropositive people made up 4.8%. A high percentage of seronegative people was observed in all studied age groups, from 40.0% to 95.2%. The group least protected from VHB infection is people above 56 (95.2% seronegative). The probability of VHB infection is also high among young people aged 25-35, where the percentage of seronegative people was 80%. Thus, the results of the conducted research testify to the need for serological monitoring of postvaccinal immunity for operational assessment of the epidemiological situation, early identification of its changes, and prediction of approaching danger.
Keywords: antibodies, blood serum, immunity, immunoglobulin
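A minimal sketch of how seropositivity shares per age group can be tabulated from individual titres, using the cutoffs quoted above (0.18 IU/ml for measles IgG, 10 mIU/ml for anti-HBsAg); the sample records are invented placeholders, not study data.

```python
# Classify individual titres against the protective cutoffs quoted in the
# abstract and tabulate the seropositive share per age group.
# The records below are invented placeholders, not study data.
import pandas as pd

MEASLES_CUTOFF_IU_ML = 0.18
ANTI_HBS_CUTOFF_MIU_ML = 10.0

records = pd.DataFrame({
    "age_group":   ["15-25", "15-25", "25-35", "36-45", "56+"],
    "measles_igg": [0.25,     0.10,    0.40,    0.22,    0.31],   # IU/ml
    "anti_hbs":    [12.0,     4.0,     2.5,     35.0,    1.0],    # mIU/ml
})

records["measles_pos"] = records["measles_igg"] >= MEASLES_CUTOFF_IU_ML
records["hbs_pos"] = records["anti_hbs"] >= ANTI_HBS_CUTOFF_MIU_ML

summary = records.groupby("age_group")[["measles_pos", "hbs_pos"]].mean() * 100
print(summary.round(1))  # percentage seropositive per age group
```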
Procedia PDF Downloads 255
463 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok
Authors: Pratima Pokharel
Abstract:
When sewers overflow due to rainfall in urban areas, public health risks arise when individuals are exposed to the contaminated floodwater. Nevertheless, the extent to which such infections pose a risk to public health is still unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. The results showed that more cases are reported in the wet season than in the dry season. It was also found that in Bangkok, the probability of infection with diarrheal diseases in the wet season is higher for the age group between 15 and 44. The probability of infection is highest for children under 5 years, but they are not influenced by wet weather. Further, this study introduced vulnerability indicators that contribute to health risks from urban flooding. For the vulnerability analysis, the study chose two variables that contribute to health risk: economic status and age. Assuming that people's economic status depends on the type of house they live in, the study shows the spatial distribution of economic status in the vulnerability maps. The vulnerability map result shows that people living in Sukhumvit have low vulnerability to health risks with respect to the type of house they live in. In addition, the probability of infection with diarrhea was analyzed by age. Moreover, a field survey was carried out to validate the vulnerability of people; it showed that health vulnerability depends on economic status, income level, and education. The results depict that people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with a 2-year rainfall event to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow. The 1D results show higher concentrations for dry weather flows and a large dilution of the concentration at the commencement of a rainfall event, resulting in a drop in concentration due to the runoff generated after rainfall. The model produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. In addition, the study simulated 5-year and 10-year rainfall events to show the variation in health hazards and risks. It was found that even though the hazard coverage is highest with the 10-year rainfall event among the three events, the risk was observed to be the same for the 5-year and 10-year rainfall events.
Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework
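A minimal sketch of the hazard-times-vulnerability overlay that such a risk-mapping framework typically ends with; the grids, class breaks, and scores below are assumed for illustration and are not the study's values.

```python
# Combine a flood-hazard grid with a vulnerability grid into a risk map.
# The arrays and class breaks are illustrative assumptions only.
import numpy as np

# Hazard score per cell (e.g. scaled from flood depth/duration), range 0-1.
hazard = np.array([[0.1, 0.4, 0.8],
                   [0.2, 0.6, 0.9],
                   [0.0, 0.3, 0.7]])

# Vulnerability score per cell (e.g. from housing type, income, age), range 0-1.
vulnerability = np.array([[0.5, 0.5, 0.9],
                          [0.2, 0.7, 0.8],
                          [0.1, 0.4, 0.6]])

risk = hazard * vulnerability  # simple multiplicative overlay

# Classify into low / medium / high risk for mapping.
classes = np.digitize(risk, bins=[0.2, 0.5])  # 0 = low, 1 = medium, 2 = high
print(risk.round(2))
print(classes)
```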
Procedia PDF Downloads 75
462 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid development of the urbanization process in China, environmental protection is under severe pressure, so analyzing and optimizing the landscape pattern is an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, the landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11% respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2% respectively, all of which indicates that the NP, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI ascended, but the distribution statistics of core area (CORE_AM) decreased. As for farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, the patch density (PD), and LSI descended, yet patch fragmentation and CORE_AM increased. At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline; the three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution: the ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
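For readers unfamiliar with the metrics named above, the sketch below computes two of the landscape-level indices (number of patches, NP, and Shannon's diversity index, SHDI) from a toy land-cover raster; the grid and class codes are invented for illustration and are not the Wuhan data.

```python
# Compute two of the landscape-level metrics mentioned above (number of
# patches, NP, and Shannon's diversity index, SHDI) from a toy land-cover
# raster. The 4x5 grid and class codes are invented for illustration.
import numpy as np
from scipy import ndimage

landcover = np.array([[1, 1, 2, 2, 3],
                      [1, 2, 2, 3, 3],
                      [4, 4, 2, 3, 3],
                      [4, 4, 5, 5, 3]])  # class codes: 1..5

# NP: count connected patches of each class (4-connectivity).
np_total = 0
for cls in np.unique(landcover):
    _, n_patches = ndimage.label(landcover == cls)
    np_total += n_patches

# SHDI = -sum(p_i * ln p_i) over class area proportions.
_, counts = np.unique(landcover, return_counts=True)
p = counts / counts.sum()
shdi = -(p * np.log(p)).sum()

print(f"NP = {np_total}, SHDI = {shdi:.3f}")
```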
Procedia PDF Downloads 165
461 Governance of Social Media Using the Principles of Community Radio
Authors: Ken Zakreski
Abstract:
This paper considers regulating Canadian Facebook Groups of a certain size and type when they reach a threshold of audio-video content. Consider the evolution of the Streaming Act, Parl GC Bill C-11 (44-1), and the regulations that will certainly follow. The Canadian Heritage Minister's office stipulates that "the Broadcasting Act only applies to audio and audiovisual content, not written journalism." Governance: After 10 years, a community radio station for Gabriola Island, BC, approved by the Canadian Radio-television and Telecommunications Commission ("CRTC") but never started, became a Facebook Group, "Community Bulletin Board - Life on Gabriola", referred to as CBBlog. After CBBlog started and began to gather real traction, a member of the Group cloned the membership and ran a competing Facebook group under the banner of "free speech". Here we see an inflection point [a change of cultural stewardship] with two different mathematical results [engagement and membership growth]. Canada's telecommunication history of "portability" and "interoperability" made the Facebook Group CBBlog the better option, over broadcast FM radio, for a community pandemic information-sharing service for Gabriola Island, BC. A culture of ignorance flourishes in social media. Often people do not understand their own experience, or the experience of others, because they do not have the concepts needed for understanding; it is thus important that they are not denied the concepts required for their full understanding. For example, legislators need to know something about gay culture before they can make any decisions about it. Community media policies and CRTC regulations are known, and regulators can use that history to forge forward with regulations for internet platforms of a size and content type that reach a threshold of audio/video content. Mostly volunteer-run media services provide an order of magnitude lower costs than commercial media. Should Facebook Groups be treated as new media? Cathy Edwards, executive director of the Canadian Association of Community Television Users and Stations ("CACTUS"), calls them new media in that the distribution platform is not the issue. What does make community groups community media? Cathy responded, "... it's bylaws, articles of incorporation that state they are community media, they have accessibility, commitments to skills training, any member of the community can be a member, and there is accountability to a board of directors". Eligibility for funding through CACTUS requires these same commitments. It is risky for a community to invest in a platform when ownership has not been litigated. Is a Facebook Group an asset of a not-for-profit society? A memo from law student Jared Hubbard summarizes: "Rights and interests in a Facebook group could, in theory, be transferred as property... This theory is currently unconfirmed by Canadian courts."
Keywords: social media, governance, community media, Canadian radio
Procedia PDF Downloads 70
460 Examining Historically Defined Periods in Autobiographical Memories for Transitional Events
Authors: Khadeeja Munawar, Shamsul Haque
Abstract:
We examined the plausibility of transition theory, which suggests that memories of transitional events, events that give rise to a significant and persistent change in the fabric of daily life, are organized around historically defined autobiographical periods (H-DAPs). 141 Pakistani older adults each retrieved 10 autobiographical memories (AMs) in response to 10 cue words. As the history of Pakistan is dominated by various political and nationwide transitional events, it was expected that the participants would recall memories with H-DAP references. The content analysis revealed that 0.7% of memories had H-DAP references and 0.4% of memories mentioned major transitional events such as war or natural disaster. There was a vivid reminiscence bump between 10 and 20 years of age in the lifespan distribution of AMs. Social-focused AMs made up 67.9% of the memories. Significantly more self-focused memories were reported by individuals who endorsed themselves as conservatives. Only a few H-DAPs were reported, although the history of Pakistan is dominated by numerous political, historical, and nationwide transitional events. Memories within and outside of the bump period were mostly positive. The participants rarely used historically or politically significant events or periods to date the memories they elicited. Intense and nationwide (as well as region-wide) significant historical/political events spanned decades in the lives of the participants of the present study, but these events did not produce H-DAPs. The findings contradict previous studies on H-DAPs and transition theory. The dominance of social-focused AMs in the present study is in line with past studies comparing the memories of collectivist and individualist cultures (i.e., European Americans vs. Asian, African, and Latin-American cultures). Past empirical evidence shows that conservative values and beliefs are adopted as a coping strategy to feel secure in the face of danger, when the future is dominated by uncertainty, and to connect with like-minded others. In the present study, conservative political ideology seems to assist the participants in living a stable life amidst their complex social worlds. The reminiscence bump, as well as the dominance of positive memories within and outside the bump period, is in line with the narrative/identity account, which states that events and experiences during adolescence and early adulthood are assimilated into a person's lifelong narratives; hence these events are used as identity markers and are more easily recalled later in life. Also, according to socioemotional selectivity theory and the positivity effect, the participants evaluated past events more positively as they grew older, and the intensity of negative emotions decreased with time.
Keywords: autobiographical memory, historically defined autobiographical periods, narrative/identity account, Pakistan, reminiscence bump, SMS framework, transition theory
Procedia PDF Downloads 232
459 The Effects of Stokes' Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators
Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari
Abstract:
NIOSH (National Institute for Occupational Safety and Health)-approved N95 respirators are commonly used by workers on construction sites, where a large amount of dust, both electrostatically charged and not, is produced from sawing, grinding, blasting, welding, etc. A significant portion of airborne particles on construction sites can be nanoparticles created alongside coarse particles. The penetration of particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to this study, we found that nanoparticles in the medium size ranges penetrate more frequently than nanoparticles of smaller and larger sizes. For example, penetration percentages of nanoparticles of 11.5–27.4 nm into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of nanoparticles of 36.5–86.6 nm ranged from 7.34 to 16.04%. The possible causes behind this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes behind this unusual behavior of mid-size nanoparticles. We have considered such physical factors as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, the Stokes' drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only the particles that have enough kinetic energy to overcome the energy loss due to the electrostatic forces and the Stokes' drag in the mask can pass through. To understand this process, the following assumptions were made: (1) the effect of Stokes' drag depends on the particles' velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particles, which in turn is proportional to the surface area of the particles; (3) the general dependence on electrostatic charge and thickness means that for stronger electrostatic resistance in the masks and thicker mask fiber layers, the penetration of particles is reduced, which is a sensible conclusion. In sampling situations where one mask was soaked in alcohol, eliminating the electrostatic interaction, the penetration in the mid-range was much larger than for the same mask with electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to the interaction of the particle with its own drag force. If there is no electrostatic force, the penetrating fraction for larger particles grows; but if the electrostatic force is added, the fraction for larger particles goes down, so the diminished penetration for larger particles should be due to increased electrostatic repulsion, possibly because of the increased surface area and therefore larger charge on average. We have also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of penetration on temperature is weak in the range of temperatures in the measurements (37–42°C), since the relevant factor changes only from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force
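The energy-balance argument above can be illustrated with a toy Monte Carlo model: a particle passes the filter only if its kinetic energy at impact exceeds a barrier made of a drag-loss term and an electrostatic term that scales with surface area. All coefficients below are assumed for illustration and are not fitted to the reported measurements; the sketch shows the mechanics of the comparison, not the observed size dependence.

```python
# Toy Monte Carlo of the energy-balance picture described above: a particle
# penetrates only if its kinetic energy at impact exceeds a drag-loss term
# plus an electrostatic term proportional to surface area.
# All coefficients are illustrative assumptions, not fitted values.
import numpy as np

rng = np.random.default_rng(0)
k_B, T = 1.38e-23, 300.0           # Boltzmann constant (J/K), air temperature (K)
rho = 1.0e3                         # particle density, kg/m^3 (assumed)
u_face = 0.1                        # mean face velocity of the air, m/s (assumed)
c_drag = 1e-14                      # drag-loss coefficient, J per m of diameter (assumed)
c_elec = 5e-7                       # electrostatic barrier per unit surface area, J/m^2 (assumed)

def penetration_fraction(d, n=200_000):
    """Fraction of particles of diameter d (m) that clear the energy barrier."""
    m = rho * np.pi * d**3 / 6.0
    sigma = np.sqrt(k_B * T / m)                    # 1-D thermal speed spread (Boltzmann)
    v = u_face + sigma * rng.standard_normal(n)     # speed at impact on the mask
    kinetic = 0.5 * m * v**2
    barrier = c_drag * d + c_elec * np.pi * d**2    # drag + electrostatic terms
    return np.mean(kinetic > barrier)

for d_nm in (15, 50, 150):
    print(f"{d_nm:4d} nm -> penetration fraction ~ {penetration_fraction(d_nm * 1e-9):.3f}")
```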
Procedia PDF Downloads 194
458 Comparative Analysis of the Expansion Rate and Soil Erodibility Factor (K) of Some Gullies in Nnewi and Nnobi, Anambra State Southeastern Nigeria
Authors: Nzereogu Stella Kosi, Igwe Ogbonnaya, Emeh Chukwuebuka Odinaka
Abstract:
A comparative analysis of the expansion rate and soil erodibility of some gullies in Nnewi and Nnobi, both within the Nanka Formation, was carried out. The study involved an integration of field observations, geotechnical analysis, slope stability analysis, multivariate statistical analysis, gully expansion rate analysis, and determination of the soil erodibility factor (K) from the Revised Universal Soil Loss Equation (RUSLE). Fifteen representative gullies were studied extensively, and the results reveal that the geotechnical properties of the soil, topography, vegetation cover, rainfall intensity, and anthropogenic activities in the study area are the major factors propagating and influencing the erodibility of the soils. The specific gravity of the soils ranged from 2.45-2.66 and 2.54-2.78 for Nnewi and Nnobi, respectively. Grain size distribution analysis revealed that the soils are composed of gravel (5.77-17.67%), sand (79.90-91.01%), and fines (2.36-4.05%) for Nnewi and gravel (7.01-13.65%), sand (82.47-88.67%), and fines (3.78-5.02%) for Nnobi. The soils are moderately permeable, with values ranging from 2.92 × 10⁻⁵ to 6.80 × 10⁻⁴ m/s and 2.35 × 10⁻⁶ to 3.84 × 10⁻⁴ m/s for Nnewi and Nnobi, respectively. All have low cohesion values, ranging from 1-5 kPa and 2-5 kPa, and internal friction angles ranging from 29-38° and 30-34° for Nnewi and Nnobi, respectively, which suggests that the soils have low shear strength and are susceptible to shear failure. Furthermore, the compaction test revealed that the soils are loose and easily erodible, with maximum dry density (MDD) and optimum moisture content (OMC) values ranging from 1.82-2.11 g/cm³ and 8.20-17.81% for Nnewi and 1.98-2.13 g/cm³ and 6.00-17.80% for Nnobi, respectively. The plasticity index (PI) of the fines showed that they are non-plastic to low-plasticity soils and highly liquefiable, with values ranging from 0-10% and 0-9% for Nnewi and Nnobi, respectively. Multivariate statistical analyses were used to establish relationships among the determined parameters. Slope stability analysis gave factor of safety (FoS) values in the range of 0.50-0.76 and 0.82-0.95 for the saturated condition and 0.73-0.98 and 0.87-1.04 for the unsaturated condition for Nnewi and Nnobi, respectively, indicating that the slopes are generally unstable to critically stable. The erosion expansion rate analysis for a fifteen-year period (2005-2020) revealed average longitudinal expansion rates of 36.05 m/yr, 10.76 m/yr, and 183 m/yr for the Nnewi, Nnobi, and Nanka type gullies, respectively. The soil erodibility factors (K) are 8.57×10⁻² and 1.62×10⁻⁴ for Nnewi and Nnobi, respectively, indicating that the soils in Nnewi have higher erodibility potential than those of Nnobi. From the study, both the Nnewi and Nnobi areas are highly prone to erosion. However, based on the relatively lower fine content of the soil, relatively lower topography, steeper slope angles, and sparsely vegetated terrain in Nnewi, soil erodibility and gully intensity are more pronounced in Nnewi than in Nnobi.
Keywords: soil erodibility, gully expansion, Nnewi-Nnobi, slope stability, factor of safety
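As a reference for how a RUSLE K factor can be estimated from routine soil data, the sketch below uses the Wischmeier-Smith nomograph approximation (a common textbook formulation based on texture, organic matter, structure, and permeability). It is not necessarily the exact procedure used in this study, and the input values are placeholders rather than Nnewi or Nnobi measurements.

```python
# Estimate the RUSLE soil erodibility factor K with the Wischmeier-Smith
# nomograph approximation. This is a common textbook formulation and is not
# necessarily the exact procedure used in the study; the inputs are placeholders.

def rusle_k(silt_vfs_pct, clay_pct, om_pct, structure_code, permeability_class):
    """
    silt_vfs_pct      : % silt + very fine sand
    clay_pct          : % clay
    om_pct            : % organic matter
    structure_code    : soil structure class (1-4)
    permeability_class: permeability class (1 = rapid ... 6 = very slow)
    Returns K in SI units (t.ha.h / (ha.MJ.mm)).
    """
    m = silt_vfs_pct * (100.0 - clay_pct)
    k_us = (2.1e-4 * m**1.14 * (12.0 - om_pct)
            + 3.25 * (structure_code - 2)
            + 2.5 * (permeability_class - 3)) / 100.0
    return 0.1317 * k_us  # convert US customary K to SI units

# Placeholder inputs for a sandy, low-organic-matter soil (not study data).
k = rusle_k(silt_vfs_pct=20, clay_pct=4, om_pct=1.0, structure_code=2, permeability_class=2)
print(f"K ~ {k:.3f} t.ha.h/(ha.MJ.mm)")
```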
Procedia PDF Downloads 130
457 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa
Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation, and mode of deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams, and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms, and hydrodynamic energy conditions, and to discriminate between different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical, and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low-energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to the mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham, and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean values show the dominance of fine sand-size particles, which points to relatively low-energy conditions of deposition. In addition, the LDF results point to low-energy conditions during the deposition of the Prince Albert, Collingham, and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high-energy conditions. The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension and rolling/saltation, and by graded suspension. Furthermore, the plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation was the major process of transportation, although suspension and traction also played some role during deposition; the sediments were mainly in saltation and suspension before being deposited.
Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa
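The graphic parameters referred to above (mean, sorting, skewness, kurtosis) are conventionally computed with the Folk and Ward (1957) formulas from phi percentiles of the cumulative grain-size curve; a minimal sketch is given below with invented sieve data, not the Ecca Group samples.

```python
# Folk and Ward (1957) graphic grain-size parameters computed from a cumulative
# frequency curve in phi units. The sieve data below are invented placeholders.
import numpy as np

# Phi values of sieves and cumulative weight percent finer (placeholder data).
phi = np.array([0.0, 1.0, 2.0, 2.5, 3.0, 3.5, 4.0])
cum_pct = np.array([1.0, 5.0, 20.0, 45.0, 75.0, 92.0, 99.0])

def percentile_phi(p):
    """Phi value at cumulative percentage p, by linear interpolation."""
    return np.interp(p, cum_pct, phi)

p5, p16, p25, p50, p75, p84, p95 = (percentile_phi(p) for p in (5, 16, 25, 50, 75, 84, 95))

mean_phi = (p16 + p50 + p84) / 3                                  # graphic mean
sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6                      # inclusive graphic std dev
skewness = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))            # inclusive graphic skewness
kurtosis = (p95 - p5) / (2.44 * (p75 - p25))                      # graphic kurtosis

print(f"Mz={mean_phi:.2f} phi, sorting={sorting:.2f}, Sk={skewness:.2f}, KG={kurtosis:.2f}")
```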
Procedia PDF Downloads 481
456 Dual-Layer Microporous Layer of Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Various RH Conditions
Authors: Grigoria Athanasaki, Veerarajan Vimala, A. M. Kannan, Louis Cindrella
Abstract:
Energy usage has increased throughout the years, leading to severe environmental impacts. Since the majority of energy is currently produced from fossil fuels, there is a global need for clean energy solutions. Proton Exchange Membrane Fuel Cells (PEMFCs) offer a very promising solution for transportation applications because of their solid configuration and low-temperature operation, which allows them to start quickly. One of the main components of PEMFCs is the Gas Diffusion Layer (GDL), which manages water and gas transport and directly influences the fuel cell performance. In this work, a novel dual-layer GDL with gradient porosity was prepared, using polyethylene glycol (PEG) as a pore former, to improve the gas diffusion and water management in the system. The microporous layer (MPL) of the fabricated GDL consists of PUREBLACK carbon powder, sodium dodecyl sulfate as a surfactant, and 34 wt.% PTFE; the gradient porosity was created by applying one layer using 30 wt.% PEG on the carbon substrate, followed by a second layer without any pore former. The total carbon loading of the microporous layer is ~3 mg.cm-2. For the assembly of the catalyst layer, Nafion membrane (Ion Power, Nafion Membrane NR211) and Pt/C electrocatalyst (46.1 wt.%) were used. The catalyst ink was deposited on the membrane via a microspraying technique. The Pt loading is ~0.4 mg.cm-2, and the active area is 5 cm2. The sample was characterized ex-situ via wetting angle measurement, Scanning Electron Microscopy (SEM), and Pore Size Distribution (PSD) to evaluate its characteristics. Furthermore, for the performance evaluation, in-situ characterization via fuel cell testing using H2/O2 and H2/air as reactants took place under 50, 60, 80, and 100% relative humidity (RH). The results were compared to a single-layer GDL, fabricated with the same carbon powder and loading as the dual-layer GDL, and to a commercially available GDL with MPL (AvCarb2120). The findings reveal highly hydrophobic properties of the microporous layer for both PUREBLACK-based samples, while the commercial GDL demonstrates hydrophilic behavior. The dual-layer GDL shows high and stable fuel cell performance under all RH conditions, whereas the single-layer GDL manifests a drop in performance at high RH in both oxygen and air, caused by catalyst flooding. The commercial GDL shows very low and unstable performance, possibly because of its hydrophilic character and thinner microporous layer. In conclusion, the dual-layer GDL with PEG appears to have improved gas diffusion and water management in the fuel cell system. Because its porosity increases from the catalyst layer to the carbon substrate, it allows easier access of the reactant gases from the flow channels to the catalyst layer and more efficient water removal from the catalyst layer, leading to higher performance and stability.
Keywords: gas diffusion layer, microporous layer, proton exchange membrane fuel cells, relative humidity
Procedia PDF Downloads 124
455 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from a depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a specific vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products. We name these products product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. It is assumed that the vehicle has two compartments, named compartment 1 and compartment 2, where compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure; specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over the strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is smaller than or equal to eight.
Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem
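To illustrate the kind of recursion involved, the sketch below gives a much-simplified memoized dynamic program for collecting two products with a two-compartment vehicle from customers visited in a fixed order. It is not the authors' exact formulation: here the only recourse action is emptying both compartments at the depot before serving a customer, cross-loading costs a flat per-item penalty, and all problem data are invented placeholders.

```python
# A simplified, illustrative dynamic program for collecting two products with a
# two-compartment vehicle from customers visited in a fixed order. This is a
# sketch of the kind of recursion described above, not the authors' exact model.
from functools import lru_cache

# --- Toy problem data (all invented placeholders) ------------------------------
N = 4                     # number of customers, visited in the order 1..N
CAP = (5, 5)              # capacities of compartment 1 and compartment 2
CROSS_COST = 1.0          # extra labor cost per item loaded in the "wrong" compartment
# demand distribution per customer: list of ((product_type, quantity), probability)
DEMAND = {i: [((1, 2), 0.4), ((2, 3), 0.6)] for i in range(1, N + 1)}
# symmetric travel costs; index 0 is the depot
DIST = [[0, 4, 6, 7, 9],
        [4, 0, 3, 5, 8],
        [6, 3, 0, 2, 6],
        [7, 5, 2, 0, 3],
        [9, 8, 6, 3, 0]]

def serve_cost(free1, free2, ptype, qty):
    """Load qty items of ptype, preferring its own compartment; return
    (labor cost, new free1, new free2), or None if the load does not fit."""
    if free1 + free2 < qty:
        return None
    own, other = (free1, free2) if ptype == 1 else (free2, free1)
    in_own = min(own, qty)
    in_other = qty - in_own
    own, other = own - in_own, other - in_other
    new1, new2 = (own, other) if ptype == 1 else (other, own)
    return CROSS_COST * in_other, new1, new2

@lru_cache(maxsize=None)
def V(i, free1, free2):
    """Expected cost-to-go upon arrival at customer i with the given free space
    (travel into customer i has already been paid)."""
    if i > N:
        return 0.0
    nxt = i + 1 if i < N else 0                   # next stop: next customer or depot
    exp_cost = 0.0
    for (ptype, qty), prob in DEMAND[i]:          # demand is revealed on arrival
        options = []
        direct = serve_cost(free1, free2, ptype, qty)
        if direct is not None:                    # serve with the current free space
            labor, f1, f2 = direct
            options.append(labor + DIST[i][nxt] + V(i + 1, f1, f2))
        # recourse: drive to the depot, empty both compartments, return and serve
        labor, f1, f2 = serve_cost(CAP[0], CAP[1], ptype, qty)
        options.append(2 * DIST[i][0] + labor + DIST[i][nxt] + V(i + 1, f1, f2))
        exp_cost += prob * min(options)
    return exp_cost

total = DIST[0][1] + V(1, CAP[0], CAP[1])         # depot -> customer 1, then recurse
print(f"Expected optimal cost (toy instance): {total:.2f}")
```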
Procedia PDF Downloads 257
454 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque
Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya
Abstract:
The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs with child-parent connections and no auxiliary links. The application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages using minimization of the potential energy variance: it flattens the potential energy distribution over the workspace and fuses this with numerical optimization. The results show a considerable reduction in the actuator torque requirement with a practical spring design and arrangement. Reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and the total weight of the arm. This is particularly important for humanoid robots, where the parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient, thereby reducing the energy consumption of the mechanism. Lower-end actuators are also lower in cost and facilitate the development of low-cost devices. Although the method provides only approximate balancing, it is versatile, flexible in choosing appropriate control variables that are relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring, and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resulting arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot.
Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot
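A minimal sketch of the underlying idea is given below for a single gravity-loaded link with one non-zero-free-length spring attached to its parent: the spring stiffness, free length, and attachment radii are chosen by numerically minimizing the variance of the total potential energy over the joint workspace. The link parameters, workspace range, and bounds are assumed values, and the spring geometry is a simplification of the multi-link arm discussed above.

```python
# Minimal sketch of approximate spring balancing by minimizing the variance of
# the total potential energy over the joint workspace, for a single gravity-
# loaded link with one non-zero-free-length spring attached to its parent.
# Link parameters, spring bounds, and workspace range are assumed values.
import numpy as np
from scipy.optimize import minimize

m, g, lc = 2.0, 9.81, 0.15                        # link mass (kg), gravity, COM offset (m)
theta = np.linspace(-np.pi / 2, np.pi / 2, 91)    # workspace of the joint (rad)

def total_potential(params):
    """Gravity + spring potential evaluated over the workspace grid."""
    k, L0, a, b = params                          # stiffness, free length, attachment radii
    # attachment points: (0, a) on the parent, (b, 0) on the link (link frame),
    # with the link rotated by theta about the joint
    bx, by = b * np.cos(theta), b * np.sin(theta)
    L = np.hypot(bx - 0.0, by - a)                # spring length vs. joint angle
    V_spring = 0.5 * k * (L - L0) ** 2
    V_gravity = m * g * lc * np.sin(theta)
    return V_gravity + V_spring

def energy_variance(params):
    return np.var(total_potential(params))

x0 = [200.0, 0.10, 0.10, 0.10]                    # initial guess: k, L0, a, b
bounds = [(10, 2000), (0.02, 0.3), (0.02, 0.3), (0.02, 0.3)]
res = minimize(energy_variance, x0, bounds=bounds, method="L-BFGS-B")
k, L0, a, b = res.x
print(f"k={k:.1f} N/m, L0={L0*100:.1f} cm, a={a*100:.1f} cm, b={b*100:.1f} cm, "
      f"residual energy variance={res.fun:.4f}")
```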
Procedia PDF Downloads 244
453 Use of Cassava Waste and Its Energy Potential
Authors: I. Inuaeyen, L. Phil, O. Eni
Abstract:
Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels and their resources. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for bio-refinery processes. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across the country; as a result, there is an abundant supply of cassava waste in Nigeria. The aim of this study is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural, using a combination of biochemical, thermochemical, and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis, and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software, and the process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted, including studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key literature with useful parameters for developing the different cassava waste conversion processes has been identified. In future work, detailed modelling of the different process scenarios will be carried out and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario using process economics, life cycle analysis, energy efficiency, and social impact as performance indexes.
Keywords: bio-refinery, cassava waste, energy, process modelling
Procedia PDF Downloads 374
452 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms
Authors: Dimitrios Kafetzopoulos
Abstract:
Nowadays, companies are increasingly concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that boosts changes to companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and then to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability). The constructs of radical and incremental operational changes, each treated as one variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality, and outliers were checked. Moreover, the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low, albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firm efficiency. From the above, it is apparent that embodying the concept of changes in a firm's product and process operational practices has direct and positive consequences for what the firm achieves from an efficiency and sustainability perspective.
Keywords: incremental operational changes, radical operational changes, efficiency, sustainability
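One of the routine reliability checks mentioned above for multi-item Likert constructs is scale reliability, commonly reported as Cronbach's alpha; a generic sketch of the calculation is shown below with invented item responses, not the study's questionnaire data.

```python
# Cronbach's alpha for a multi-item Likert construct:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score).
# Generic sketch only; the responses below are invented placeholders.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.integers(3, 6, size=(380, 1))                              # shared component
responses = np.clip(latent + rng.integers(-1, 2, size=(380, 4)), 1, 7)  # 4 items, 1-7 scale
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```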
Procedia PDF Downloads 136
451 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria
Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale
Abstract:
This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North Central Nigeria. Data used for the study were obtained from primary sources using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis, and a Tobit regression model were used to analyze the data. The DEA result on the classification of the farmers into efficient and inefficient showed that 17.67% of the sampled tuber crop farmers in the study area were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access, and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmers acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while that of their counterparts without cooperative society membership was 0.21. The results also showed that allocative efficiency was 0.43 for farmers with higher formal education and decreased to 0.16 for farmers with non-formal education. The efficiency level in the allocation of resources increased with more contact with extension services, as the allocative efficiency index increased from 0.16 to 0.31 with the frequency of extension contact increasing from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access, and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to the efficiency, but this reduces to an average of 15% as the farmers grow older. It is therefore recommended that enhanced research, extension delivery, and farm advisory services should be put in place so that farmers who did not attain the optimum frontier level can learn how to attain the remaining 74.39% of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way towards increasing the efficiency level of the farmers in the study area.
Keywords: allocative efficiency, DEA, Tobit regression, tuber crop
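As a reference for how DEA scores of this kind are obtained, the sketch below solves the input-oriented CCR envelopment model for each decision-making unit as a linear program. This is a generic textbook formulation with invented placeholder data, not the study's exact specification; note that the CCR model yields a radial (technical) efficiency score, and allocative efficiency would additionally require input price data.

```python
# Input-oriented CCR DEA efficiency score for one decision-making unit (DMU),
# solved as a linear program: minimize theta subject to
#   sum_j lambda_j * x_j <= theta * x_0   (inputs)
#   sum_j lambda_j * y_j >= y_0           (outputs),  lambda_j >= 0.
# Generic textbook formulation with placeholder data, not the study's model.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0],     # inputs  (rows = inputs, cols = farms)
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[1.0, 2.0, 1.5, 2.5]])    # outputs (rows = outputs, cols = farms)

def ccr_efficiency(j0):
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                         # decision vars: [theta, lambda_1..n]
    A_in = np.hstack([-X[:, [j0]], X])                  # sum lambda*x - theta*x0 <= 0
    b_in = np.zeros(X.shape[0])
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -sum lambda*y <= -y0
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                                      # theta* in (0, 1]

for j in range(X.shape[1]):
    print(f"Farm {j}: radial efficiency score = {ccr_efficiency(j):.3f}")
```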
Procedia PDF Downloads 289
450 Evaluation of Role of Surgery in Management of Pediatric Germ Cell Tumors According to Risk Adapted Therapy Protocols
Authors: Ahmed Abdallatif
Abstract:
Background: Patients with malignant germ cell tumors show an age distribution with two peaks, the first during infancy and the second after the onset of puberty. Gonadal germ cell tumors are the most common malignant ovarian tumors in females aged below twenty years. Sacrococcygeal and retroperitoneal abdominal tumors usually present at a large size before the onset of symptoms. Methods: Patients with pediatric germ cell tumors presenting to the Children's Cancer Hospital Egypt and the National Cancer Institute Egypt from January 2008 to June 2011 were included. Patients underwent stratification into low-, intermediate-, and high-risk groups according to the Children's Oncology Group classification. Objectives: Assessment of the clinicopathologic features of all cases of pediatric germ cell tumors and classification of malignant cases according to their stage and primary site into low-, intermediate-, and high-risk patients; evaluation of surgical management in each group of patients, focusing on the surgical approach, the extent of surgical resection according to each site, the ability to achieve complete surgical resection, and perioperative complications; and finally, determination of the three-year overall and disease-free survival in the different groups and their relation to different prognostic factors, including the extent of surgical resection. Results: Out of 131 cases surgically explored, only 26 cases had re-exploration, with 8 cases explored for residual disease, 9 cases for remote recurrence or metastatic disease, and the other 9 cases for other complications. Patients with low risk were kept under follow-up after surgery; of the low-risk group (48 patients), only 8 patients (16.5%) shifted to intermediate risk. Twenty patients (14.6%) diagnosed as intermediate risk received 3 cycles of compressed chemotherapy (cisplatin, etoposide, and bleomycin), and all high-risk group patients, 69 patients (50.4%), received chemotherapy. Stage of disease was strongly and significantly related to overall survival, with poorer survival in late stages (stage IV) as compared to earlier stages. Conclusion: The overall survival rate at 3 years was 76.7% ± 5.4, and the 3-year EFS was 77.8% ± 4.0; however, the 3-year DFS was much better (89.8% ± 3.4) in the whole study group, with ovarian tumors having a significantly higher overall survival (90% ± 5.1). Event-free survival analysis showed that males were 3 times more likely to have adverse events than females. Patients who underwent incomplete resection were 4 times more likely than patients with complete resection to have adverse events. Disease-free survival analysis showed that patients who underwent incomplete surgery were 18.8 times more liable to recurrence compared to those who underwent complete surgery, and patients who were exposed to re-excision were 21 times more prone to recurrence compared to other patients.
Keywords: extragonadal, germ cell tumors, gonadal, pediatric
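Survival figures like the 3-year OS/EFS/DFS quoted above are usually read off a Kaplan-Meier curve; a minimal product-limit estimator is sketched below with invented follow-up times, purely to illustrate the calculation rather than reproduce any patient data.

```python
# Minimal Kaplan-Meier (product-limit) estimator, of the kind used to read off
# 3-year overall / event-free survival. Follow-up times (months) and event
# flags below are invented placeholders, not patient data.
import numpy as np

time = np.array([6, 12, 14, 20, 25, 30, 36, 40, 44, 50])   # months of follow-up
event = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])            # 1 = event, 0 = censored

def kaplan_meier(time, event):
    """Return event times and the survival probability just after each event."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    surv, out_t, out_s = 1.0, [], []
    for t, e in zip(time, event):
        if e:                          # an event occurred at time t
            surv *= 1.0 - 1.0 / at_risk
            out_t.append(t)
            out_s.append(surv)
        at_risk -= 1                   # subject leaves the risk set (event or censoring)
    return np.array(out_t), np.array(out_s)

t, s = kaplan_meier(time, event)
s36 = s[t <= 36][-1] if np.any(t <= 36) else 1.0
print(f"Estimated 3-year survival: {s36:.2f}")
```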
Procedia PDF Downloads 218
449 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as a producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', and must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple, and Google to transform readers, spectators, consumers, or users into collaborators and co-producers through platforms such as Facebook, YouTube, and Amazon's CreateSpace Kindle Direct Publishing print-on-demand, e-book, and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to get insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered, are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, iPhone and Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142
448 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins
Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava
Abstract:
The elaboration of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs), in particular as non-viral gene delivery systems, due to their flexible properties, facile synthesis, robustness, non-oncogenicity, and proven gene delivery efficiency. However, toxicity is still an obstacle to their application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-size particles, i.e. nano-containers (NCs), loaded with different pharmaceuticals and biologicals, is still relevant. In this regard, a variety of NC-based drug delivery systems have been developed. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after the fulfillment of their function, are highly suitable for designing pharmaceutical NCs. Among them, some of the most promising are NCs made of biodegradable cationic PPs (CPPs). For preparing new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. According to the first approach, NCs were prepared from the CPPs 8R3 (composed of R, sebacic acid, and 1,3-propanediol) and 8R6 (composed of R, sebacic acid, and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size, with a zeta potential within +30 to +35 mV at a concentration of 6 mg/mL. According to the second approach, CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid, and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size, with a zeta potential within +20 to +28 mV depending on the 8R6/8L6 ratio. Stability studies of the fabricated NCs showed no substantial change in particle size and distribution and no formation of large particles after three months of storage. An in vitro biocompatibility study of the obtained NPs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine), and Hepa 1-6 (murine), showed that both types of cationic NCs are biocompatible. The obtained data allow us to conclude that the obtained CNCs are promising for application as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine - versatile biomaterials for various biomedical applications'.
Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles
Procedia PDF Downloads 155447 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly because of their bias towards the majority class. Apart from between-class imbalance, within-class imbalance – where classes are composed of different numbers of sub-clusters, each containing a different number of examples – also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class and within-class imbalance simultaneously for binary classification problems. Removing both types of imbalance simultaneously eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled within each sub-cluster is determined by the complexity of that sub-cluster. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase classifier accuracy. In this study, a neural network is used as the classifier, since it is one such classifier where the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction and financial distress that typically involve imbalanced data sets.Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
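To make the clustering-then-oversampling idea above concrete, a minimal sketch is given below. It is not the authors' implementation: it substitutes a BIC-selected Gaussian mixture model for the model-based clustering step, allocates synthetic points to sub-clusters in inverse proportion to their size as a simple stand-in for the complexity-based allocation, and omits the Lowner-John ellipsoid adjustment; all function names and parameters are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): reduce within-class imbalance by
# oversampling minority sub-clusters found with model-based clustering.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_based_oversample(X_min, n_target, max_clusters=5, seed=0):
    """Grow the minority class X_min (n_samples x n_features) to n_target rows."""
    rng = np.random.default_rng(seed)
    n_new = n_target - len(X_min)
    if n_new <= 0:
        return X_min

    # Stand-in for the model-based clustering step: pick the number of
    # Gaussian components by BIC.
    models = [GaussianMixture(n_components=k, random_state=seed).fit(X_min)
              for k in range(1, max_clusters + 1)]
    gmm = min(models, key=lambda m: m.bic(X_min))
    labels = gmm.predict(X_min)

    # Allocate more synthetic examples to smaller sub-clusters (a simple
    # proxy for the complexity-based allocation described in the abstract).
    sizes = np.bincount(labels, minlength=gmm.n_components)
    weights = 1.0 / np.maximum(sizes, 1)
    alloc = rng.multinomial(n_new, weights / weights.sum())

    synthetic = []
    for c, n_c in enumerate(alloc):
        members = X_min[labels == c]
        if n_c == 0 or len(members) == 0:
            continue
        # Jitter randomly chosen members with the sub-cluster covariance.
        idx = rng.integers(0, len(members), size=n_c)
        noise = rng.multivariate_normal(np.zeros(X_min.shape[1]),
                                        gmm.covariances_[c], size=n_c)
        synthetic.append(members[idx] + 0.1 * noise)
    return np.vstack([X_min] + synthetic)
```

In practice, the oversampled minority set returned by such a routine would be concatenated with the majority class before training the neural network classifier.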
Procedia PDF Downloads 418446 Development a Home-Hotel-Hospital-School Community-Based Palliative Care Model for Patients with Cancer in Suratthani, Thailand
Authors: Patcharaporn Sakulpong, Wiriya Phokhwang
Abstract:
Background: Banpunrug (Love Sharing House), established in 2013, provides community-based palliative care for patients with cancer from 7 provinces in southern Thailand. These patients come to receive outpatient chemotherapy and radiotherapy at Suratthani Cancer Hospital. They are poor and uneducated and need accommodation during their 30-45 day course of therapy. Methods: Community participatory action research (PAR) was employed to establish a model of palliative care for patients with cancer. The participants included health care providers, the community, and patients and families. The PAR process included problem identification and needs assessment, community and team establishment, field survey, organization founding, planning the model of care, action and inquiry (PDCA), outcome evaluation, and model distribution. Results: The model of care at Banpunrug involves the concepts of the HHHS model: Banpunrug is a Home for patients; patients live in a house as comfortable as a Hotel; they are given care and living facilities similar to those in a Hospital; and the house is a School where patients learn how to take care of themselves, how to live well with cancer, and, most importantly, how to prepare themselves for a good death. The house is also a school of humanized care for health care providers. Banpunrug's philosophy of care is based on friendship therapy, social and spiritual support, community partnership, patient-family centeredness, a Live & Love sharing house, and holistic and humanized care. With this philosophy, the house is managed as a home for the patients and everyone involved; everything is free of charge for all eligible patients and their family members; all facilities and living expenses are donated by benevolent people, friends, and the community. Everyone, including patients and family, has a sense of belonging to the house, and there is no hierarchy between health care providers and patients. The house is situated in a temple and a community and is supported by many local nonprofit organizations and healthcare facilities, such as a sub-district health promotion hospital and Suratthani Cancer Hospital. Village health volunteers and multi-professional health care volunteers have contributed not only appropriate care but also knowledge and experience to develop a distinctive HHHS community-based palliative care model for patients with cancer. Since its opening, the house has been a home for more than 400 patients and 300 family members. It is also a model for many national and international healthcare organizations and providers, who come to visit and learn about palliative care in and by the community. Conclusions: The success of this palliative care model comes from community involvement, multi-professional volunteers and distributions, and the concepts of the HHHS model. Banpunrug promotes consistent care across the cancer trajectory, independent of prognosis, in order to strengthen the full integration of palliative care.Keywords: community-based palliative care, model, participatory action research, patients with cancer
Procedia PDF Downloads 268445 The Church of San Paolo in Ferrara, Restoration and Accessibility
Authors: Benedetta Caglioti
Abstract:
The ecclesiastical complex of San Paolo in Ferrara is a monument of great historical, religious and architectural importance. Its long and articulated history is already evident from a reading of its planimetric and altimetric configuration, apparently unitary but in reality marked by modifications and repeated additions, even of high quality. In terms of protection, restoration and enhancement, this calls for a commitment of due respect for how the ancient building was built and enriched over its centuries of life. Hence a rigorous methodological approach, while remaining aware that every monument, in order to live and receive the indispensable maintenance, must always be enjoyed and visited, and must therefore allow, in the right measure and compatibly with its nature, improvements and functional, distributive and technological adjustments, including those related to the safety of people and things. The methodological approach substantiates the different elements of the project (such as distributive functionality, safety, structural solidity, environmental comfort, the character of the site, building and urban planning regulations, financial and material resources, and the organization of the construction site) through the long-established guiding principles of restoration: 'minimum intervention', the 'recognisability' or 'distinguishability' of old and new, physico-chemical and figurative 'compatibility', 'durability' and the, at least potential, 'reversibility' of what is done, leading to the definition of appropriate 'critical choices'. The project tackles, together with the strictly functional issues, those directly concerning conservation and restoration of a static, structural and material-technological nature, with special attention to the precious architectural surfaces. In order to ensure the best architectural quality through conscious enhancement, the project involves a redistribution of the interior and service spaces, an accurate lighting system inside and outside the church, and a reorganization of the adjacent urban space. The reorganization of the interior is designed with particular attention to accessibility for people with disabilities. To help the community regain possession and use of the church's space already during the construction phase, the project proposal envisages a permeability and flexibility in the management of the works that allow the rediscovered monument to gradually become more and more familiar to the citizens. Once the interventions have been completed, it is expected that the Church of San Paolo, second in importance only to the Cathedral, from which it is only a few steps away, will be inserted into an existing circuit of use of the city which, over the years, has brought together the different aspects of culture, the environment and tourism to create greater awareness of what Ferrara can offer in cultural terms.Keywords: conservation, accessibility, regeneration, urban space
Procedia PDF Downloads 108444 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increasing paddy production, the stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore the levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, which are the country's major rice milling hubs. Concentration indices reveal that the rice milling industry in Polonnaruwa operates as a weak oligopsony and is highly competitive in Hambanthota. According to the actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital and difficulty in finding an output market are the major entry barriers to the industry. The major problems faced by all rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, which is the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers, whereas in Hambanthota the major channel is millers purchasing directly from paddy farmers. Millers in both districts have their major rice selling markets in Colombo and its suburbs. Huge variation can be observed in the amount of pledge loans (for paddy storage). There is a strong relationship among storage capacity, credit affordability and the scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%. Analysis of market margins using a series of secondary data shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts; in Hambanthota a greater share goes to the farmer. Only four mills have obtained Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium-scale millers in the distribution of paddy stored by the PMB during the off season. The industry needs a proper rice grading system, and it is recommended that a ceiling price based on rice graded according to the standards be introduced. Both husk and rice bran are underutilized; encouraging investment in establishing a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.Keywords: conduct, performance, structure (SCP), rice millers
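The abstract does not state which concentration indices were computed, so the sketch below uses two common ones, the four-firm concentration ratio (CR4) and the Herfindahl-Hirschman Index (HHI), applied to hypothetical per-mill paddy purchase volumes purely for illustration.

```python
# Illustrative only: CR4 and HHI from hypothetical mill-level purchase volumes.
def concentration_indices(volumes):
    """Return (CR4, HHI) for a list of per-mill paddy purchase volumes."""
    total = sum(volumes)
    shares = sorted((v / total for v in volumes), reverse=True)
    cr4 = sum(shares[:4])               # combined share of the 4 largest mills
    hhi = sum(s * s for s in shares)    # Herfindahl-Hirschman Index, 0-1 scale
    return cr4, hhi

# Hypothetical example: 10 mills in one district
volumes = [120, 95, 80, 60, 40, 35, 30, 25, 20, 15]
cr4, hhi = concentration_indices(volumes)
print(f"CR4 = {cr4:.2f}, HHI = {hhi:.3f}")  # low values indicate a competitive market
```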
Procedia PDF Downloads 328443 Ex-vivo Bio-distribution Studies of a Potential Lung Perfusion Agent
Authors: Shabnam Sarwar, Franck Lacoeuille, Nadia Withofs, Roland Hustinx
Abstract:
After the development of a potential surrogate of MAA and its successful application for the diagnosis of pulmonary embolism in artificially embolized rat lungs, this microparticulate system was radiolabelled with gallium-68 to synthesize 68Ga-SBMP with a radiochemical purity >99%. As a prerequisite step for clinical trials, 68Ga-labelled starch-based microparticles (SBMP) were analysed for their in vivo behavior in small animals. The purpose of the presented work includes ex vivo biodistribution studies of 68Ga-SBMP in order to assess the activity uptake in target organs with respect to time, the excretion pathways of the radiopharmaceutical, %ID/g in major organs, T/NT ratios, and the in vivo stability of the radiotracer and, subsequently, of the microparticles in the target organs. Radiolabelling of the starch-based microparticles was performed by incubating them with the 68Ga generator eluate (430±26 MBq) at room temperature and pressure without using any harsh reaction conditions. For the ex vivo biodistribution studies, healthy White Wistar rats weighing 345-460 g were injected intravenously with 20±8 MBq of 68Ga-SBMP, containing about 200,000-600,000 SBMP particles in a volume of 700 µL. The rats were euthanized at predefined time intervals (5 min, 30 min, 60 min and 120 min), and their organs were cut, washed, placed in pre-weighed tubes and measured for radioactivity counts in an automatic gamma counter. The 68Ga-SBMP achieved >99% RCP after just 10-20 min of incubation through a simple and robust procedure. The biodistribution of 68Ga-SBMP showed that just 5 min post injection the major uptake was observed in the lungs, followed by blood, heart, liver, kidneys, bladder, urine, spleen, stomach, small intestine, colon, skin and skeleton, and thymus, with the smallest activity found in the brain. Radioactivity counts in the lungs stayed stable, with a gradual decrease over time; 2 h post injection almost half of the activity was still seen in the lungs, which is sufficient time to perform PET/CT lung scanning in humans, while the activity in the liver, spleen, gut and urinary system decreased with time. The results showed that the urinary system, rather than the hepatobiliary route, is the excretion pathway. The high T/NT ratios suggest well-defined images for PET/CT lung perfusion studies; hence further pre-clinical studies and then clinical trials should be planned in order to utilize this potential lung perfusion agent.Keywords: starch based microparticles, gallium-68, biodistribution, target organs, excretion pathways
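As an illustration of how the reported quantities are typically derived, the short sketch below computes %ID/g and a target-to-non-target (T/NT) ratio from gamma-counter readings. The organ counts, masses and injected-dose counts are invented for the example and are not the study's data; decay correction and background subtraction are assumed to have been done beforehand.

```python
# Sketch with invented numbers: deriving %ID/g and T/NT ratios from
# gamma-counter data (not the study's measurements).
def percent_id_per_gram(organ_counts, organ_mass_g, injected_counts):
    """Percent of injected dose per gram of tissue (decay-corrected counts assumed)."""
    return 100.0 * organ_counts / (injected_counts * organ_mass_g)

injected_counts = 5.0e6           # counts equivalent to the injected dose (hypothetical)
organs = {                        # organ: (counts, mass in g), hypothetical values
    "lungs":   (2.4e6, 1.6),
    "liver":   (4.0e5, 9.5),
    "kidneys": (2.5e5, 2.1),
}
uptake = {name: percent_id_per_gram(c, m, injected_counts)
          for name, (c, m) in organs.items()}
t_nt_liver = uptake["lungs"] / uptake["liver"]   # target-to-non-target ratio
for name, value in uptake.items():
    print(f"{name}: {value:.1f} %ID/g")
print(f"lung/liver T/NT = {t_nt_liver:.1f}")
```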
Procedia PDF Downloads 173442 Conserving Naubad Karez Cultural Landscape – a Multi-Criteria Approach to Urban Planning
Authors: Valliyil Govindankutty
Abstract:
Human civilizations across the globe stand testimony to water being one of the major points of interaction with nature. In drier areas especially, these interactions revolve around water: harnessing, transporting, using and managing it. Many ingenious ideas for harnessing, transporting, storing and distributing water were born, nurtured and developed in the drier parts of the world, and many methods of water extraction, collection and management can be found throughout the world, some of them associated with the efficient, sustained use of surface water, groundwater and rainwater. Karez is one such ingenious method of collecting, transporting, storing and distributing groundwater. Most of the karez systems in India were developed during the reign of Muslim dynasties whose ruling classes descended from Persia or had influential connections there and invited expert engineers from the region. Karez have strongly influenced village socio-economic organisation owing to the multitude of uses to which they were put. These are masterpieces of engineering built to collect groundwater and direct it, through a subsurface gallery with a gradual slope, to surface canals that provide water to settlements and agricultural fields. This ingenious technology, the karez, was the result of the need to harness groundwater in arid areas like Bidar. The study views this traditional technology in a historical perspective linked to the sustainable utilization and management of groundwater and, above all, of the immediate environment. The karez system is one of the best available demonstrations of human ingenuity and adaptability to situations and locations of water scarcity. Bidar, capital of the erstwhile Bahmani sultanate with a history of more than 700 years, is one of the heritage cities of the present Karnataka State. The unique water systems of Bidar, along with other historic entities, have been listed on the World Heritage Watch List by the World Monument Fund. The historical and cultural landscape of Bidar is very closely associated with the natural resources of the region, the karez systems being one of the best examples. The karez systems were the lifeline of Bidar's historical period, providing potable water and fulfilling domestic and irrigation needs both within and outside the fort enclosures. These systems are still functional but are under great pressure and threat from rapid and unplanned urbanisation. Changes in land use and the fragmentation of land are already paving the way for irreversible modification of the karez cultural and geographic landscape. The paper discusses the significance of the character-defining elements of the Naubad Karez landscape, highlights the importance of conserving cultural heritage and presents a geographical approach to its revival.Keywords: Karez, groundwater, traditional water harvesting, cultural heritage landscape, urban planning
Procedia PDF Downloads 494441 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures such as electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can therefore create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, which confirmed the validity of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and the magnetic field gradient increase with particle size in eddy current separation. Building on this, we further found that increasing the curvature of the magnetic field lines within particles can also increase the eddy current force, providing an optimization route for improving the separation efficiency of fine particles. By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can then be fixed accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.Keywords: eddy current separation, particle size, numerical simulation, metal recovery
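The validation step mentioned above, correlating the simulated eddy current force with the experimentally measured repulsion distance, can be sketched as follows. The numbers are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch (hypothetical data): Pearson correlation between simulated
# eddy current force and experimentally measured repulsion distance, the
# check used to validate the simulation model.
import numpy as np
from scipy import stats

force_sim = np.array([0.8, 1.4, 2.1, 2.9, 3.6, 4.2])          # simulated force (assumed units)
repulsion_exp = np.array([4.5, 7.8, 11.2, 15.6, 19.1, 22.4])   # measured distance, cm (assumed)

r, p_value = stats.pearsonr(force_sim, repulsion_exp)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
# A high r with a small p supports using the simulated force as a proxy
# for the experimentally observed separation behaviour.
```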
Procedia PDF Downloads 89440 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is an endemic tree of southwest Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. This tree can grow in extremely arid regions of Morocco, where annual rainfall ranges between 100 and 300 mm and where no other tree species can live. It has been designated a UNESCO Biosphere Reserve since 1998. The argan tree is of great importance for human and animal feeding of the rural population as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate the temporal variation in root-zone moisture dynamics in response to variation in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates between resistive woody roots, dry soil and moist soil. Time-dependent measurements (from April to July) of resistivity sections were performed along a surface transect (94 m long) at a fixed 2 m electrode spacing. The transect included eight argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can be used to accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: a moderately resistive layer on top, corresponding to relatively dry soil (a calcareous formation with intercalations of marly strata) and interspersed with a very resistive layer corresponding to woody roots; a conductive (moist) middle layer; and, below the conductive layer, another moderately resistive layer. Throughout the experiment there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing the δ18O in xylem water and in the range of possible water sources, we argued that rain is the main water source in winter and spring but not in summer; the trees are not exploiting deep water from the aquifer, as popularly assumed, but instead use soil water at a depth of a few meters. The results of the present work confirm the idea that the roots of Argania spinosa do not grow very deep.Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 328439 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are the heavy fraction of crude oil. In oilfields, asphaltenes are known for their ability to plug wells, surface equipment and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analysed by ICP-MS, and the spectral characteristics of the deasphalting products were obtained by FTIR. A high asphaltene content in crude oil reduces the efficiency of refining processes. Moreover, the high content of heteroatoms (e.g., S, N) in asphaltenes causes further problems: environmental pollution, corrosion and poisoning of the catalyst. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and increase the efficiency of refining processes. The solvent extraction experiments with organic solvents were carried out on crude oil from JSC 'Pavlodar Oil Chemistry Refinery'. The experimental results show that the deasphalting process also leads to a decrease of Ni and V in the composition of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide and mercaptans is absorption with chemical reagents directly in the oil residue during production, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective oil refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part of the crude. For this reason, the oil is pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e. removal of V/Ni and of organic compounds with heteroatoms. Intramolecular complexes are relatively well researched on the example of the porphyrin complexes of vanadium (VO2) and nickel (Ni). ICP-MS studies of V/Ni determined the effect of the different deasphalting solvents on metal extraction at the deasphalting stage and allowed the best organic solvent to be selected. Cyclohexane (C6H12) proved to be the best deasphalting solvent, removing, according to ICP-MS, 51.2% of V and 66.4% of Ni. This paper also presents the results of a study of the physical and chemical properties and FTIR spectral characteristics of the oil with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics, which can be useful in considering questions of the origin and geochemical conditions of oil accumulation, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the asphaltene stability mechanism. The role of deasphalted crude oil fractions in asphaltene stability is described.Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 242438 Association between TNF-α and Its Receptor TNFRSF1B Polymorphism with Pulmonary Tuberculosis in Tomsk, Russia Federation
Authors: K. A. Gladkova, N. P. Babushkina, E. Y. Bragina
Abstract:
Purpose: Tuberculosis (TB), caused by Mycobacterium tuberculosis, is one of the major public health problems worldwide. The immune response to M. tuberculosis infection is a balance between inflammatory and anti-inflammatory responses in which Tumour Necrosis Factor-α (TNF-α) plays a key role as a pro-inflammatory cytokine. TNF-α is involved in various cellular immune responses via binding to its two types of membrane-bound receptors, TNFRSF1A and TNFRSF1B. Importantly, some variants of the TNFRSF1B gene have been considered possible markers of host susceptibility to TB. However, the possible impact of such TNF-α and receptor gene polymorphisms on TB cases in Tomsk has not been studied. Thus, the purpose of our study was to investigate polymorphisms of the TNF-α (rs1800629) and TNFRSF1B (rs652625 and rs525891) genes in the population of Tomsk and to evaluate their possible association with the development of pulmonary TB. Materials and Methods: The population distribution of the gene polymorphisms was investigated, and a case-control study was carried out on a group of people from Tomsk. Human blood was collected during routine patient examinations at the Tomsk Regional TB Dispensary. Altogether, 234 TB-positive patients (80 women, 154 men, average age 28 years) and 205 healthy controls (153 women, 52 men, average age 47 years) were investigated. DNA was extracted from blood plasma by the phenol-chloroform method. Genotyping was carried out by a single-nucleotide-specific real-time PCR assay. Results: First, an interpopulation comparison was carried out between healthy individuals from Tomsk and available data from the 1000 Genomes project. For the rs1800629 polymorphism, the Tomsk population differed significantly from the Japanese population (P = 0.0007) but was similar to the following European subpopulations: Italians (P = 0.052), Finns (P = 0.124) and British (P = 0.910). For polymorphism rs525891, the Tomsk group differed significantly from the population of South Africa (P = 0.019). However, rs652625 demonstrated significant differences from Asian populations: Chinese (P = 0.03) and Japanese (P = 0.004). Next, we compared healthy individuals with TB patients. No association was detected between the rs1800629 and rs652625 polymorphisms and TB. Importantly, the AT genotype of polymorphism rs525891 was significantly associated with resistance to TB (odds ratio (OR) = 0.61; 95% confidence interval (CI): 0.41-0.90; P < 0.05). Conclusion: To the best of our knowledge, the TNFRSF1B polymorphism (rs525891) is associated with TB in the Tomsk population, the AT genotype being protective (OR = 0.61). In contrast, no significant correlation was detected between the TNF-α (rs1800629) and TNFRSF1B (rs652625) gene polymorphisms and pulmonary TB cases in the Tomsk population. In conclusion, our data expand the molecular particularities associated with TB. The study was supported by a grant from the Russian Foundation for Basic Research #15-04-05852.Keywords: polymorphism, tuberculosis, TNF-α, TNFRSF1B gene
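The association statistic reported above (OR = 0.61, 95% CI 0.41-0.90) can be reproduced in form with a short sketch. The genotype counts in the 2x2 table below are hypothetical, since the abstract does not report them; they merely illustrate the standard odds-ratio and Woolf (log-OR) confidence-interval calculation.

```python
# Sketch only: odds ratio and 95% CI from a 2x2 case-control table with
# hypothetical counts (the real genotype counts are not given in the abstract).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: AT carriers vs non-carriers among TB cases and controls
or_, lo, hi = odds_ratio_ci(a=70, b=164, c=85, d=120)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```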
Procedia PDF Downloads 180437 Peculiarities of Snow Cover in Belarus
Authors: Aleh Meshyk, Anastasiya Vouchak
Abstract:
On average, snow covers Belarus for 75 days in the south-west and 125 days in the north-east. During the cold season the snowpack is often destroyed by thaws, especially at the beginning and end of winter. Over 50% of thawing days have a positive mean daily temperature, which results in complete snow melting. For instance, in December 10% of thaws occur at a mean daily temperature of 4 °C. A stable snowpack lying for over a month forms in the north-east in the first ten days of December but in the south-west only in the last ten days of December. The cover disappears in March: in the north-east in the last ten days of the month, in the south-west in the first ten days. This research takes into account that precipitation falling during the cold season can be not only liquid or solid but also of a mixed type (about 10-15% a year). Another important feature of the snow cover is its density. In Belarus, the density of freshly fallen snow ranges from 0.08-0.12 g/cm³ in the north-east to 0.12-0.17 g/cm³ in the south-west. Over time, snow settles under its own weight and after melting and refreezing. The average annual density of snow at the end of January is 0.23-0.28 g/cm³, in February 0.25-0.30 g/cm³, and in March 0.29-0.36 g/cm³. It can exceed 0.50 g/cm³ if the snow melts too fast, and the density of melting snow saturated with water can reach 0.80 g/cm³. The average maximum snow depth is 15-33 cm: the minimum is in Brest, the maximum in Lyntupy. The maximum registered snow depth ranges within 40-72 cm. The water content in the snowpack, like its depth and density, reaches its maximum in the second half of February – beginning of March. The spatial distribution of the amount of liquid in snow follows the trend described above, i.e. it increases from south-west to north-east and on the highlands. The average annual value of the maximum water content in snow ranges from 35 mm in the south-west to 80-100 mm in the north-east, and exceeds 80 mm on the central Belarusian highland. In certain years it exceeds the average annual values by 2-3 times. Moderate water content in snow (80-95 mm) is characteristic of the western highlands. The maximum water content in snow varies over the country from 107 mm (Brest) to 207 mm (Novogrudok). The maximum water content in snow also varies significantly in time (between years), which is confirmed by the high coefficient of variation (Cv). The maxima (0.62-0.69) are in the south and south-west of Belarus; the minima (0.42-0.46) are in central and north-eastern Belarus, where the snow cover is more stable. Since 1987 most gauge stations in Belarus have observed a decreasing trend in the water content in snow, which this research confirms. The deepest snow cover forms on the highlands in central and north-eastern Belarus. The Novogrudok, Minsk, Volkovysk, and Sventayny highlands are a natural orographic barrier that prevents snow-bringing air masses from penetrating into the interior of the country. The research is based on data from gauge stations in Belarus registered from 1944 to 2014.Keywords: density, depth, snow, water content in snow
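For reference, the coefficient of variation (Cv) used above is simply the standard deviation of the annual maxima of water content in snow divided by their mean; a minimal sketch with invented station data is shown below.

```python
# Illustration with invented values: Cv of annual maximum snow water content.
import numpy as np

def coefficient_of_variation(series):
    series = np.asarray(series, dtype=float)
    return series.std(ddof=1) / series.mean()

# Hypothetical annual maxima of water content in snow (mm) at one gauge station
annual_max_swe = [35, 80, 52, 110, 20, 95, 60, 45, 150, 70]
print(f"Cv = {coefficient_of_variation(annual_max_swe):.2f}")
# Per the abstract, values around 0.62-0.69 correspond to the less stable
# snow cover of the south, 0.42-0.46 to the more stable cover of the north-east.
```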
Procedia PDF Downloads 161436 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
The solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking all external forces into account. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies of earlier LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re = 10,000. The simulations were conducted for L/D = 2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with a classical Lagrangian particle dispersion model. The agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 relative to the serial code running on a single CPU.Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
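The probabilistic LBM-CA redistribution step described above can be illustrated with a much simplified sketch: a 2D (D2Q9-style) version in which the integer particle count at each node is redistributed to neighbouring nodes with probabilities weighted by the projection of the local fluid velocity onto the lattice directions. This is an assumption-laden toy version, not the authors' D3Q27 CUDA implementation, and it ignores external forces and the particle bulk density weighting.

```python
# Simplified 2D illustration (not the authors' D3Q27 CUDA code) of the
# probabilistic cellular-automata step: particles at each node are sent to
# neighbouring nodes with probabilities weighted by the local fluid velocity.
import numpy as np

# D2Q9 lattice directions (rest direction first, then 8 neighbours)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def ca_transport_step(n_particles, ux, uy, dt=1.0, rng=None):
    """Move integer particle counts n_particles[x, y] along the velocity
    field (ux, uy) by sampling a destination direction per particle."""
    rng = rng or np.random.default_rng(0)
    nx, ny = n_particles.shape
    new_counts = np.zeros_like(n_particles)
    for x in range(nx):
        for y in range(ny):
            n = n_particles[x, y]
            if n == 0:
                continue
            # Direction probability ~ positive projection of the local
            # velocity onto each lattice direction; the rest direction keeps
            # the remainder (a simple stand-in for the paper's weighting).
            proj = E[:, 0] * ux[x, y] + E[:, 1] * uy[x, y]
            w = np.maximum(proj, 0.0) * dt
            w[0] = max(1.0 - w[1:].sum(), 0.0)
            moved = rng.multinomial(n, w / w.sum())
            for k, (dx, dy) in enumerate(E):
                # Periodic wrap-around for this toy example
                new_counts[(x + dx) % nx, (y + dy) % ny] += moved[k]
    return new_counts
```

Because each node's multinomial draw is independent of the others, this kind of CA step maps naturally onto one-thread-per-node GPU kernels with per-thread random-number streams, which is consistent with the CUDA/cuRAND implementation strategy described in the abstract.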
Procedia PDF Downloads 207