Search results for: capital calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2755


475 Analysis of Socio-Economics of Tuna Fisheries Management (Thunnus Albacares Marcellus Decapterus) in Makassar Waters Strait and Its Effect on Human Health and Policy Implications in Central Sulawesi-Indonesia

Authors: Siti Rahmawati

Abstract:

Indonesia has experienced a long period of monetary crisis, followed by an upward trend in the price of fuel oil. This situation affects all aspects of the tuna fishing community: the basic needs of fishing communities increase while purchasing power falls, leading to economic and social instability as well as poorer health in fishermen's households. To understand this, the Analytical Hierarchy Process (AHP) method is applied to identify the priority model of tuna fisheries management, the cold-chain marketing channel, and the utilization levels that affect human health. The study is designed as development research with 180 respondents, and the data were analyzed by the AHP method. The development of the tuna fishery business can improve production productivity through economic empowerment activities for coastal communities, improving the competitiveness of products, developing fish processing centers, and providing internal capital for the optimal development of the fishery business. From the economic aspect, the fishery business is attractive because the benefit-cost ratio is 2.86; over the 10-year economic life of the project, B/C > 1, and therefore the investment is economically viable. From the health aspect, tuna can reduce the risk of dying from heart disease by 50% because tuna contains selenium. The consumption of 100 g of tuna meets 52.9% of the body's selenium requirement and activates the antioxidant enzyme glutathione peroxidase, which protects the body from free radicals that can stimulate various cancers. The results of the analytic hierarchy process show that the quality of tuna products is the top priority for export, along with quality control, in order to compete in the global market. The implementation of the policy can increase the income of fishermen, reduce the poverty of fishermen's households, and benefit the health of those at high risk of disease.
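The benefit-cost figure quoted above can be reproduced with a short discounted cash-flow check. The annual flows below are purely illustrative (the abstract does not report the underlying cash flows); they are chosen so that the ratio comes out at 2.86:

```python
def benefit_cost_ratio(benefits, costs, rate):
    """Discounted B/C ratio: PV(benefits) / PV(costs), end-of-year flows."""
    pv = lambda flows: sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))
    return pv(benefits) / pv(costs)

# Illustrative constant annual flows over the 10-year project life
# (with equal flows each year, the discount rate cancels out of the ratio).
benefits = [500.0] * 10
costs = [175.0] * 10
ratio = benefit_cost_ratio(benefits, costs, rate=0.10)
print(round(ratio, 2))  # 2.86 -> B/C > 1, so the investment is economically viable
```

The same check works for any flow pattern; unequal annual flows simply make the discount rate matter.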

Keywords: management of tuna, social, economic, health

Procedia PDF Downloads 307
474 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer

Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott

Abstract:

The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired and exported via DICOM to the treatment planning system (TPS). Treatment planning used four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120, and 160), as commonly used at MD Anderson Cancer Center Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with a gamma index criterion of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80%, and 95.82% for 3DCRT, IMRT, and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy), and thyroid (2.3 Gy) of all the studied techniques. In comparison, maximal doses for 3DCRT were higher than for VMAT for all studied OARs, while IMRT delivered maximal doses 26%, 5%, and 26% higher than VMAT to the esophagus, normal brain, and thyroid, respectively. The esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT, and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. 
The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8%, and 0.6% for 3DCRT, IMRT, and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criterion (over 90% of points passed), and QA results were greater than 98%. The calculations of maximal doses and OAR volumes suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
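As a sketch of how a gamma criterion such as the one above is evaluated, the following one-dimensional global gamma analysis compares a calculated dose profile with a hypothetical "measured" one. The profiles and the 1% offset are illustrative, not the study's data:

```python
import numpy as np

def gamma_pass_rate(measured, calculated, positions, dd=0.03, dta=3.0):
    """Global 1-D gamma analysis: for each measured point, take the minimum over
    all calculated points of sqrt((dose diff / DD)^2 + (distance / DTA)^2);
    the point passes if that minimum gamma is <= 1."""
    ref_max = calculated.max()  # global normalization for the dose criterion
    passed = 0
    for x_m, d_m in zip(positions, measured):
        dose_term = (d_m - calculated) / (dd * ref_max)
        dist_term = (positions - x_m) / dta
        gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(measured)

# Illustrative profiles: Gaussian-like calculated dose (6.6 Gy peak) along one
# axis in mm, versus a measurement with a uniform 1% offset.
x = np.linspace(-50, 50, 101)
calc = 6.6 * np.exp(-(x / 30.0) ** 2)
meas = calc * 1.01
print(gamma_pass_rate(meas, calc, x))  # 100.0 -- well above the 90% criterion
```

A 1% global offset stays inside the 3% dose tolerance at zero distance, so every point passes.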

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD

Procedia PDF Downloads 494
473 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working as a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. For this approach to succeed, however, it is necessary to deploy the infrastructure required for the robotic community to achieve efficient self-localization as well as relative positioning and communication between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide communication and navigation support to a second agent community tasked with exploration goals. The work focuses on the agent community in charge of overall support and, more specifically, on the positioning and navigation methods implemented in RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main factors limiting precision and resolution, such as phase and frequency noise in the RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectral regions is considered in terms of both absolute position and relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, in which the exploring agents communicate both among themselves and with the mission support agents. 
The proposed approach implements a substantial improvement in planetary navigation, since it provides self-localization capabilities for robotic agents with very low mass, volume, and power budgets, thus enabling precise navigation for agents of reduced dimensions. Furthermore, a common, shared localization radiofrequency infrastructure enables new interaction mechanisms, such as the spatial arrangement of agents over the area of interest for distributed sensing.
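As a minimal illustration of the range calculations discussed, the sketch below converts a two-way time-of-flight measurement into a range and shows the classical c/(2B) delay-resolution limit of a ranging signal. The turnaround delay and bandwidth values are assumed for illustration only:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def two_way_range(round_trip_s, turnaround_s=0.0):
    """Range from a two-way RF time-of-flight measurement, after removing the
    responder's known turnaround delay: half the one-way-equivalent delay times c."""
    return C * (round_trip_s - turnaround_s) / 2.0

def range_resolution(bandwidth_hz):
    """Classical delay-resolution limit c / (2B) for a ranging signal of bandwidth B."""
    return C / (2.0 * bandwidth_hz)

# A node 150 m away answers after an assumed 1 microsecond turnaround delay.
rt = 2 * 150.0 / C + 1e-6
print(round(two_way_range(rt, turnaround_s=1e-6), 3))  # 150.0 m
print(round(range_resolution(1e6), 1))                 # ~149.9 m per MHz of bandwidth
```

The second figure shows why carrier-phase methods, with their sensitivity to phase noise, are attractive: raw time-of-flight resolution at modest bandwidths is coarse.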

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 276
472 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing

Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang

Abstract:

The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with much furniture product neither recycled nor reused. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported to build in 80% of a product's environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can be easily upgraded to enlarge the maximum size of the printed object. Generative design is also investigated, aiming to establish a combined design methodology that allows goals, constraints, materials, and manufacturing processes to be assessed simultaneously; 'matrixing' for part amalgamation and product performance optimisation is thereby enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact institute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in 'mass customised' furniture with virtually no setup requirements. These generatively designed products can be manufactured using robot-based additive manufacturing. 
The 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.

Keywords: additive manufacturing, generative design, robot, sustainability

Procedia PDF Downloads 114
471 Piaui Solar: State Development Impulsed by Solar Photovoltaic Energy

Authors: Amanda Maria Rodrigues Barroso, Ary Paixao Borges Santana Junior, Caio Araujo Damasceno

Abstract:

In the Brazilian state of Piauí, solar energy has become one of the renewable sources targeted by internal and external investments, with the intention of leveraging the development of society. However, for a residential or business consumer to deploy this source, a high initial investment is usually needed due to its high cost. The numerous high taxes on equipment and services are among the factors that contribute to this cost and ultimately fall on the consumer. Through analysis, a way of reducing taxes is sought in order to encourage consumers to adopt photovoltaic solar energy. Thus, the objective is to implement the Piauí Solar Program in the state of Piauí to stimulate the deployment of photovoltaic solar energy through benefits granted to users, fostering state development by diversifying the state's energy matrix. The research method adopted was based on the analysis of data provided by the Teresina City Hall, by the Brazilian Institute of Geography and Statistics, and by a private company in the capital of Piauí. Account was taken of the total amount paid in Urban Building and Land Tax (IPTU), in electricity, and for the service of installing photovoltaic panels in a residence of 6 people. Through Piauí Solar, a discount of 80% would be applied to the taxes present in the budgets for implementing photovoltaic panels in homes and businesses, as well as to the IPTU. In addition, the energy savings generated after the installation of the panels were also taken into account. In the studied residence, the annual IPTU payment fell from R$ 99.83 to R$ 19.96, and the reduction of taxes in the installation budget caused its value to fall from R$ 42,744.22 to R$ 37,241.98. The annual savings in electricity bills were estimated at around R$ 6,000. 
Therefore, there is a reduction of approximately 24% in the total invested. The trend of the Piauí Solar program, then, is to bring benefits to the state, improving the living conditions of the population through the savings generated by this program. In addition, an increase in the diversification of the Piauí energy matrix can be expected with the advancement of this renewable energy.
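The headline figures above can be cross-checked with simple arithmetic (values in R$ exactly as quoted in the abstract):

```python
# Figures quoted in the abstract (R$).
iptu_before, iptu_after = 99.83, 19.96
install_before, install_after = 42_744.22, 37_241.98

iptu_discount_pct = 100 * (1 - iptu_after / iptu_before)
install_saving = install_before - install_after

print(round(iptu_discount_pct, 1))  # 80.0 -> the 80% IPTU discount
print(round(install_saving, 2))     # 5502.24 saved on the installation budget
```

Adding the roughly R$ 6,000 annual electricity savings to the installation saving is what brings the overall reduction into the neighborhood of the 24% cited.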

Keywords: development, economy, energy, taxes

Procedia PDF Downloads 121
470 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available: with a relatively small number of simulations compared to fully probabilistic methods, it yields bounds on the extremes of the system response. The random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during excavation. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and thereby reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables, and the probability share of each finite element calculation is determined from the probabilities assigned to the input ranges present in each combination. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. 
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement was observed. The comparison also showed that the random set finite element method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
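A minimal sketch of the random set mechanics described above: two input variables, each with two intervals and probability assignments, are combined into focal elements, and belief/plausibility bounds on exceeding a displacement threshold are accumulated. The intervals, masses, response function, and threshold are all hypothetical, not the study's values:

```python
# Hypothetical random sets: (interval, probability assignment) per input variable.
stiffness = [((20.0, 40.0), 0.6), ((30.0, 60.0), 0.4)]  # e.g. soil modulus, MPa
friction  = [((25.0, 30.0), 0.5), ((28.0, 35.0), 0.5)]  # friction angle, degrees

def displacement(E, phi):
    """Toy response: horizontal displacement falls as stiffness and friction rise."""
    return 1000.0 / (E * phi)  # mm, illustrative only

threshold = 0.7  # mm, hypothetical serviceability limit
belief = plausibility = 0.0
for (e_lo, e_hi), m_e in stiffness:
    for (f_lo, f_hi), m_f in friction:
        m = m_e * m_f  # joint probability share of this focal element
        # The response is monotone decreasing in both inputs, so the
        # interval corners give its bounds on this focal element.
        d_max = displacement(e_lo, f_lo)
        d_min = displacement(e_hi, f_hi)
        if d_max > threshold:   # the focal element *may* exceed the threshold
            plausibility += m
        if d_min > threshold:   # the focal element *must* exceed the threshold
            belief += m

print(round(belief, 2), round(plausibility, 2))  # 0.6 1.0
```

In the study, each corner combination corresponds to one deterministic finite element run, and the accumulated masses build the belief and plausibility distribution functions of the response.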

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 259
469 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determining the temperature field inside a fluid in motion has many practical applications, especially in the case of turbulent flow. The phenomenon is greater when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage to the Oradea thermoelectric power plant (still closed today). Basic Methods: Solving the turbulent thermal pollution problem theoretically is particularly difficult. By using semi-empirical theories or by simplifying the assumptions made, the mathematical model for further numerical simulations can be elaborated on the basis of experimental measurements. The three zones of the flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined with correction factors based on experimental measurements. Major Findings/Results: The thickness of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a solution similar to that for the velocity is used, by an analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) of 4 different diameters, between 200 and 500 mm, as present at the Oradea thermoelectric power plant. 
Conclusions: A superposition was made between the molecular viscosity and the turbulent one, followed by addition of the molecular and turbulent transfer coefficients, as required to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared with flow without a temperature gradient. The obtained results lie within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation is obtained between the Stanton number and the Prandtl number for a specific flow (with associated Reynolds number).
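A standard example of the kind of Stanton-Prandtl correlation discussed is the Chilton-Colburn analogy, St = (Cf/2) Pr^(-2/3). The Reynolds number and the Blasius friction correlation below are illustrative textbook choices, not the paper's fitted result:

```python
def stanton_chilton_colburn(cf, pr):
    """Chilton-Colburn analogy between momentum and heat transfer:
    St = (Cf / 2) * Pr**(-2/3), with Cf the Fanning friction coefficient."""
    return (cf / 2.0) * pr ** (-2.0 / 3.0)

# Illustrative smooth-pipe case: Blasius correlation at Re = 1e5.
re = 1e5
cf = 0.0791 * re ** -0.25          # Fanning friction factor, Blasius form
st = stanton_chilton_colburn(cf, pr=0.7)
print(round(st, 5))                # ~0.00282 for air-like Prandtl number
```

Correlations of this shape are what the abstract's experimentally fitted correction factors refine for the specific flow.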

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 155
468 Analysis of the Production Time in a Pharmaceutical Company

Authors: Hanen Khanchel, Karim Ben Kahla

Abstract:

Pharmaceutical companies are facing competition. Indeed, the price differences between competing products can be such that it becomes difficult to compensate for them by differences in value added, and the conditions of competition are no longer homogeneous for the players involved. The price of a product is a given that puts a company and its customer face to face. However, price setting obliges the company to consider internal factors relating to production costs and external factors such as customer attitudes, the existence of regulations, and the structure of the market in which the firm operates. In setting the selling price, the company must first take into account internal factors relating to its costs: production costs fall into two categories, fixed costs and variable costs that depend on the quantities produced. The company cannot consider selling below what the product costs. It therefore calculates the unit cost of production, to which it adds the unit cost of distribution, giving the full unit cost of the product. The company then adds its margin and thus determines its selling price. The margin is used to remunerate the capital providers and to finance the activity of the company and its investments. Production costs are related to the quantities produced: large-scale production generally reduces the unit cost of production, which is an asset for companies with mass production markets. This shows that small and medium-sized companies with limited market segments need to make greater efforts to secure their profit margins. As a result, and faced with fluctuating market prices for raw materials and increasing staff costs, the company must seek to optimize its production time in order to reduce overheads and eliminate waste, so that the customer pays only for value added. 
Based on this principle, we created a project that deals with the problem of waste in our company, with the objectives of reducing production costs and improving performance indicators. This paper presents the implementation of a Value Stream Mapping (VSM) project in a pharmaceutical company. It is structured as follows: 1) determination of the product family, 2) drawing of the current state, 3) drawing of the future state, 4) action plan and implementation.
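The cost-plus pricing logic described above can be sketched in a few lines; all figures are invented for illustration:

```python
def unit_cost(fixed_costs, variable_cost_per_unit, quantity):
    """Unit production cost: fixed costs spread over the quantity, plus variable cost."""
    return fixed_costs / quantity + variable_cost_per_unit

def selling_price(unit_production_cost, unit_distribution_cost, margin_rate):
    """Cost-plus price: full unit cost (production + distribution) marked up by the margin."""
    return (unit_production_cost + unit_distribution_cost) * (1 + margin_rate)

# Large-scale production lowers the unit cost (illustrative figures).
print(unit_cost(100_000, 2.0, 10_000))   # 12.0 per unit at 10k units
print(unit_cost(100_000, 2.0, 100_000))  # 3.0 per unit at 100k units
print(selling_price(3.0, 1.0, 0.25))     # 5.0 = (3 + 1) * 1.25
```

The scale effect in the first two lines is exactly why small producers with limited market segments struggle to protect their margins.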

Keywords: VSM, waste, production time, kaizen, cartography, improvement

Procedia PDF Downloads 136
467 Applying Risk Taking in Islamic Finance: A Fiqhī Viewpoint

Authors: Mohamed Fairooz Abdul Khir

Abstract:

The linkage between liability for risk and the legitimacy of reward is a governing principle that must be fully observed in financial transactions; it is the cornerstone of any Islamic business or financial deal. The absence of the risk taking principle may give rise to numerous prohibited elements, such as ribā, gharar, and gambling, that violate the objectives of financial transactions. However, the fiqhī domains from which it emanates have not been clearly spelled out by the scholars. In addition, the concept of risk taking in relation to contemporary risks associated with financial contracts, such as credit risk, liquidity risk, reputational risk, and market risk, needs further scrutiny as regards its Sharīʿah bases. Hence, this study is significant in demonstrating that the absence of the risk taking concept in Islamic financial instruments gives rise to prohibited elements, particularly ribā. The study is primarily intended to clarify the concept of risk in Islamic financial transactions from the fiqhī perspective and to analytically evaluate selected issues involving risk taking based on the established fiqhī concept of risk taking. The selected issues include, among others, charging the cost of funds to defaulting customers, holding the lessee liable for the total loss of the leased asset under ijārah thumma al-bayʿ, and capital guarantees under mushārakah-based instruments. This is a library research in which data has been collected from various materials such as classical fiqh books, regulators' policy guidelines, and journal articles. The study employed deductive and inductive methods to analyze the data critically in search of conclusive findings. It suggests that business risks have to be evaluated based on their subjects, namely (i) property (māl) and (ii) work (ʿamal), to ensure that Islamic financial instruments structured on certain Sharīʿah principles are not diverted from the risk taking concept embedded in them. 
Analysis of the selected cases substantiates that when the risk taking principle is breached, prohibited elements such as ribā, gharar, and maysir do arise, and that they impede the realization of the maqāṣid al-Sharīʿah intended from Islamic financial contracts.

Keywords: Islamic finance, ownership risk, ribā, risk taking

Procedia PDF Downloads 319
466 The EU Omnipotence Paradox: Inclusive Cultural Policies and Effects of Exclusion

Authors: Emmanuel Pedler, Elena Raevskikh, Maxime Jaffré

Abstract:

Can the cultural geography of European cities be durably managed by European policies? To answer this question, two hypotheses can be proposed: (1) European cultural policies are able to erase cultural inequalities between territories through the creation of new areas of cultural attractiveness in each beneficiary neighborhood, city, or country; or (2) each European region, historically rooted in a number of endogenous socio-historical, political, or demographic factors, is not receptive to exogenous political influences, so that the cultural attractiveness of a territory is difficult to measure and to influence through top-down policies in the long term. How do these two logics - European and local - interact and contribute to the emergence of a valued, popular sense of a common European cultural identity? Does this constant interaction between historical backgrounds and new political concepts encourage a positive identification with the European project? European cultural policy programs such as the ECC (European Capital of Culture) seek to develop new forms of civic cohesion through inclusive and participative cultural events. The cultural assets of a city elected 'ECC' are mobilized to attract a wide range of new audiences, including populations poorly integrated into local cultural life and consequently distant from pre-existing cultural offers. In the current context of increasingly heterogeneous individual perceptions of Europe, the ECC program aims to promote cultural forms and institutions that should accelerate both territorial and cross-border European cohesion. The new cultural consumption pattern is conceived to stimulate integration and mobility, but also to create a legitimate, transnational ideal European citizen type. 
Our comparative research confronts contrasting cases of European Capitals of Culture from the south and the north of Europe, cities recently concerned by the ECC political mechanism and cities elected ECC in the past, and multi-centered vs. highly centralized cultural models. We aim to explore the impacts of European policies on urban cultural geography, and also to understand the current obstacles to their efficient implementation.

Keywords: urbanism, cultural policies, cultural institutions, european cultural capitals, heritage industries, exclusion effects

Procedia PDF Downloads 249
465 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in five states and a representative table grape vineyard in California. The panels provided production, budget, and financial-related information that are typical for vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budget, 10-year projected financial statements have been developed for the representative vineyard and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members, and the extent to which the development of new technology was deemed to be feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating new technology. 
Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
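The break-even calculation described, where the maximum price for new technology is the NPV of the displaced labor cost, can be sketched as follows. The annual labor cost and discount rate are assumed for illustration, not taken from the grower panels:

```python
def npv(cashflows, rate):
    """Present value of end-of-year cash flows discounted at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# Hypothetical annual labor cost for one vineyard task over the 7-year horizon.
annual_labor_cost = 12_000.0
labor_stream = [annual_labor_cost] * 7
discount_rate = 0.06

# Break-even: the grower can pay up to the NPV of the labor the machine replaces,
# net of owning and operating costs (folded into the flows here for simplicity).
max_tech_price = npv(labor_stream, discount_rate)
print(round(max_tech_price, 2))  # the most a grower could pay for the technology
```

With constant flows this reduces to the standard annuity factor times the annual cost, so the same sketch applies to any of the production tasks once its labor cost is known.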

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 247
464 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell

Authors: Deborah Eric, Abbas Ahmad Khan

Abstract:

Lead halide perovskite quantum dots have attracted immense scientific and technological interest for photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx based quantum dots for use in intermediate band solar cells (IBSC). These materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. The quantum dot layer optimizes the position and bandwidth of the intermediate band (IB) that lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with spherical, cubic, and conical geometries has been embedded in a CsPbBr3 matrix. Bound-state wavefunctions give rise to minibands, which result in the formation of the IB; if there is more than one miniband, more than one IB is possible, and optimization of the QD size yields more IBs in the forbidden region. The one-band time-independent Schrödinger equation in the effective mass approximation with a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation, and the eigenenergies of the quasi-bound states are obtained from an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring CsPbI3 barriers. Electronic states are computed by considering the quantum dot and wetting layer assembly. The results show that varying the quantum dot size affects the energy pinning of the QD: changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. 
The quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground state energy at small radius. Increasing the wetting layer thickness exhibits energy signatures similar to the bulk material for each QD size.
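As a minimal illustration of the one-band effective-mass calculation described above, the sketch below finds the lowest even bound state of a 1-D finite potential well by bisection on the matching condition. The well depth, width, and effective mass are assumed round numbers for illustration, not the paper's material parameters:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M0 = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19    # J per eV

def even_state_mismatch(E, V0, L, m_eff):
    """Matching condition k*tan(k*L/2) - kappa for even states of a finite well.
    E and V0 in eV, L in m, m_eff in units of the electron mass."""
    m = m_eff * M0
    k = math.sqrt(2 * m * E * EV) / HBAR              # wavenumber inside the well
    kappa = math.sqrt(2 * m * (V0 - E) * EV) / HBAR   # decay constant in the barrier
    return k * math.tan(k * L / 2) - kappa

def ground_state_energy(V0, L, m_eff):
    """Bisection for the lowest even solution, restricted below the first pole of
    tan (k*L/2 < pi/2), where the mismatch is guaranteed to change sign."""
    m = m_eff * M0
    e_pole = (HBAR * math.pi / L) ** 2 / (2 * m) / EV
    lo, hi = 1e-9, min(V0, e_pole) - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if even_state_mismatch(mid, V0, L, m_eff) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed parameters: 0.3 eV confinement, 5 nm well, effective mass 0.2 m0.
E0 = ground_state_energy(V0=0.3, L=5e-9, m_eff=0.2)
print(round(E0, 4))  # ground state energy in eV, well below the barrier height
```

A 3-D dot with BenDaniel-Duke boundary conditions replaces this transcendental equation with a radial or finite-element eigenvalue problem, but the bound-state structure it produces is of the same kind.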

Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation

Procedia PDF Downloads 154
463 Room Temperature Ionic Liquids Filled Mixed Matrix Membranes for CO2 Separation

Authors: Asim Laeeq Khan, Mazhar Amjad Gilani, Tayub Raza

Abstract:

The use of fossil fuels for energy generation leads to the emission of greenhouse gases particularly CO2 into the atmosphere. To date, several techniques have been proposed for the efficient removal of CO2 from flue gas mixtures. Membrane technology is a promising choice due to its several inherent advantages such as low capital cost, high energy efficiency, and low ecological footprint. One of the goals in the development of membranes is to achieve high permeability and selectivity. Mixed matrix membranes comprising of inorganic fillers embedded in polymer matrix are a class of membranes that have showed improved separation properties. One of the biggest challenges in the commercialization if mixed matrix membranes are the removal of non-selective voids existing at the polymer-filler interface. In this work, mixed matrix membranes were prepared using polysulfone as polymer matrix and ordered mesoporous MCM-41 as filler materials. A new approach to removing the interfacial voids was developed by introducing room temperature ionic (RTIL) at the polymer-filler interface. The results showed that the imidazolium based RTIL not only provided wettability characteristics but also helped in further improving the separation properties. The removal of interfacial voids and good contact between polymer and filler was verified by SEM measurement. The synthesized membranes were tested in a custom built gas permeation set-up for the measurement of gas permeability and ideal gas selectivity. The results showed that the mixed matrix membranes showed significantly higher CO2 permeability in comparison to the pristine membrane. In order to have further insight into the role of fillers, diffusion and solubility measurements were carried out. The results showed that the presence of highly porous fillers resulted in increasing the diffusion coefficient while the solubility showed a slight drop. 
The RTIL-filled membranes showed higher CO2/CH4 and CO2/N2 selectivity than unfilled membranes, while the permeability dropped slightly. The increase in selectivity was due to the highly selective RTIL used in this work. The study revealed that RTIL-filled mixed matrix membranes are an interesting candidate for gas separation membranes.
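The diffusion and solubility measurements above reflect the solution-diffusion picture, in which permeability is the product of a diffusion coefficient and a solubility coefficient, and ideal selectivity is the ratio of pure-gas permeabilities. A minimal sketch of these two relations; all numerical values are hypothetical placeholders, not measurements from the study:

```python
def permeability(diffusivity, solubility):
    """Solution-diffusion model: permeability P = D * S."""
    return diffusivity * solubility

def ideal_selectivity(p_fast, p_slow):
    """Ideal selectivity: ratio of pure-gas permeabilities."""
    return p_fast / p_slow

# Hypothetical, unit-free values: the porous filler raises D for CO2
# while S drops slightly.
p_co2 = permeability(diffusivity=8.0, solubility=2.5)
p_ch4 = permeability(diffusivity=1.0, solubility=1.0)
print(ideal_selectivity(p_co2, p_ch4))  # CO2/CH4 ideal selectivity: 20.0
```

The same ratio with N2 permeability in the denominator gives the CO2/N2 selectivity reported in the abstract.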

Keywords: ionic liquids, CO2 separation, membranes, mixed matrix membranes

Procedia PDF Downloads 464
462 An Analysis of the Recent Flood Scenario (2017) of the Southern Districts of the State of West Bengal, India

Authors: Soumita Banerjee

Abstract:

The State of West Bengal is watered by innumerable rivers, which differ in nature between the northern and southern parts of the state. The southern part of West Bengal is mainly drained by the river Bhagirathi-Hooghly, whose major distributaries and tributaries have divided this major river basin into many sub-basins such as the Ichamati-Bidyadhari, Pagla-Bansloi, Mayurakshi-Babla, Ajay, Damodar and Kangsabati, to name a few. These rivers drain the districts of Bankura, Burdwan, Hooghly, Nadia, Purulia, Birbhum, Midnapore, Murshidabad, North 24-Parganas, Kolkata, Howrah and South 24-Parganas. The southern part of the state has a large number of flood-prone blocks. The factors responsible for the flood situation are the shape and size of the catchment area, its steep gradient from plateau to flat terrain, river bank erosion and siltation, tidal conditions especially in the lower Ganga Basin, and very poor maintenance of the embankments, which are mostly used as communication links. Along with these factors, the DVC (Damodar Valley Corporation) plays an important role in both generating (through the release of water) and controlling the flood situation. This year the whole of Gangetic West Bengal was flooded due to high-intensity, long-duration rainfall and the release of water from the Durgapur Barrage. As most of the rivers are interstate in nature, floods at times also result from the release of water from the dams of neighbouring states like Jharkhand. Other than embankments, there are no structural measures for combating floods in West Bengal. This paper tries to analyse the reasons behind this year's flood situation, with the help of climatic data collected from the India Meteorological Department, flood-related data from the Irrigation and Waterways Department, West Bengal, and GPM (Global Precipitation Measurement) data for rainfall analysis. 
Based on a threshold value derived from the available past flood data, it is possible to predict flood events that may occur in the near future, and with the help of social media such warnings can be spread to the public within a very short span of time. On a larger, governmental scale, raising the settlements situated on either bank of the river can yield a better result than building embankments.
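One simple way to derive such a warning threshold is to take a high quantile of rainfall recorded during past flood events and flag new readings that exceed it. The sketch below illustrates this idea only; the record values and the 90th-percentile choice are assumptions for illustration, not figures from the study:

```python
def rainfall_threshold(past_event_rainfall_mm, quantile=0.9):
    """Rainfall value at the given quantile of past flood-event records."""
    data = sorted(past_event_rainfall_mm)
    # Nearest-rank quantile on the sorted record.
    return data[int(quantile * (len(data) - 1))]

def flood_alert(reading_mm, threshold_mm):
    """True when a new rainfall reading reaches the warning threshold."""
    return reading_mm >= threshold_mm

# Hypothetical event rainfall totals in mm.
past = [120, 95, 180, 150, 210, 130, 170, 160, 140, 200]
threshold = rainfall_threshold(past)
print(threshold, flood_alert(205, threshold))  # 200 True
```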

Keywords: dam failure, embankments, flood, rainfall

Procedia PDF Downloads 213
461 Rheometer Enabled Study of Tissue/biomaterial Frequency-Dependent Properties

Authors: Polina Prokopovich

Abstract:

Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant-load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, greatly reducing the impact of the information gathered and hindering the understanding of the mechanisms involved in the replacement, development and pathology of this tissue. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing that can be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is easy to use and requires only a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues. Moduli increased with increasing frequency and applied load; mature cartilage generally had the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissues. 
The rheometer-based methodology developed was validated by the close agreement of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained with the more complex testing techniques available in the literature. Rheometry is comparatively simple, does not require highly capital-intensive machinery, and staff training is more accessible; thus the use of a rheometer would represent a cost-effective approach for the determination of the frequency-dependent properties of cartilage, yielding more comprehensive and impactful results for both healthcare professionals and R&D.
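The dynamic moduli reported above are related through the standard viscoelastic identities: the magnitude of the complex shear modulus combines the storage modulus G′ and loss modulus G″, and their ratio gives the phase lag. A minimal sketch of these relations with hypothetical sample values, not data from the study:

```python
import math

def complex_modulus(g_storage, g_loss):
    """|G*| = sqrt(G'^2 + G''^2)."""
    return math.hypot(g_storage, g_loss)

def phase_angle_deg(g_storage, g_loss):
    """Phase lag delta: 0 deg = purely elastic, 90 deg = purely viscous."""
    return math.degrees(math.atan2(g_loss, g_storage))

# Hypothetical moduli (e.g. in MPa) for one frequency point.
print(complex_modulus(3.0, 4.0))   # 5.0
print(phase_angle_deg(1.0, 1.0))   # ~45 degrees
```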

Keywords: tissue, rheometer, biomaterial, cartilage

Procedia PDF Downloads 63
460 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing results that reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system applicable to most persons was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as personalization objectives for the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted against the collected data and waveforms to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function defined as the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. The results show a slight error between collected and simulated data. 
The average relative root mean square errors over all optimization objectives of the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: These slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of an LPM of the blood circulatory system. An LPM with individually optimized parameters can output individual physiological indicators, which are applicable to the numerical simulation of patient-specific hemodynamics.
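The optimization loop described above can be sketched compactly: perturb a sensitive parameter, re-simulate, and accept or reject the change by the simulated-annealing rule, with RMSE against the collected data as the objective. The model below is a toy stand-in (a single scalar gain, not the actual lumped parameter solver), and all data are invented for illustration:

```python
import math
import random

def rmse(simulated, collected):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((s - c) ** 2 for s, c in zip(simulated, collected))
                     / len(collected))

def simulate(param, inputs):
    """Placeholder for the LPM solver: output scales linearly with param."""
    return [param * x for x in inputs]

def anneal(collected, inputs, start=1.0, steps=500, t0=1.0):
    """Simulated annealing on one parameter, minimizing RMSE."""
    random.seed(0)
    param = start
    err = rmse(simulate(param, inputs), collected)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9       # linear cooling schedule
        cand = param + random.uniform(-0.1, 0.1)  # small random perturbation
        cand_err = rmse(simulate(cand, inputs), collected)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand_err < err or random.random() < math.exp((err - cand_err) / temp):
            param, err = cand, cand_err
    return param, err

inputs = [1.0, 2.0, 3.0, 4.0]
collected = [2.0, 4.0, 6.0, 8.0]   # generated with a "true" parameter of 2.0
best, best_err = anneal(collected, inputs)
print(round(best, 2), round(best_err, 3))
```

In the study itself, each sensitive parameter of the closed-loop LPM would be optimized this way over 500 iterations.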

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 127
459 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), and state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within specified static and dynamic voltage windows and temperature ranges, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not physics-based, it can never serve as a prognostic model that predicts battery state-of-health and averts a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles where there is insufficient time for a significant current distribution to develop within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here. 
The use of multiple particles, combined with either linear or nonlinear charge-transfer reaction kinetics, makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
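For contrast with the physics-based models discussed above, the low-cost RC circuit model a BMS commonly uses can be sketched in a few lines: a first-order Thevenin model where the terminal voltage is the open-circuit voltage minus the ohmic drop and a relaxing RC branch voltage. All parameter values (OCV, R0, R1, C1) are hypothetical, not from the paper:

```python
import math

def simulate_rc_cell(currents, dt, ocv=3.7, r0=0.01, r1=0.02, c1=2000.0):
    """Terminal voltage trace for a discharge current profile (A).

    First-order Thevenin model: V = OCV - I*R0 - V_rc, where the RC
    branch voltage V_rc relaxes with time constant tau = R1*C1.
    """
    tau = r1 * c1
    v_rc, trace = 0.0, []
    for i in currents:
        # Exact discrete-time update of the RC branch over one step.
        alpha = math.exp(-dt / tau)
        v_rc = alpha * v_rc + r1 * (1.0 - alpha) * i
        trace.append(ocv - i * r0 - v_rc)
    return trace

# Constant 10 A discharge for 60 s: voltage sags toward OCV - I*(R0 + R1).
v = simulate_rc_cell([10.0] * 60, dt=1.0)
print(round(v[0], 3), round(v[-1], 3))
```

Such a model cannot resolve Li⁺ plating potential, which is exactly the limitation the multi-particle reduced-order model addresses.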

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 96
458 Assessing the Effect of Urban Growth on Land Surface Temperature: A Case Study of Conakry Guinea

Authors: Arafan Traore, Teiji Watanabe

Abstract:

Conakry, the capital city of the Republic of Guinea, has experienced rapid urban expansion and population increase in the last two decades, which have resulted in remarkable local weather and climate change, raised energy demand and pollution, and threatened social, economic and environmental development. In this study, the spatiotemporal variation of the land surface temperature (LST) is retrieved to characterize the effect of urban growth on the thermal environment and to quantify its relationship with two biophysical indices, the normalized difference vegetation index (NDVI) and the normalized difference built-up index (NDBI). Landsat TM and OLI/TIRS data acquired in 1986, 2000 and 2016, respectively, were used for LST retrieval and land use/cover change analysis. A quantitative analysis based on the integration of remote sensing and a geographic information system (GIS) revealed an important increase in the average LST, from 25.21°C in 1986 to 27.06°C in 2000 and 29.34°C in 2016, corresponding to an average gain in surface temperature of 4.13°C over the 30-year study period. Additionally, an analysis using the Pearson correlation coefficient (r) between LST and the biophysical indices revealed a negative relationship between LST and NDVI and a strong positive relationship between LST and NDBI. This implies that an increase in the NDVI value can reduce LST intensity, while conversely an increase in the NDBI value may strengthen LST intensity in the study area. Although Landsat data were found efficient in assessing the thermal environment in Conakry, the method needs to be refined with in situ measurements of LST in future studies. The results of this study may assist urban planners, scientists and policy makers concerned about climate variability in making decisions that will enhance sustainable environmental practices in Conakry.
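The two indices and the correlation step above follow standard formulas: NDVI from the near-infrared and red bands, NDBI from the shortwave-infrared and near-infrared bands, and Pearson's r between per-pixel LST and index values. A minimal sketch with invented reflectance and temperature values, not the study's Landsat data:

```python
import math

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized difference built-up index."""
    return (swir - nir) / (swir + nir)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-pixel samples: LST rises as NDBI rises and NDVI falls.
lst     = [25.2, 26.1, 27.0, 28.4, 29.3]
ndvi_px = [0.62, 0.55, 0.41, 0.30, 0.22]
ndbi_px = [-0.20, -0.10, 0.05, 0.15, 0.25]
print(round(pearson_r(lst, ndvi_px), 3), round(pearson_r(lst, ndbi_px), 3))
```

With real imagery the same computation would run over all pixels of the classified scenes.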

Keywords: Conakry, land surface temperature, urban heat island, geographic information system, remote sensing, land use/cover change

Procedia PDF Downloads 230
457 The Role of the Basel Accords in Mitigating Systemic Risk

Authors: Wassamon Kun-Amornpong

Abstract:

When a financial crisis occurs, law and regulatory reform follows in order to manage the turmoil and prevent a future crisis. One of the most important regulatory efforts to help cope with systemic risk and financial crises is the third version of the Basel Accord. Basel III has introduced measures and tools (e.g., the systemic risk buffer, the countercyclical buffer, the capital conservation buffer and liquidity risk requirements) in order to mitigate systemic risk. Nevertheless, the effectiveness of these measures in adequately addressing the problem of contagious runs that can quickly spread throughout the financial system is questionable. This paper seeks to contribute to the knowledge regarding the role of the Basel Accords in mitigating systemic risk. The research question is: to what extent can the Basel Accords help control systemic risk in the financial markets? The paper tackles this question by analysing the concept of systemic risk. It then examines the weaknesses of the Basel Accords before and after the global financial crisis of 2008. Finally, it suggests some possible solutions for improving the Basel Accords. The rationale of the study is the fact that academic work on systemic risk and financial crises is largely conducted from an economic or financial perspective; there is comparatively little research from the legal and regulatory perspective. The finding of the paper is that there are problems in all three pillars of the Basel Accords. With regard to Pillar I, the risk model is excessively complex while the benefits of its complexity are doubtful. Concerning Pillar II, the effectiveness of risk-based supervision in preventing systemic risk still depends largely upon its design and implementation. Factors such as the organizational culture of the regulator and the political context within which risk-based supervision operates might be barriers to the success of Pillar II. 
Meanwhile, Pillar III cannot provide adequate market discipline, as market participants do not always act in a rational way. In addition, the too-big-to-fail perception reduces the incentives of market participants to monitor risks. There have been some developments in resolution measures (e.g., TLAC and MREL) which might potentially help strengthen the incentive of market participants to monitor risks; however, those measures have their own weaknesses. The paper argues that if the weaknesses in the three pillars are resolved, the Basel Accords can be expected to contribute to the mitigation of systemic risk in a more significant way in the future.
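The buffers mentioned above stack on top of the Pillar 1 minimum: under Basel III a bank's Common Equity Tier 1 (CET1) ratio must cover the 4.5% minimum plus the 2.5% capital conservation buffer plus any countercyclical and systemic risk buffers set by supervisors. A toy calculation of this layered requirement; the headline ratios are the Basel III figures, while the bank's capital and risk-weighted assets below are hypothetical:

```python
def required_cet1_ratio(countercyclical=0.0, systemic=0.0,
                        minimum=0.045, conservation=0.025):
    """Total CET1 requirement as a fraction of risk-weighted assets (RWA)."""
    return minimum + conservation + countercyclical + systemic

def meets_requirement(cet1_capital, risk_weighted_assets, required):
    """Does the bank's CET1 ratio reach the layered requirement?"""
    return cet1_capital / risk_weighted_assets >= required

# Hypothetical bank: 9.5 of CET1 capital against 100.0 of RWA, facing a 1%
# countercyclical buffer and a 1% systemic risk buffer.
req = required_cet1_ratio(countercyclical=0.01, systemic=0.01)
print(round(req, 3), meets_requirement(9.5, 100.0, req))
```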

Keywords: Basel accords, financial regulation, risk-based supervision, systemic risk

Procedia PDF Downloads 116
456 Physical Planning Strategies for Disaster Mitigation and Preparedness in Coastal Region of Andhra Pradesh, India

Authors: Thimma Reddy Pothireddy, Ramesh Srikonda

Abstract:

India is frequently prone to natural disasters such as floods, droughts, cyclones, earthquakes and landslides due to its geographical setting, a persistent phenomenon observed over the last ten decades. A recent survey indicates that about 60% of the landmass is prone to earthquakes of various intensities on the Richter scale, over 40 million hectares are prone to floods, about 8% of the total area is prone to cyclones, and 68% of the area is vulnerable to drought. Climate change is likely to be perceived through the experience of extreme weather events. There is growing societal concern about climate change, given the potential impacts of associated natural hazards such as cyclones, flooding, earthquakes and landslides. Among recent natural calamities, Cyclone Hudhud made landfall on the northern coast of Andhra Pradesh at Visakhapatnam on 12 October 2014 with wind speeds ranging between 175 and 200 kmph, and records show that tidal waves reached heights of 14 m and above; this alarms us to focus critically on planning issues so as to find appropriate solutions. The existing institutional set-up, with its responsive disaster management mechanism, is effective, but considerations at the settlement planning level to allow mitigation operations are not adequate. This paper seeks to understand how the response to climate change may happen through adaptation to climate hazards, and to work out an appropriate mechanism and disaster-receptive settlement planning for responding to natural (and climate-related) calamities, particularly cyclones and floods. Statistics indicate 40 million hectares of flood-prone area (5% of the total) and 1,853 km of cyclone-prone coastline in India, so it is essential and crucial to have appropriate physical planning considerations to improve preparedness and to operate mitigation measures effectively to minimize loss and damage. 
The Vijayawada capital region, which is susceptible to cyclones and floods, has been studied with respect to trajectory analysis in order to work out risk vulnerability and to integrate disaster mitigation into physical planning considerations.

Keywords: meta analysis, vulnerability index, physical planning, trajectories

Procedia PDF Downloads 237
455 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars

Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic

Abstract:

Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom for innovative design for remanufacturing. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced power train wear of electric cars, combined with high manufacturing costs for batteries, allows new business models and even second-life applications. Modular and interchangeable battery pack designs enable the replacement of defective or outdated battery cells, allowing additional cost savings and a prolonged lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components, and how to address their potentials with elaborate designs. Based on a brief overview of implemented remanufacturing structures in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy for lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis prognosticates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use and assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in resource consumption and global warming potential. 
Exemplary design guidelines for future remanufacturable lithium ion batteries, which consider modularity, interfaces and disassembly, are used to illustrate the findings. For one guideline, potential cost improvements were calculated, and upcoming challenges are pointed out.
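The cost comparison described above (labor and material costs plus annualized capital costs for equipment and facilities) can be reduced to a toy per-unit calculation. Every figure below is a hypothetical placeholder chosen only to show the structure of the comparison, not data from the study:

```python
def unit_cost(labor, material, capital_invest, units_per_year, years):
    """Per-unit cost with capital investment spread over total units produced."""
    return labor + material + capital_invest / (units_per_year * years)

# Hypothetical scenario: remanufacturing needs more labor but far less
# material (the pack housing and many cells are reused) and less equipment.
new_pack = unit_cost(labor=40.0, material=600.0,
                     capital_invest=2_000_000.0, units_per_year=10_000, years=5)
reman_pack = unit_cost(labor=120.0, material=150.0,
                       capital_invest=1_000_000.0, units_per_year=10_000, years=5)
print(new_pack, reman_pack, reman_pack < new_pack)  # 680.0 290.0 True
```

In the paper's models, a remanufactured pack is judged cost-efficient precisely when such a comparison comes out in its favor across realistic parameter ranges.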

Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing

Procedia PDF Downloads 344
454 The Moderating Role of the Employees' Green Lifestyle to the Effect of Green Human Resource Management Practices to Job Performance: A Structural Equation Model (SEM)

Authors: Lorraine Joyce Chua, Sheena Fatima Ragas, Flora Mae Tantay, Carolyn Marie Sunio

Abstract:

The Philippines is one of the countries most affected by weather-related disasters. The occurrence of natural disasters in this country is increasing due to environmental degradation, making environmental preservation a growing trend in society, including the corporate world. Most organizations implemented green practices in order to lower expenses, unaware that some of these practices were already part of a new trend in human resource management known as Green Human Resource Management (GHRM). GHRM is when business organizations implement HR policies, programs, processes and techniques that bring environmental impact and sustainability practices to the organization. In relation to this, the study hypothesizes that implementing GHRM practices in the workplace will spill over into employees' lifestyles, and that such lifestyles may moderate the impact of GHRM practices on their job performance. Private industries located in the Philippines' National Capital Region (NCR) were purposively selected for the purpose of this study; they must be ISO 14001 certified or currently aiming for such certification. The employee respondents were randomly selected and asked to answer a reliable and valid researcher-made questionnaire. Structural equation modeling (SEM) supported the hypothesis that GHRM practices may spill over into employees' lifestyles, stimulating individuals to start a green lifestyle which moderates the impact of GHRM on their job performance. It can also be implied that GHRM practices help shape employees to become environmentally aware and responsible, which may help them in preserving the environment. The findings of this study may encourage human resource practitioners to implement GHRM practices in the workplace in order to take part in sustaining the environment while maintaining or improving employees' job performance and keeping them motivated. 
This study can serve as a basis for future research regarding the importance of strengthening GHRM implementation in the Philippines. Future studies may focus more on the impact of GHRM on other factors, such as the job loyalty and job satisfaction of employees belonging to specific industries, which would greatly contribute to the GHRM community in the Philippines.

Keywords: GHRM practices, Green Human Resource Management, Green Lifestyle, ISO14001, job performance, Philippines

Procedia PDF Downloads 254
453 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand

Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat

Abstract:

The Precise Point Positioning (PPP) technique improves accuracy by using precise satellite orbit and clock correction data, but it involves complicated methods and high costs. Currently, there are several online processing service providers that offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning, and CenterPoint RTX post-processing by Trimble, and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the lowest distance root mean square (DRMS) value, 0.0029, which is good enough for calculating the movement of tectonic plates. In the second part, we use AUSPOS to process the data of the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 (Mw) earthquake occurred north of Sumatra that heavily affected all nearby countries, including Thailand, and introduced errors into the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement differs across the geodetic network and is relatively large, so surveys must continue every year in order to improve the GPS coordinate system. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of the geodetic network, consisting of 19 pins in Thailand, over the period October 2013 to November 2017. Finally, the results are displayed on a simulation map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand, which moved 11.04 cm in the south-western direction. 
Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., toward the direction observed before the earthquake. The magnitude of the movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to maintain the accuracy of the geodetic network of Thailand.
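The IDW interpolation used for the simulation map weights each surveyed pin by the inverse of its distance (raised to a power) from the location being estimated. A minimal sketch of the method; the pin coordinates and displacements below are hypothetical, not the study's survey data:

```python
def idw(points, target, power=2):
    """Inverse distance weighting interpolation.

    points: list of ((x, y), value) samples; returns the estimate at target.
    """
    num = den = 0.0
    for (x, y), value in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value                    # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)         # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den

# Hypothetical pins: (x, y) position and displacement magnitude in cm.
pins = [((0.0, 0.0), 4.0), ((1.0, 0.0), 6.0), ((0.0, 1.0), 5.0)]
print(idw(pins, (0.5, 0.5)))  # 5.0 (all three pins are equidistant)
```

ArcMap's IDW tool performs the same weighted average over a raster of target cells.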

Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting

Procedia PDF Downloads 180
452 Modulating Photoelectrochemical Water-Splitting Activity by Charge-Storage Capacity of Electrocatalysts

Authors: Yawen Dai, Ping Cheng, Jian Ru Gong

Abstract:

Photoelectrochemical (PEC) water splitting using semiconductors (SCs) provides a convenient way to convert sustainable but intermittent solar energy into clean hydrogen energy, and it has been regarded as one of the most promising technologies to address the energy crisis and environmental pollution of modern society. However, the record energy conversion efficiency of a PEC cell (~3%) is still far below the commercialization requirement (~10%). The sluggish kinetics of the oxygen evolution reaction (OER) half reaction on photoanodes is a significant factor limiting PEC device efficiency, and electrocatalysts (ECs) are commonly deposited on SCs to accelerate hole injection for the OER. However, an active EC cannot guarantee enhanced PEC performance, since the newly emerged SC-EC interface complicates the interfacial charge behavior. Herein, α-Fe2O3 photoanodes coated with Co3O4 and CoO ECs are taken as the model system to gain fundamental understanding of the EC-dependent interfacial charge behavior. Intensity-modulated photocurrent spectroscopy and electrochemical impedance spectroscopy were used to investigate the competition between interfacial charge transfer and recombination, which was found to be dominated by the charge storage capacities of the ECs. The combined results indicate that both ECs can store holes and increase the hole density on the photoanode surface. This is a double-edged sword: it benefits the multi-hole OER, but it also aggravates SC-EC interfacial charge recombination due to Coulomb attraction, leading to a nonmonotonic variation of PEC performance with increasing surface hole density. Co3O4 has a low hole storage capacity that brings limited interfacial charge recombination, so the increased surface holes can be efficiently utilized for the OER to generate an enhanced photocurrent. 
In contrast, CoO has an overlarge hole storage capacity that causes severe interfacial charge recombination, which hinders hole transfer to the electrolyte for the OER. Therefore, the PEC performance of α-Fe2O3 is improved by Co3O4 but decreased by CoO, despite the similar electrocatalytic activity of the two ECs. First-principles calculations were conducted to further reveal how the charge storage capacity depends on an EC's intrinsic properties, demonstrating that the larger hole storage capacity of CoO relative to Co3O4 is determined by their Co valence states and original Fermi levels. This study puts forward a new strategy for manipulating interfacial charge behavior and the resultant PEC performance via the charge storage capacity of ECs, providing insightful guidance for interface design in PEC devices.

Keywords: charge storage capacity, electrocatalyst, interfacial charge behavior, photoelectrochemistry, water-splitting

Procedia PDF Downloads 127
451 Serum Zinc Level in Patients with Multidrug Resistant Tuberculosis

Authors: Nilima Barman, M. Atiqul Haque, Debabrata Ghosh

Abstract:

Background: Zinc, one of the vital micronutrients, has an important role in the immune system. Hypozincemia affects host defense by reducing the number of circulating T cells and the phagocytic activity of other cells, which ultimately impairs cell-mediated immunity [1, 2]. The immune system is detrimentally suppressed in multidrug-resistant tuberculosis (MDR-TB) [3, 4], a major threat to TB control worldwide [5]. As zinc deficiency causes immune suppression, we assume that it might have a role in the development of MDR-TB. Objectives: To estimate the serum zinc level in newly diagnosed multidrug-resistant tuberculosis (MDR-TB) patients in comparison with that of newly diagnosed pulmonary TB (NdPTB) patients and healthy individuals. Materials and Methods: This study was carried out in the Department of Public Health and Informatics, Bangabandhu Sheikh Mujib Medical University, Dhaka, in collaboration with the National Institute of Diseases of the Chest and Hospital (NIDCH), Bangladesh, from March 2012 to February 2013. Of a total of 337 respondents, 107 were MDR-TB patients enrolled from NIDCH, 69 were NdPTB patients and 161 were healthy adults. All NdPTB patients and healthy adults were randomly selected from Sirajdikhan subdistrict of Munshiganj District, a rural community 22 kilometers south of the capital city, Dhaka. Serum zinc level was estimated by atomic absorption spectrophotometry from early-morning fasting blood samples and evaluated against the normal range of 70 to 120 µg/dL [6]. Results: Males were predominant in the study groups (p > 0.05). Mean (SD) serum zinc levels in the MDR-TB, NdPTB and healthy adult groups were 65.14 (12.52), 75.22 (15.89), and 87.98 (21.80) µg/dL, respectively, and the differences were statistically significant (F = 52.08, p < 0.001). A multiple comparison test (Bonferroni) found a significantly lower serum zinc level in the MDR-TB group than in the NdPTB and healthy adult groups (p < 0.001). 
Point-biserial correlation showed a negative association between having MDR-TB and serum zinc level (r = -0.578; p < 0.001). Conclusion: The significantly low serum zinc level in MDR-TB patients suggests an impaired immune status. We recommend further exploration of low serum zinc level as a risk factor for MDR-TB.
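The point-biserial correlation used above is mathematically the Pearson correlation between a 0/1 group indicator (here: MDR-TB yes/no) and a continuous variable (serum zinc). The tiny data set below is invented purely to show the direction of the association, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation; with a 0/1 x this is the point-biserial r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

mdr_tb = [1, 1, 1, 0, 0, 0]                    # 1 = MDR-TB patient
zinc   = [64.0, 66.0, 65.0, 86.0, 89.0, 88.0]  # ug/dL, hypothetical
r = pearson_r(mdr_tb, zinc)
print(round(r, 3))  # strongly negative: MDR-TB goes with lower zinc
```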

Keywords: Bangladesh, immune status, multidrug-resistant tuberculosis, serum zinc

Procedia PDF Downloads 574
450 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to predict the expected noise emissions of those projects, and different assessment methods can be used. In recent years, we had the opportunity to collaborate in noise assessment procedures in which assessments by different laboratories were performed simultaneously, and we identified significant differences between the laboratories' results in Slovenia. Although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on methods for predictive noise modelling of planned projects. We analyzed the input data, methods, and results of predictive noise models for two planned industrial projects, each prepared independently by two laboratories. We also analyzed the data, methods, and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of surrounding existing noise sources, but over varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, considerably exceeding the acceptable uncertainty of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative modelling for the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the modelling results. In both cases of collaborative modelling for the existing motorway and railway, the results of the different laboratories were comparable.
Differences in noise modelling results between laboratories were below 5 dBA, within the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using the formulae of the international standard SIST ISO 9613-2:1997 is not, on its own, an appropriate method to predict noise emissions of planned projects, since due to the complexity of the procedure the formulae are not applied strictly; 2) Noise measurements are important tools to minimize noise assessment errors for planned projects and, in the case of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) National guidelines should be issued on the appropriate data, methods, noise source digitization, validation of the acoustic model, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
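To give a sense of the ISO 9613-2 calculation discussed above, a minimal sketch of its geometric divergence term is shown below. This is only the spherical-spreading term A_div referenced to 1 m; the standard's other attenuation terms (atmospheric absorption, ground effect, barriers) are deliberately omitted, and the directivity correction is assumed to be zero:

```python
from math import log10

def geometric_divergence(distance_m, d0=1.0):
    """A_div of ISO 9613-2: attenuation from spherical spreading of a
    point source, referenced to d0 = 1 m.  Other terms of the standard
    (atmospheric absorption, ground, barriers) are omitted here."""
    return 20 * log10(distance_m / d0) + 11

def receiver_level(lw_db, distance_m):
    # Lp = Lw - A_div  (directivity correction assumed 0)
    return lw_db - geometric_divergence(distance_m)

# a 100 dB sound power source heard at 10 m:
print(receiver_level(100.0, 10.0))  # 100 - (20 + 11) = 69.0
```

Even this single term shows how sensitive predicted levels are to source digitization: a factor-of-two error in source distance shifts the result by about 6 dB, comparable to the interlaboratory differences reported.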

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 225
449 A Study on the Relationship Between Adult Videogaming and Wellbeing, Health, and Labor Supply

Authors: William Marquis, Fang Dong

Abstract:

There has been growing concern in recent years over the economic and social effects of adult video gaming. It has been estimated that close to three billion people played video games during the COVID-19 pandemic, and there is evidence that this form of entertainment is here to stay. Many people are concerned that this growing use of time could crowd out time spent with family and friends, on sports, and on other social activities that build community. For example, recent studies of children suggest that playing video games crowds out time that could be spent on homework, watching TV, or other social activities. Similar studies of adults have shown that video gaming is negatively associated with earnings, time spent at work, and socializing with others. The primary objective of this paper is to examine how the time adults spend on video gaming could displace time they could spend working and on activities that enhance their health and well-being. We use data from the American Time Use Survey (ATUS), maintained by the Bureau of Labor Statistics, to analyze the effects of time-use decisions on three measures of well-being. We pool the ATUS Well-being Module for the years 2010, 2012, 2013, and 2021, along with the ATUS Activity and Who files for those years. This pooled data set provides three broad measures of well-being: health, life satisfaction, and emotional well-being. Seven variants of each are used as dependent variables in different multivariate regressions. We add to the existing literature in the following ways. First, we investigate whether the time adults spend on video gaming crowds out time spent working or in social activities that promote health and life satisfaction.
Second, we investigate the relationship between adult gaming and emotional well-being, also known as negative or positive affect, a factor related to depression, health, and labor market productivity. The results of this study suggest that the time adult gamers spend on video gaming has no effect on their labor supply, a negligible effect on their time spent socializing and studying, and mixed effects on their emotional well-being, such as increasing feelings of pain and reducing feelings of happiness and stress.
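The regressions described above are multivariate, but their core mechanics reduce, in the single-predictor case, to ordinary least squares on two columns. A rough sketch with hypothetical numbers (not ATUS data) shows the estimator:

```python
def ols_simple(x, y):
    """Single-predictor OLS: slope = cov(x, y) / var(x), a degenerate
    case of the multivariate regressions described in the abstract."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var = sum((a - mx) ** 2 for a in x) / n
    slope = cov / var
    return slope, my - slope * mx  # (slope, intercept)

# hypothetical data: daily gaming hours vs. a well-being score
hours = [0.0, 1.0, 2.0, 3.0]
score = [5.0, 4.0, 3.0, 2.0]
slope, intercept = ols_simple(hours, score)
print(slope, intercept)  # -1.0 5.0
```

In the actual study the dependent variables are the seven well-being variants and the regressors include controls, but the interpretation of each coefficient is the same marginal-association reading as this slope.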

Keywords: online gaming, health, social capital, emotional wellbeing

Procedia PDF Downloads 32
448 Multilevel Modelling of Modern Contraceptive Use in Nigeria: Analysis of the 2013 NDHS

Authors: Akiode Ayobami, Akiode Akinsewa, Odeku Mojisola, Salako Busola, Odutolu Omobola, Nuhu Khadija

Abstract:

Purpose: Evidence exists that family planning use can contribute to reductions in infant and maternal mortality in any country. Despite these benefits, contraceptive use in Nigeria remains very low: only 10% among married women. Understanding the factors that predict contraceptive use is very important in order to improve the situation. In this paper, we analysed data from the 2013 Nigerian Demographic and Health Survey (NDHS) to better understand the predictors of contraceptive use in Nigeria. The use of logistic regression and other traditional models in this type of situation is not appropriate, as they do not account for the influence of social structure, brought about by the hierarchical nature of the data, on the response variable. We therefore used multilevel modelling to explore the determinants of contraceptive use, in order to account for the significant variation in modern contraceptive use by socio-demographic and other proximate variables across the different Nigerian states. Method: The data have a two-level hierarchical structure. We considered 26,403 married women of reproductive age at level 1, nested within the 36 states and the Federal Capital Territory, Abuja, at level 2. We modelled modern contraceptive use against demographic variables, being told about family planning at a health facility, having heard of family planning on TV, in a magazine, or on the radio, and the husband's desire for more children, nested within state. Results: Our results showed that the independent variables in the model were significant predictors of modern contraceptive use. The estimated variance components for the null, random intercept, and random slope models were significant (p=0.00), indicating that the variation in contraceptive use across the Nigerian states is significant and needs to be accounted for in order to accurately determine the predictors of contraceptive use; hence the data are best fitted by the multilevel model.
Only being told about family planning at the health facility and religion had significant random effects, implying that their predictive power for contraceptive use varies across the states. Conclusion and Recommendation: The results showed that the provision of family planning information at the health facility, and religion, need to be considered when programming to improve contraceptive use at the state level.
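The significance of the state-level variance component is usually summarized through the intraclass correlation. For a two-level random-intercept logistic model of the kind described here, the level-1 residual variance is fixed at π²/3 on the latent scale, so the ICC follows directly from the estimated state variance. A minimal sketch, with a hypothetical variance component (the abstract does not report one):

```python
from math import pi

def logistic_icc(sigma2_state):
    """Intraclass correlation (variance partition coefficient) for a
    two-level random-intercept logistic model: level-1 residual
    variance is pi^2 / 3 on the latent logit scale."""
    level1 = pi ** 2 / 3
    return sigma2_state / (sigma2_state + level1)

# hypothetical state-level variance component of 0.8
print(round(logistic_icc(0.8), 3))  # 0.196
```

An ICC of this size would mean roughly a fifth of the latent variation in contraceptive use lies between states, which is why a single-level logistic regression would misstate the predictors' standard errors.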

Keywords: multilevel modelling, family planning, predictors, Nigeria

Procedia PDF Downloads 407
447 The Causes and Effects of Poor Household Sanitation: Case Study of Kansanga Parish

Authors: Rosine Angelique Uwacu

Abstract:

Poor household sanitation is rife in Uganda, especially in Kampala. This study was carried out with the goal of establishing the main causes and effects of poor household sanitation in Kansanga Parish. The study sought to: identify the various ways through which wastes are generated and disposed of in Kansanga Parish; identify the different hygiene procedures and behaviours of waste handling in Kansanga Parish; assess the health effects of poor household sanitation; and suggest appropriate measures for addressing cases of lack of hygiene in Kansanga Parish. The study used a survey method in which cluster sampling was employed, because there was no population register or sufficient information, and the geographic distribution of individuals is widely scattered. Data were collected through interviews accompanied by observation and questionnaires. The study involved a sample of 100 households. The study revealed that some households use wheeled-bin collection, skip hire, and roll-on/roll-off containers, while others take their wastes to refuse collection vehicles. Surprisingly, the majority of the households reported that they use polythene bags ('kavera') and at times plastic sacks to dispose of their wastes, which are dumped in drainage channels, dustbins, and other illegal dumping sites. The study showed that washing hands with small jerrycans after using the toilet had been adopted by most households, as there were few or no other alternatives. The study revealed that the common health effects resulting from poor household sanitation in Kansanga Parish are disease outbreaks such as malaria, typhoid, and diarrhea.
Finally, the study gave a number of recommendations for achieving and maintaining adequate household sanitation in Kansanga Parish: sensitization of community members by their leaders, such as local councilors, could help to improve the situation; community sanitation days could be established for people to collectively and voluntarily carry out good sanitation practices such as digging trenches, burning garbage, and proper waste management and disposal; and authorities such as the Kampala Capital City Authority should distribute dumping containers or allocate dumping sites where people can dispose of their wastes, preferably at a minimal cost, for proper management.

Keywords: household sanitation, Kansanga Parish, Uganda, waste

Procedia PDF Downloads 177
446 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative to the traditional Fiber Reinforced Polymer (FRP) composite: it has good mechanical performance and better temperature stability, and it also makes it easier to meet the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multiscale approaches have emerged as among the most suitable methods for their simulation. These approaches aim to incorporate information on microscale constituent behavior, mesoscale behavior, and macroscale structural response within a unified model that enables rapid simulation of structures. Computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be introduced implicitly into the macroscale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine-scale information is introduced explicitly into the macroscale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches.
Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study is presented of recent results and advances, and of the scientific obstacles still to be overcome to achieve effective simulation of textile reinforced concrete in civil engineering. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
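The analytical homogenization mentioned above has, in its simplest form, closed-form bounds. As a rough sketch (with hypothetical stiffness and volume-fraction values, not data from the review), the Voigt (iso-strain) and Reuss (iso-stress) bounds bracket the homogenized modulus of a two-phase composite such as TRC:

```python
def voigt_modulus(vf, ef, em):
    """Upper (Voigt, iso-strain) bound on the homogenized modulus of a
    two-phase composite: the rule of mixtures."""
    return vf * ef + (1 - vf) * em

def reuss_modulus(vf, ef, em):
    """Lower (Reuss, iso-stress) bound: the inverse rule of mixtures."""
    return 1.0 / (vf / ef + (1 - vf) / em)

# hypothetical values: 2% textile (70 GPa) in a cementitious matrix (25 GPa)
vf, e_textile, e_matrix = 0.02, 70.0, 25.0
print(round(voigt_modulus(vf, e_textile, e_matrix), 2))  # 25.9
print(round(reuss_modulus(vf, e_textile, e_matrix), 2))  # 25.33
```

Full analytical or computational homogenization over a representative volume element refines these bounds, but the gap between them already indicates how much the mesostructure (textile geometry, bond) matters for TRC.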

Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 98