Search results for: cost transparency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6571

571 Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation

Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans

Abstract:

The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalisation has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the first 24 hours of an average individual patient's stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient. We have assumed direct transport via the shortest possible straight line from country of origin to the UK and have not accounted for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, there are a total of 52 distinct, largely plastic, disposable products which would reasonably be required in the first 24 hours after admission. Each product type has only been counted once to account for multiple items being shipped as one package. Travel distances from origin were summed to give the combined total distance for all 52 products. The minimum possible total distance travelled from country of origin to the UK for all types of product was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg. 
The CO2 produced by shipping these items by air freight would equate to 30.1 kg, whereas doing the same by sea would produce 0.2 kg of CO2. Extrapolating these results to the 211,932 UK annual ICU admissions (2018-2019), even with the underestimates of distance and weight in our assumptions, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service, and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver the product to the UK. This information must be made available to purchasers in order to give a fuller picture of life cycle impact and allow for informed decision making in this regard.
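The headline distance figures above can be checked with simple arithmetic. The sketch below reproduces them from the summed total; the Earth circumference and Earth-Moon distance are standard reference values, not taken from the abstract.

```python
# Back-of-envelope check of the distance figures reported above.
TOTAL_KM = 273_353              # summed minimum factory-to-UK distance, all products
N_PRODUCTS = 52
EARTH_CIRCUMFERENCE_KM = 40_075  # standard equatorial value
EARTH_MOON_KM = 384_400          # standard mean value

mean_km = TOTAL_KM / N_PRODUCTS
laps = TOTAL_KM / EARTH_CIRCUMFERENCE_KM
to_moon = TOTAL_KM / EARTH_MOON_KM

print(f"mean distance per product: {mean_km:,.0f} km")
print(f"circumnavigations of the globe: {laps:.2f}")
print(f"fraction of the way to the Moon: {to_moon:.0%}")
```

The mean of roughly 5,256 km, 6.82 circumnavigations, and 71% of the Earth-Moon distance all follow directly from the 273,353 km total.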

Keywords: CO2, intensive care, plastic, transport

Procedia PDF Downloads 178
570 The Financial Impact of Covid-19 on the Hospitality Industry in New Zealand

Authors: Kay Fielden, Eelin Tan, Lan Nguyen

Abstract:

In this research project, data was gathered at a Covid-19 conference held in June 2021 from industry leaders who discussed the impact of the global pandemic on the status of the New Zealand hospitality industry. Panel discussions on financials, human resources, health and safety, and recovery were conducted. The themes explored for the finance panel were customer demographics, hospitality sectors, financial practices, government impact, and cost of compliance. The aim was to see how the hospitality industry has responded to the global pandemic and the steps that have been taken for the industry to recover or sustain their business. The main research question for this qualitative study is: what are the factors that have impacted finances for the hospitality industry in New Zealand due to Covid-19? For financials, literature has been gathered to study global effects, and this is being compared with the data gathered from the discussion panel through the lens of resilience theory. Resilience theory applied to the hospitality industry suggests that the challenges imposed by Covid-19 have been the catalyst for government initiatives, technical innovation, engaging local communities, and boosting confidence. The transformations arising from these ground shifts have been a move towards sustainability, wellbeing, more awareness of climate change, and community engagement. Initial findings suggest that there has been a shift in customer base that has prompted regional accommodation providers to realign offers and to become more flexible to attract and maintain this realigned customer base. Dynamic pricing structures have been required to meet changing customer demographics. Flexible staffing arrangements include sharing staff between different accommodation providers, owners with multiple properties adopting different staffing arrangements, maintaining a good working relationship with the bank, and conserving cash. 
Uncertain times necessitate changing revenue strategies to cope with external factors. Financial support offered by the government has cushioned the financial downturn for many in the hospitality industry, and managed isolation and quarantine (MIQ) arrangements have offered immediate financial relief for those hotels involved. However, there is concern over the long-term effects. Compliance with mandated health and safety requirements has meant that the hospitality industry has streamlined its approach to meeting those requirements and has invested in customer relations to keep paying customers informed of the health measures in place. Initial findings from this study lie within the resilience theory framework and are consistent with findings from the literature.

Keywords: global pandemic, hospitality industry, New Zealand, resilience

Procedia PDF Downloads 101
569 Disclosure on Adherence of the King Code's Audit Committee Guidance: Cluster Analyses to Determine Strengths and Weaknesses

Authors: Philna Coetzee, Clara Msiza

Abstract:

In modern society, audit committees are seen as the custodians of accountability and the conscience of management and the board. But who holds the audit committee accountable for its actions or non-actions, and how do we know what it is supposed to be doing and what it is actually doing? The purpose of this article is to provide greater insight into the latter part of this problem, namely, to determine what best practice for audit committees is and what the disclosed realities are. In countries where governance is well established, the roles and responsibilities of the audit committee are mostly clearly guided by legislation and/or guidance documents, with countries increasingly providing guidance on this topic. With the high cost involved in adhering to governance guidelines, the public (for public organisations) and shareholders (for private organisations) expect to see the value of their ‘investment’. For audit committees, the dividends on the investment should reflect in less fraudulent activity, less corruption, higher efficiency and effectiveness, improved social and environmental impact, and increased profits, to name a few. If this is not the case (which is reflected in the number of fraudulent activities in both the private and the public sector), stakeholders have the right to ask: where was the audit committee? Therefore, the objective of this article is to contribute to the body of knowledge by comparing the adherence of audit committees to best practice guidelines, as stipulated in the King Report, across publicly listed companies, national and provincial government departments, state-owned enterprises and local municipalities. After constructs were formed, based on the literature, factor analyses were conducted to reduce the number of variables in each construct. 
Thereafter, cluster analysis, an exploratory technique that classifies a set of objects in such a way that more similar objects are grouped together, was conducted. The SPSS TwoStep Clustering Component was used, being capable of handling both continuous and categorical variables. In the first step, a pre-clustering procedure clusters the objects into small sub-clusters, after which it clusters these sub-clusters into the desired number of clusters. The cluster analyses were conducted for each construct, and the outcome measure, namely the audit opinion as listed in the external audit report, was included. Analysing 228 organisations' information, the results indicate that there is a clear distinction between the four spheres of business included in the analyses, indicating certain strengths and certain weaknesses within each sphere. The results may provide the overseers of audit committees insight into where a specific sector's strengths and weaknesses lie. Audit committee chairs will be able to improve the areas where their audit committee is lagging behind. The strengthening of audit committees should result in an improvement in the accountability of boards, leading to less fraud and corruption.
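The grouping idea behind the analysis can be sketched in a few lines. This is a toy illustration, not the authors' data or the SPSS TwoStep algorithm: each row stands for an organisation described by three standardized disclosure scores, and a minimal k-means pass separates a hypothetical "strong" group from a "weak" one (TwoStep additionally handles categorical variables, which this sketch omits).

```python
import numpy as np

# Two synthetic groups of organisations with well-separated disclosure scores.
rng = np.random.default_rng(0)
strong_group = rng.normal(0.0, 0.3, size=(20, 3))  # hypothetical high adherence
weak_group = rng.normal(2.0, 0.3, size=(20, 3))    # hypothetical low adherence
X = np.vstack([strong_group, weak_group])

def kmeans_two(X, iters=50):
    # Deterministic init for the sketch: one center at each corner of the data's
    # bounding box, so the two clusters start far apart.
    centers = np.array([X.min(axis=0), X.max(axis=0)])
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)            # assign each object to nearest center
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)  # recompute cluster means
    return labels

labels = kmeans_two(X)
print(labels)
```

With separation this clear, the two synthetic spheres fall cleanly into two clusters, which is the kind of distinction the study reports between its four spheres of business.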

Keywords: audit committee disclosure, cluster analyses, governance best practices, strengths and weaknesses

Procedia PDF Downloads 167
568 Providing Health Promotion Information by Digital Animation to International Visitors in Japan: A Factorial Design View of Nurses

Authors: Mariko Nishikawa, Masaaki Yamanaka, Ayami Kondo

Abstract:

Background: International visitors to Japan are at risk of travel-related illness or injury that could result in hospitalization in a country where the language and customs are unique. Over twelve million international visitors came to Japan in 2015, and more are expected leading up to the Tokyo Olympics. One aspect of this is the potentially greater demand on healthcare services by foreign visitors. Nurses who take care of them have anxieties and concerns about their knowledge of the Japanese health system. Objectives: An effective distribution of travel-health information is vital for facilitating care for international visitors. Our research investigates whether a four-minute digital animation (Mari Info Japan), designed and developed by the authors and applied in a survey of 513 nurses who take care of foreigners daily, could clarify travel-health procedures and reduce anxieties while making learning enjoyable. Methodology: Respondents to a survey were divided into two groups. The intervention group watched Mari Info Japan. The control group read a standard guidebook. The participants were requested to fill in a two-page questionnaire called Mari Meter-X, the STAI-Y in English, and mark a face scale, before and after the interventions. The questions dealt with knowledge of health promotion, the Japanese healthcare system, cultural concerns, anxieties, and attitudes in Japan. Data were collected from an intervention group (n=83) and a control group (n=83) of nurses in a hospital for foreigners in Japan from February to March 2016. We analyzed the data using Text Mining Studio for open-ended questions and JMP for statistical significance. Results: We found that the intervention group displayed more confidence and less anxiety about taking care of foreign patients compared to the control group. The intervention group indicated greater comfort after watching the animation. 
However, both groups were most likely to be concerned about language, the cost of medical expenses, informed consent, and choice of hospital. Conclusions: From the viewpoint of nurses, providing travel-health information by digital animation to international visitors to Japan was more effective than traditional methods, as it helped them be better prepared to treat travel-related diseases and injuries among international visitors. This study was registered under number UMIN000020867. Funding: Grant-in-Aid for Challenging Exploratory Research 2010-2012 & 2014-16, Japanese Government.

Keywords: digital animation, health promotion, international visitor, Japan, nurse

Procedia PDF Downloads 307
567 Biodegradation Ability of Polycyclic Aromatic Hydrocarbon (PAHs) Degrading Bacillus cereus Strain JMG-01 Isolated from PAHs Contaminated Soil

Authors: Momita Das, Sofia Banu, Jibon Kotoky

Abstract:

Environmental contamination of natural resources with persistent organic pollutants is of great worldwide concern. Polycyclic aromatic hydrocarbons (PAHs) are among the organic pollutants released due to various anthropogenic activities. Due to their toxic, carcinogenic and mutagenic properties, PAHs are of environmental and human concern. Presently, bioremediation has evolved as the most promising biotechnology for the cleanup of such contaminants because it is economical and cost-effective. In the present study, the distribution of the 16 USEPA priority PAHs was determined in soil samples collected from fifteen different sites of Guwahati City, the gateway of the North East Region of India. The total concentrations of the 16 PAHs (Σ16 PAHs) ranged from 42.7-742.3 µg/g. The highest total PAH concentrations were found in the industrial areas (742.3 µg/g and 628 µg/g). It is noted that among all the PAHs, naphthalene, acenaphthylene, anthracene, fluoranthene, chrysene and benzo(a)pyrene were the most abundant and present at the highest concentrations. Since microbial activity is deemed the most influential and significant cause of PAH removal, twenty-three bacteria were further isolated from the most contaminated sites using an enrichment process. These strains were acclimatized to utilize naphthalene and anthracene, each at 100 µg/g concentration, as the sole carbon source. Among them, one Gram-positive strain (JMG-01) was selected, and its biodegradation ability and the initial catabolic genes of PAH degradation were investigated. Based on 16S rDNA analysis, the isolate was identified as Bacillus cereus strain JMG-01. Topographic images obtained using Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM) at scheduled time intervals of 7, 14 and 21 days determined the variation in cell morphology during the period of degradation. 
AFM and SEM micrographs of the biomass showed highly filamentous growth leading to aggregation of cells in the form of a biofilm over the incubation period. The percentage degradation analysis using gas chromatography-mass spectrometry (GC-MS) suggested that more than 95% of the PAHs degraded when the concentration was at 500 µg/g. Naphthalene, 2-methylnaphthalene, 4-propylbenzaldehyde, 1,2-benzenedicarboxylic acid and benzeneacetic acid were the major metabolites produced after degradation. Moreover, PCR experiments with specific primers for the catabolic genes ndoB and catA suggested that JMG-01 possesses genes for PAH degradation. Thus, the study concludes that Bacillus cereus strain JMG-01 has efficient biodegrading ability and can trigger the clean-up of PAH-contaminated soil.

Keywords: AFM, Bacillus cereus strain JMG-01, degradation, polycyclic aromatic hydrocarbon, SEM

Procedia PDF Downloads 276
566 Lightweight Sheet Molding Compound Composites by Coating Glass Fiber with Cellulose Nanocrystals

Authors: Amir Asadi, Karim Habib, Robert J. Moon, Kyriaki Kalaitzidou

Abstract:

There has been considerable interest in cellulose nanomaterials (CN) as reinforcement for polymers and polymer composites due to their high specific modulus and strength, low density and toxicity, and accessible hydroxyl side groups that can be readily chemically modified. The focus of this study is making lightweight composites for better fuel efficiency and lower CO2 emissions in the auto industry, with no compromise on mechanical performance, using a scalable technique that can be easily integrated into sheet molding compound (SMC) manufacturing lines. Lightweighting will be achieved by replacing part of the heavier components, i.e., glass fibers (GF), with a small amount of cellulose nanocrystals (CNC) in short GF/epoxy composites made using SMC. CNC will be introduced as a coating on the GF rovings prior to their use in the SMC line. The employed coating method is similar to the fiber sizing technique commonly used, and thus it can be easily scaled and integrated into industrial SMC lines. This is an alternative route to most techniques, which involve dispersing CN in the polymer matrix and in which nanomaterial agglomeration limits the capability for scaling up to industrial production. We have demonstrated that incorporating CNC as a coating on the GF surface by immersing the GF in CNC aqueous suspensions, a simple and scalable technique, increases the interfacial shear strength (IFSS) by ~69% compared to composites produced with uncoated GF, suggesting an enhancement of stress transfer across the GF/matrix interface. As a result of the IFSS enhancement, incorporation of 0.17 wt% CNC in the composite results in increases of ~10% in both elastic modulus and tensile strength, and 40% and 43% in flexural modulus and strength, respectively. We have also determined that dispersing 1.4 and 2 wt% CNC in the epoxy matrix of short GF/epoxy SMC composites by sonication allows removing 10 wt% GF with no penalty on tensile and flexural properties, leading to 7.5% lighter composites. 
Although sonication is a scalable technique, it is not as simple and inexpensive as coating the GF by passing it through an aqueous suspension of CNC. In this study, the above findings are integrated to 1) investigate the effect of CNC content on mechanical properties by passing the GF rovings through CNC aqueous suspensions with various concentrations (0-5%) and 2) determine the optimum ratio of added CNC to removed GF to achieve the maximum possible weight reduction with no penalty on the mechanical performance of the SMC composites. The results of this study are of industrial relevance, providing a path toward producing high-volume lightweight and mechanically enhanced SMC composites using cellulose nanomaterials.

Keywords: cellulose nanocrystals, light weight polymer-matrix composites, mechanical properties, sheet molding compound (SMC)

Procedia PDF Downloads 225
565 Preparation and Chemical Characterization of Eco-Friendly Activated Carbon Produced from Apricot Stones

Authors: Sabolč Pap, Srđana Kolaković, Jelena Radonić, Ivana Mihajlović, Dragan Adamović, Mirjana Vojinović Miloradov, Maja Turk Sekulić

Abstract:

Activated carbon is one of the most used and tested adsorbents for the removal of industrial organic compounds, heavy metals, pharmaceuticals and dyes. Different types of lignocellulosic materials have been used as potential precursors in the production of low-cost activated carbon. There are two different processes for the preparation and production of activated carbon: physical and chemical. Chemical activation involves impregnating the lignocellulosic raw materials with chemical agents (H3PO4, HNO3, H2SO4 and NaOH). After impregnation, the materials are carbonized and washed to eliminate the residues. Chemical activation, which was used in this study, has two important advantages compared to physical activation. The first advantage is the lower temperature at which the process is conducted, and the second is that the yield (mass efficiency of activation) of chemical activation tends to be greater. Preparation of the activated carbon included the following steps: apricot stones were crushed in a mill and washed with distilled water. The fruit stones were then impregnated with a solution of 50% H3PO4. After impregnation, the solution was filtered to remove the residual acid. Subsequently, the impregnated samples were air dried at room temperature. The samples were placed in a furnace and heated (10 °C/min) to the final carbonization temperature of 500 °C for 2 h without the use of nitrogen. After cooling, the adsorbent was washed with distilled water to achieve acid-free conditions, and its pH was monitored until the filtrate pH value exceeded 4. The chemical characteristics of the prepared activated carbon were analyzed by FTIR spectroscopy. FTIR spectra were recorded with a Thermo Nicolet Nexus 670 spectrometer over the 400 to 4000 cm-1 wavenumber range, identifying the functional groups on the surface of the activated carbon. 
The FTIR spectrum of the adsorbent showed a broad band at 3405.91 cm-1 due to O–H stretching vibration and a peak at 489.00 cm-1 due to O–H bending vibration. Peaks in the range of 3700 to 3200 cm−1 represent overlapping stretching vibrations of O–H and N–H groups. The distinct absorption peaks at 2919.86 cm−1 and 2848.24 cm−1 could be assigned to C–H stretching vibrations of –CH2 and –CH3 functional groups. The absorption peak at 1566.38 cm−1 could be characterized as primary and secondary amide bands. The sharp band within 1164.76-987.86 cm−1 is attributed to C–O groups, which confirms the lignin structure of the activated carbon. The present study has shown that activated carbon prepared from apricot stones has functional groups on its surface, which can positively affect the adsorption characteristics of this material.

Keywords: activated carbon, FTIR, H3PO4, lignocellulosic raw materials

Procedia PDF Downloads 250
564 Industrial Hemp Agronomy and Fibre Value Chain in Pakistan: Current Progress, Challenges, and Prospects

Authors: Saddam Hussain, Ghadeer Mohsen Albadrani

Abstract:

Pakistan is one of the countries most vulnerable to climate change. In a country where 23% of GDP relies on agriculture, this is a serious cause for concern. Introducing industrial hemp in Pakistan can help build climate resilience in the country's agricultural sector, as hemp has recently emerged globally as a sustainable, eco-friendly, resource-efficient, and climate-resilient crop. Hemp has the potential to absorb huge amounts of CO₂, nourish the soil, and be used to create various biodegradable and eco-friendly products. Hemp is twice as effective as trees at absorbing and locking up carbon, with 1 hectare (2.5 acres) of hemp reckoned to absorb 8 to 22 tonnes of CO₂ a year, more than any woodland. Along with its high carbon-sequestration ability, it produces high biomass and can be successfully grown as a cover crop. Hemp can grow in almost all soil conditions and does not require pesticides. It is fast-growing and needs only 120 days to be ready for harvest. Compared with cotton, hemp requires 50% less water to grow and can produce a three times higher fiber yield with a lower ecological footprint. Recently, the Government of Pakistan has allowed the cultivation of industrial hemp for industrial and medicinal purposes, making it possible for hemp to be reinserted into the country's economy. Pakistan's agro-climatic and edaphic conditions are well suited to producing industrial hemp, and its cultivation can bring economic benefits to the country. Pakistan can enter global markets as a new exporter of hemp products. The production of hemp in Pakistan can create significant opportunities for the workforce, especially for farmers participating in hemp markets. The minimum production cost of hemp makes it affordable to smallholder farmers, especially those who need their cropping system to be as sustainable as possible. Dr. Saddam Hussain is leading the first pilot project on industrial hemp in Pakistan. 
In the past three years, he has secured high-impact research grants on industrial hemp as Principal Investigator. He has already screened non-toxic hemp genotypes, tested the adaptability of exotic material in various agroecological conditions, formulated the production agronomy, and successfully developed the complete value chain. He has developed prototypes (fabric, denim, knitwear) using hemp fibre in collaboration with industrial partners and has optimized indigenous fibre processing techniques. In this lecture, Dr. Hussain will talk about hemp agronomy and its complete fibre value chain. He will discuss the current progress and highlight the major challenges and future research directions in hemp research.

Keywords: industrial hemp, agricultural sustainability, agronomic evaluation, hemp value chain

Procedia PDF Downloads 81
563 Spare Part Carbon Footprint Reduction with Reman Applications

Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş

Abstract:

Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and help to introduce products of almost the same quality that are environment-friendly and lower cost. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on Life Cycle Assessment framed by ISO 14040 principles. The study aimed to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and DPF (diesel particulate filter) parts, respectively. Since the aim of Ford Motor Company and Ford OTOSAN is to achieve net zero based on Science-Based Targets (SBT) and the European Union's Green Deal, which sets out to make the EU climate neutral by 2050, the effects of reman applications were researched. First, remanufacturing articles available in the literature were searched, based on the yearly high volume of spare parts sold. From the review of their material composition and the emissions released during the original production and remanufacturing phases, a base part was selected as a reference. Then, the data of the selected base part are used to make an approximate estimate of the carbon footprint reduction of the corresponding part used at Ford OTOSAN. The estimation model is based on the weight and material composition of the reman activity in the referenced paper. As a result of this study, it was seen that remanufacturing applications are technically and environmentally feasible, since they have significant effects on reducing the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the total number of targeted products in yearly volume have been completed to a large extent. 
Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN by 2050, if remanufacturing applications are preferred instead of current production methods, it is possible to reduce a significant amount of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles. Besides, remanufacturing helps to reduce the waste stream and causes less pollution than making products from raw materials, by reusing automotive components.
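The estimation model described above, footprint scaling with part weight and material composition, can be sketched as follows. All emission factors, the 30% reman fraction, and the alternator composition are hypothetical placeholders for illustration, not Ford OTOSAN data or ISO 14040 values.

```python
# Illustrative footprint estimate: new production emits per-material factors times
# mass; a reman part is assumed to incur only a fraction of that. All numbers here
# are hypothetical placeholders.
EMISSION_FACTOR = {"steel": 1.9, "aluminium": 8.2, "copper": 3.8}  # kg CO2e per kg (assumed)
REMAN_FRACTION = 0.30  # assumed share of new-production emissions for a reman part

def part_footprint(composition, reman=False):
    """composition: {material: mass_kg}. Returns estimated kg CO2e."""
    total = sum(EMISSION_FACTOR[m] * kg for m, kg in composition.items())
    return total * REMAN_FRACTION if reman else total

alternator = {"steel": 3.5, "aluminium": 1.2, "copper": 0.8}  # hypothetical composition
new_co2 = part_footprint(alternator)
reman_co2 = part_footprint(alternator, reman=True)
print(f"new: {new_co2:.2f} kg CO2e, reman: {reman_co2:.2f} kg CO2e, "
      f"saving: {new_co2 - reman_co2:.2f} kg CO2e")
```

Multiplying such per-part savings by yearly sales volumes gives the fleet-level GHG reduction the study extrapolates.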

Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability

Procedia PDF Downloads 82
562 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites

Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan

Abstract:

All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution and causes a change in momentum, which can manifest in spinning and non-spinning satellites in different manners. This problem can cause orbital decay in satellites, which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used. This is because each component has different power requirements, and they are used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way in which it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel. When the panels are opened, that particular side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators, depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the onboard computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees. 
For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. Another case is when one of the corners of the CubeSat faces the Sun, so that more than one side has a considerable amount of sunlight incident on it. The code will then determine the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design. Beyond this, it also has advantages in terms of reliability and cost. One passive approach is to make the whole chassis act as a heat sink. For this, we can make the entire chassis out of heat pipes and connect the heat source to the chassis with a thermal strap that transfers the heat to it.
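The panel-opening decision described above can be sketched as a small function. This is a hypothetical illustration, not the flight code: given a unit sun vector in the body frame, sun-facing panels open fully to generate power, while in an overheating condition the shaded panels open so their fins radiate to cold space. Panel names, angles, and thresholds are assumptions.

```python
# Hypothetical panel-opening logic: four side panels with body-frame normals.
PANEL_NORMALS = {"+X": (1, 0, 0), "-X": (-1, 0, 0),
                 "+Y": (0, 1, 0), "-Y": (0, -1, 0)}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def opening_angles(sun_vec, overheating):
    """Return the commanded opening angle (degrees) for each side panel."""
    angles = {}
    for name, normal in PANEL_NORMALS.items():
        cos_incidence = dot(normal, sun_vec)  # > 0 means the panel faces the Sun
        if overheating:
            angles[name] = 90 if cos_incidence <= 0 else 0   # radiate from shaded sides
        else:
            angles[name] = 180 if cos_incidence > 0 else 0   # point cells at the Sun
    return angles

print(opening_angles((1, 0, 0), overheating=False))   # Sun on +X: open +X fully
print(opening_angles((0.7, 0.7, 0), overheating=True))  # Sun on a corner: two sides lit
```

The corner case in the second call shows why per-panel angles matter: two panels see the Sun at once, so the two shaded panels are opened to radiate.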

Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite

Procedia PDF Downloads 100
561 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China

Authors: Feng Yue, Fei Dai

Abstract:

With the rapid development of the urbanization process in China, environmental protection is under severe pressure, so analyzing and optimizing the landscape pattern is an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland and water decreased by 82%, 47%, 36%, 25% and 11%, respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI) and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6 and 29.2%, respectively, all of which indicate that the NP, the degree of aggregation and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI and LSI increased, but the distribution statistics core area (CORE_AM) decreased. For farmland, water, sparse grassland and bare land, CPLAND, TCA, DIVISION, patch density (PD) and LSI decreased, yet patch fragmentation and CORE_AM increased. 
On patch level, patch area, Patch perimeter, Shape index of water, farmland and bare land continued to decline. The three indexes of forest patches increased overall, sparse grassland decreased as a whole, and construction land increased. It is obvious that the urbanization greatly influenced the landscape evolution. Ecological diversity and landscape heterogeneity of ecological patches clearly dropped. The Habitat Quality Index continuously declined by 14%. Therefore, optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to the research on spatial layout of urbanization.
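As an illustration of two of the Fragstats landscape-level metrics reported above, the following minimal Python sketch computes Shannon's diversity index (SHDI) from per-class areas and a per-class aggregation index (AI); the input values are hypothetical, not the study's data.

```python
import math

def shannon_diversity(areas):
    """Shannon's diversity index (SHDI) from per-class patch areas."""
    total = sum(areas)
    props = [a / total for a in areas if a > 0]
    return -sum(p * math.log(p) for p in props)

def aggregation_index(like_adjacencies, max_adjacencies):
    """Aggregation index (AI, %) for one class: observed like adjacencies
    relative to the maximum possible for that class area."""
    return 100.0 * like_adjacencies / max_adjacencies
```

For example, `shannon_diversity([50, 30, 20])` gives the SHDI of a three-class mosaic; SHDI falls as one class (here, construction land) comes to dominate.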

Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture

Procedia PDF Downloads 165
560 Price Control: A Comprehensive Step to Control Corruption in the Society

Authors: Muhammad Zia Ullah Baig, Atiq Uz Zama

Abstract:

The motivation of the project is to help the governing body, as well as the common man, to monitor the rates of daily consumer products, track expenses, and control budgets with the help of a single SMS message or e-mail, and to manage the governing body through a task management system. The system will also be capable of detecting irregularities by the concerned department in handling complaints generated by customers, and will provide solutions to overcome problems. We are building a system that can manage the price control system of any country; we would be proud to offer this system free of cost to the Indian Government as well. The system can manage and control the government's price control department across the whole country. Price control departments operate in different cities under the City District Government, so the system can run in different cities with different SMS codes, and a decentralized database ensures the system's non-functional requirements (scalability, reliability, availability, security, and safety). A customer requests the official government price list using his or her city's SMS code (the price lists of all cities are also available on the website and the application), and the server returns the price list by SMS. If a product is not sold according to the price list, the customer generates a complaint through SMS or through the website or smartphone application; the complaint is registered in the complaint database and forwarded to the inspection department, and once the complaint is entertained, the inspection department sends the customer a message about its status. The inspection department physically checks sellers who do not follow the price list. A major issue for the system, however, is corruption: an inspection officer may take a bribe and mark the complaint as resolved (i.e., treat it as fake), in which case customers will stop using the system. The major challenge is therefore to distinguish fake from genuine complaints and to fight corruption within the department. Our strategy to counter corruption is to rank complaints: if the same type of complaint is generated repeatedly, it is ranked highly and the higher authority is also notified. The higher authority then reviews the complaint and its history, including the officers who resolved similar complaints in the past and the actions taken against them; these data support the decision-making process, and if a complaint was resolved because an officer took a bribe, the higher authority can take action against that officer. When the price of any good is decided, market and farmer representatives are also present, and the price is set by mutual understanding of both parties; the system facilitates this decision-making process by showing the price history of goods, the inflation rate, the available supply, the demand, and the gap between supply and demand.
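The complaint-ranking strategy described above can be sketched as a frequency count over repeated reports against the same seller and product; the field names and the escalation threshold below are hypothetical, purely to illustrate the idea.

```python
from collections import Counter

def rank_complaints(complaints, escalation_threshold=3):
    """Rank complaints by how often the same (seller, product) pair is
    reported; pairs reported at or above the threshold are escalated to
    the higher authority for review."""
    counts = Counter((c["seller"], c["product"]) for c in complaints)
    ranked = sorted(complaints,
                    key=lambda c: -counts[(c["seller"], c["product"])])
    escalated = [c for c in ranked
                 if counts[(c["seller"], c["product"])] >= escalation_threshold]
    return ranked, escalated
```

Repeated, independent complaints against the same seller are harder to fake at scale, which is the basis of the anti-corruption heuristic.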

Keywords: price control, goods, government, inspection, department, customer, employees

Procedia PDF Downloads 411
559 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and given the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value (NPV) as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin, and 724 wells from the Fort Worth Syncline. The data were first analyzed in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges for each basin were loaded into Palisade's @RISK software, and a log-normal distribution, typical of Barnett shale wells, was fitted to the dataset. A Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From this plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (at a 10% annual discount rate) and to determine which scenarios satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008 to 2015. A major finding of this study was that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of Barnett shale wells were not economic at any finding and development cost, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
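The probabilistic workflow above (log-normal fit, 1,000-iteration Monte Carlo, percentile extraction, NPV at a 10% discount rate) can be sketched in Python; the distribution parameters and cash flows below are placeholders, not the study's fitted values.

```python
import random
random.seed(42)  # reproducible draws for the sketch

def eur_percentiles(mu, sigma, n=1000):
    """Monte Carlo sample of EUR from a log-normal fit; returns P90/P50/P10
    using the oil-and-gas exceedance convention (P90 = value exceeded by
    90% of wells, so P90 < P50 < P10)."""
    draws = sorted(random.lognormvariate(mu, sigma) for _ in range(n))
    def pct(p):  # p = probability of exceedance
        return draws[int((1 - p) * n)]
    return pct(0.90), pct(0.50), pct(0.10)

def npv(cash_flows, rate=0.10):
    """Net present value with annual discounting; cash_flows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
```

A well's rescaled production profile would be converted into yearly cash flows (revenue at the assumed gas price minus costs) and passed to `npv`; the scenario clears the hurdle only if the discounted payback also arrives within 60 months.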

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery

Procedia PDF Downloads 302
558 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage, which follows an initial free-fall phase (where the microgravity effect is generated), using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and because of the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil reaches its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained through CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces, in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients during the simulation. Employing response surface methodology, a statistical approximation, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), design points (DP) are generated so that Cd and Fd can be estimated for each DP. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage can be determined without compromising the payload. The maximum Cd of the vehicle is 1.18, so its maximum drag is almost that generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be used for several missions, allowing repeatability of microgravity experiments.
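The terminal speed and the deceleration distance for each AoA can be sketched from a quadratic-drag force balance; the mass, reference area, and air density below are placeholder values, and the distance formula assumes deceleration from an initial speed above terminal under gravity plus drag alone.

```python
import math

def terminal_speed(mass, cd, area, rho=1.225, g=9.81):
    """Terminal speed where drag balances weight: m g = 0.5 rho v^2 Cd A."""
    return math.sqrt(2 * mass * g / (rho * cd * area))

def decel_distance(v0, v, mass, cd, area, rho=1.225, g=9.81):
    """Distance to slow from v0 to v (both above terminal speed) under
    gravity plus quadratic drag, integrating m v dv/dx = m g - 0.5 rho Cd A v^2:
    x = (vt^2 / 2g) * ln((v0^2 - vt^2) / (v^2 - vt^2))."""
    vt2 = 2 * mass * g / (rho * cd * area)
    return (vt2 / (2 * g)) * math.log((v0**2 - vt2) / (v**2 - vt2))
```

With the reported maximum Cd of 1.18 at high AoA, `terminal_speed` drops markedly relative to the 2° configuration, which is the mechanism the circular wing geometry exploits.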

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
557 Evaluating the Benefits of Intelligent Acoustic Technology in Classrooms: A Case Study

Authors: Megan Burfoot, Ali GhaffarianHoseini, Nicola Naismith, Amirhosein GhaffarianHoseini

Abstract:

Intelligent Acoustic Technology (IAT) is a novel architectural device used in buildings to automatically vary the acoustic conditions of a space. IAT is realized by integrating two components: Variable Acoustic Technology (VAT) and an intelligent system. The VAT passively alters the reverberation time (RT) by changing the total sound absorption in a room; in doing so, the sound strength and clarity are altered. The intelligent system detects sound waves in real time to identify the aural situation, and the RT is adjusted accordingly based on pre-programmed algorithms. IAT, the synthesis of these two components, can dramatically improve acoustic comfort, as the acoustic condition is automatically optimized for any detected aural situation. This paper presents an evaluation of the improvements in acoustic comfort in an existing tertiary classroom at Auckland University of Technology in New Zealand. This pilot case study is the first of its kind to attempt to quantify the benefits of IAT. Naturally, the potential acoustic improvements from IAT can be realized by installing only the VAT component and adjusting it manually, rather than using an intelligent system. Such a simplified methodology is adopted for this case study to understand the potential significance of IAT without adopting a time- and cost-intensive strategy. For this study, the VAT is built by overlaying reflective, rotating louvers on sound absorption panels. RTs are measured according to international standards before and after installing the VAT in the classroom. The louvers are rotated manually in increments by the experimenter, and further RT measurements are recorded. The results are compared with recommended guidelines and reference values from national standards for spaces intended for speech and communication. The measurement results are used to quantify the potential improvements in classroom acoustic comfort were IAT to be used. The evaluation reveals poor acoustic conditions in the classroom caused by high RTs. The poor acoustics are also largely attributed to the classroom's inability to vary its acoustic parameters for changing aural situations: the classroom has one static acoustic state, neglecting the nature of classrooms as flexible, dynamic spaces. Evidently, with VAT the classroom achieves a wide range of RTs; the acoustic requirements of varying teaching approaches are satisfied, and acoustic comfort is improved. By quantifying the benefits of using VAT, it can be suggested with confidence that the same benefits are achieved with IAT. Nevertheless, future studies are encouraged to continue this line of research toward the eventual development of IAT and its acceptance into mainstream architecture.
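The abstract does not state which RT model underlies the guideline comparison; a common first approximation, the Sabine equation, illustrates how rotating louvers that expose absorptive surfaces (high absorption coefficient) in place of reflective ones (low coefficient) shift the RT. The room volume and coefficients below are hypothetical.

```python
def sabine_rt60(volume, surface_absorptions):
    """Sabine reverberation time RT60 = 0.161 V / A (SI units), where A is
    the total absorption area: sum of surface area x absorption coefficient."""
    a_total = sum(s * alpha for s, alpha in surface_absorptions)
    return 0.161 * volume / a_total

# Hypothetical 200 m^3 classroom: 100 m^2 of louvered panel, 300 m^2 of other surfaces.
rt_panels_open = sabine_rt60(200.0, [(100.0, 0.8), (300.0, 0.05)])   # absorptive state
rt_panels_closed = sabine_rt60(200.0, [(100.0, 0.1), (300.0, 0.05)])  # reflective state
```

The spread between the two states is the adjustable RT range that the VAT, and by extension IAT, can exploit for different teaching situations.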

Keywords: acoustic comfort, classroom acoustics, intelligent acoustics, variable acoustics

Procedia PDF Downloads 189
556 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore tree growth and sustainable management are of high priority. This study analysed and modelled the growth of these species at both stand and individual levels, but this presentation focuses on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted from the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, should be considered when accounting for competition. To achieve this aim, Pearson correlation coefficients were examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual E. marginata trees was selected for forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed-effects modelling approaches, with separate models for thinned and unthinned stands. The models were validated using two approaches: in the first, models were validated against a subset of data not used in model fitting; in the second, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth, and the prediction improved when competition was included in the model. The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. Models with spatial competition indices were validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number), or the competitors within about 10 m of the base of the subject tree (by distance), are included in the model, which can reduce the time and cost of collecting information about competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments that minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies with larger plots, and more plots than used in the present study, are recommended.
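The abstract does not name the selected competition index; a distance-weighted size-ratio index of the Hegyi type, restricted to the roughly 10 m neighbourhood the study recommends, might look like the following sketch (the (x, y, dbh) tuples are hypothetical).

```python
import math

def hegyi_index(subject, neighbours, radius=10.0):
    """Distance-weighted size-ratio (Hegyi-type) competition index for one
    tree: sum of (dbh_j / dbh_i) / dist_ij over neighbours within `radius`
    metres of the subject. Each tree is an (x, y, dbh) tuple."""
    xi, yi, di = subject
    ci = 0.0
    for xj, yj, dj in neighbours:
        dist = math.hypot(xj - xi, yj - yi)
        if 0 < dist <= radius:
            ci += (dj / di) / dist
    return ci
```

A larger index means bigger or closer neighbours; in a growth regression, such a spatial CI enters as a predictor with an expected negative coefficient.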

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 128
555 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the exertion of protein function. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecular structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and in recent years many ENM variants have been proposed. Here, we propose a small but effective modification (denoted the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least-squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, the mENM achieves the best performance among the four ENM models; notably, even with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As for the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from the MD trajectories as the standard, the mENM performs worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter slightly better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM; thus the pfANM and the modified mANM, especially the former, perform excellently in dynamical cross-correlation calculation. Compared with the GNMs (except for the mGNM), the corresponding ANMs can capture many positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Further, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while the mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work helps to strengthen the understanding of the elastic network model and provides a valuable guide for researchers using the model to explore protein dynamics.
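For reference, the baseline GNM step underlying all the models compared above (not the authors' modified mENM weighting) can be sketched as follows, using a typical 7 Å Cα cutoff: build the Kirchhoff (connectivity) matrix and read residue fluctuations, proportional to B-factors, from the diagonal of its pseudoinverse.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """Gaussian network model sketch: build the Kirchhoff matrix from
    C-alpha coordinates (N x 3 array, angstroms) and return mean-square
    fluctuations, proportional to the diagonal of its pseudoinverse
    (B_i ~ (8 pi^2 kT / 3 gamma) [Gamma^-1]_ii)."""
    n = len(coords)
    kirchhoff = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                kirchhoff[i, j] = kirchhoff[j, i] = -1.0
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degree on diagonal
    return np.diag(np.linalg.pinv(kirchhoff))
```

The mENM and the modification discussed above replace the uniform -1 off-diagonal contacts with fitted multiscale weights; the pseudoinverse step is unchanged.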

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 121
554 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework, comprehending the SSD end to end from design to assembly, in-line inspection, and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation with a focus on assembly process attributes, process equipment control, and in-process metrology, while also comprehending the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for the design validation process, the reliability prediction, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase is implemented, whereby the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and the effective use of development resources, and avoids forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which will increase customer confidence in product reliability.
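The abstract does not specify the solder joint reliability model behind the TCT containment; one commonly used empirical relation for translating temperature-cycle test stress into field life is the Coffin-Manson acceleration factor, sketched here purely for illustration (the exponent is empirical and must be fitted to the solder alloy).

```python
def coffin_manson_af(dt_test, dt_field, exponent=2.0):
    """Coffin-Manson acceleration factor between a temperature-cycle test
    and field conditions: AF = (dT_test / dT_field) ** n, so that
    N_field ~ AF * N_test cycles. The exponent n (often around 2 for
    solder alloys) must be determined experimentally."""
    return (dt_test / dt_field) ** exponent
```

For example, a 100 °C TCT swing against a 50 °C field swing with n = 2 gives an acceleration factor of 4, i.e., each test cycle stands in for roughly four field cycles under these assumptions.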

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 174
553 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in operations research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts its routes from a depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a capacitated stochastic vehicle routing problem that has many realistic applications. We develop and analyze a mathematical model for a problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products, which we name product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities, and the number of items that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. The vehicle has two compartments, compartment 1 and compartment 2, suitable for loading product 1 and product 2, respectively. It is, however, permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products. The travel costs between consecutive customers and between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost, among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm to determine the optimal routing strategy, and it can be proved that the optimal routing strategy has a threshold-type structure: for each customer, the optimal actions are characterized by critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over the strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is less than or equal to eight.
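The two-product, two-compartment model above is beyond a short sketch, but the flavour of the dynamic programming recursion can be shown with a single-product simplification under stated assumptions: customers are visited in fixed order, each individual demand fits the capacity, after serving each customer the vehicle either proceeds directly or detours via the depot to unload, and an overflow en route forces a round trip to the depot. All costs and distributions below are hypothetical.

```python
from functools import lru_cache

def optimal_expected_cost(demand_dists, c_next, c_depot, capacity):
    """Minimal DP sketch for a stochastic collection VRP with ordered
    customers (single product, single compartment). demand_dists[i] is a
    list of (quantity, probability) pairs for customer i; c_next[i] is the
    travel cost from customer i to i+1; c_depot[i] is the cost between
    customer i and the depot. Returns the minimal total expected cost."""
    n = len(demand_dists)

    @lru_cache(maxsize=None)
    def v(i, load):
        # Expected remaining cost after serving customer i with `load` on board.
        if i == n - 1:
            return c_depot[i]                      # final return to the depot
        def arrive(base_load):
            # Expectation over customer i+1's demand, revealed on arrival.
            exp = 0.0
            for q, p in demand_dists[i + 1]:
                total = base_load + q
                if total <= capacity:
                    exp += p * v(i + 1, total)
                else:                              # forced round trip to unload
                    exp += p * (2 * c_depot[i + 1] + v(i + 1, total - capacity))
            return exp
        direct = c_next[i] + arrive(load)
        restock = c_depot[i] + c_depot[i + 1] + arrive(0)
        return min(direct, restock)

    # Start empty at the depot, travel to customer 0, observe its demand.
    return c_depot[0] + sum(p * v(0, q) for q, p in demand_dists[0])
```

The threshold structure proved in the paper corresponds to the load level at which `restock` starts beating `direct` for each customer; the special-purpose algorithm searches only over those critical load levels rather than all state-action pairs.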

Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 257
552 Upward Spread Forced Smoldering Phenomenon: Effects and Applications

Authors: Akshita Swaminathan, Vinayak Malhotra

Abstract:

Smoldering is one of the most persistent types of combustion; it can continue for very long periods (hours, days, months) if there is an abundance of fuel. It causes a notable number of accidents and is one of the prime suspects in fire and safety hazards. It can be initiated by weaker ignition sources and is more difficult to suppress than flaming combustion. Upward spread smoldering is the case in which the air flow is parallel to the direction of propagation of the smoldering front. This type of smoldering is quite uncontrollable, and hence there is a need to study the phenomenon. Compared to flaming combustion, smoldering often goes unrecognised and is therefore a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its response to varying forced flow, and its behaviour in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied to determine the effects of varying forced flow on upward spread smoldering: (i) in the presence of an external heat source and (ii) in the presence of an external alternative energy source (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering was affected by key controlling parameters such as the speed of the forced flow, the surface orientation, and the interspace distance (the distance between the forced flow source and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, the smoldering phenomenon was affected: the surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a large role in altering the regression rate. Lastly, by impinging an alternative energy source in the form of acoustic energy on the smoldering front, it was observed that different frequencies affected the smoldering phenomenon in different ways; here, too, the surface orientation played an important role. This project highlights the importance of fire and safety hazards and of means of better combustion for scientific research and practical applications. The knowledge acquired from this work can be applied to engineering systems ranging from aircraft and spacecraft to building fires and wildfires, helping us to better understand, and hence avoid, such widespread fires. Various fire disasters have been recorded in aircraft in which small electrical short circuits led to smoldering fires that eventually set the engine on fire, costing lives and property. Studying this phenomenon can help us to control, if not prevent, such disasters.

Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering

Procedia PDF Downloads 144
551 Examining Smallholder Farmers’ Perceptions of Climate Change and Barriers to Strategic Adaptation in Todee District, Liberia

Authors: Joe Dorbor Wuokolo

Abstract:

Thousands of smallholder farmers in Todee District, Montserrado County, are currently vulnerable to the negative impacts of climate change. The district, the agricultural hot spot of the county, faces unfavorable changes in daily temperature due to climate change, and farmers there have observed a dramatic change in the ratio of rainfall to sunshine, which has had a chilling effect on their crop yields. However, there is a lack of documentation of how farmers perceive and respond to these changes and challenges. A study was therefore conducted in the region to examine smallholder farmers' perceptions of the negative impacts of climate change, the adaptation strategies practiced, and the barriers that hinder the adoption of adaptation strategies. A purposive sample of 41 respondents from five towns was selected, including five town chiefs, five youth leaders, five women leaders, and sixteen community members; women and youth leaders were specifically chosen to provide gender balance and enhance the quality of the investigation. Additionally, to validate the barriers farmers face in adapting to climate change, this study interviewed eight experts from local and international organizations and from government ministries and agencies involved in climate change and agricultural programs about what they perceived as the major barriers, at both the local and national levels, impeding farmers' adaptation to climate change impacts. SPSS was used to code the data, and descriptive statistics were used to analyze them. The weighted average index (WAI) was used to rank adaptation strategies and the perceived importance of adaptation practices among farmers, on a scale from 0 to 3, where 0 indicates the least important technique and 3 the most effective one. In addition, the problem confrontation index (PCI) was used to rank the barriers that prevented farmers from implementing adaptation measures. According to the findings, approximately 60% of all respondents considered the use of irrigation systems the most effective adaptation strategy, with drought-resistant varieties accounting for 30%. Additionally, 80% of respondents placed a high value on drought-resistant varieties, while 63% did so for irrigation practices. In addition, 78% of farmers ranked the unpredictability of the weather as the most significant barrier to their adaptation strategies, followed by the high cost of farm inputs and the lack of access to financing facilities. 80% of respondents believe that the long-term changes in precipitation (rainfall) and temperature (hotness) are accelerating. This suggests that decision-makers should adopt policies and increase the capacity of smallholder farmers to adapt to the negative impacts of climate change in order to ensure sustainable food production.
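The WAI on the 0-3 scale described above reduces to a frequency-weighted mean of the scores; a minimal sketch (the response counts below are hypothetical, not the study's data):

```python
def weighted_average_index(counts):
    """Weighted average index (WAI) on a 0-3 scale: counts[k] is the number
    of respondents assigning score k (0 = least important, 3 = most
    effective). Returns the mean score used to rank each practice."""
    total = sum(counts)
    return sum(score * n for score, n in enumerate(counts)) / total
```

Practices are then ranked by their WAI; for example, a practice scored 3 by every respondent gets WAI 3.0 and tops the ranking.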

Keywords: adaptation strategies, climate change, farmers’ perception, smallholder farmers

Procedia PDF Downloads 82
550 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements that make it possible to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict permeability from the pore spaces available between yarns, unit-cell-based computational fluid dynamics models have been developed using the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores within the yarn. Several studies have employed the Brinkman equation to account for flow through dual-scale porous reinforcement when estimating permeability. Furthermore, to reduce the computational effort of dual-scale flow simulations, a scale separation criterion based on the ratio of yarn permeability to yarn spacing has been proposed to distinguish the dual-scale regime from the regime of negligible micro-scale flow when predicting mesoscale permeability. In the present work, the influence of intra-yarn permeability on mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in plain weave, as well as the position of the binder yarn and the number of in-plane yarn layers in 3D woven fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established for the 3D fabric based on various configurations of yarn permeability, with both isotropic and anisotropic yarns described by Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when an isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
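For readers unfamiliar with Gebart's model referenced above, the following is a minimal sketch of its analytical expressions for intra-yarn permeability along and transverse to aligned fibers, assuming idealized quadratic or hexagonal fiber packing. The fiber radius and volume fraction used below are illustrative, not values from this study.

```python
import math

def gebart_permeability(r_fiber, vf, packing="quadratic"):
    """Gebart's analytical intra-yarn permeability (m^2) for flow along
    and transverse to aligned fibers of radius r_fiber (m) at fiber
    volume fraction vf, for an idealized packing arrangement."""
    if packing == "quadratic":
        c, c1, vf_max = 57.0, 16 / (9 * math.pi * math.sqrt(2)), math.pi / 4
    else:  # hexagonal packing
        c, c1, vf_max = 53.0, 16 / (9 * math.pi * math.sqrt(6)), math.pi / (2 * math.sqrt(3))
    # Axial flow: Kozeny-type expression along the fiber direction.
    k_axial = (8 * r_fiber**2 / c) * (1 - vf) ** 3 / vf**2
    # Transverse flow: vanishes as vf approaches the packing limit vf_max.
    k_transverse = c1 * (math.sqrt(vf_max / vf) - 1) ** 2.5 * r_fiber**2
    return k_axial, k_transverse

# Typical carbon-fiber filament: radius 3.5 um, intra-yarn vf = 0.6.
ka, kt = gebart_permeability(3.5e-6, 0.6)
print(f"K_axial = {ka:.3e} m^2, K_transverse = {kt:.3e} m^2")
```

As expected for aligned fiber beds, the axial permeability exceeds the transverse one, which is why treating the yarn as isotropic versus anisotropic can shift the predicted mesoscale permeability.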

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 96
549 Semiconductor Properties of Natural Phosphate: Application to Photodegradation of Basic Dyes in Single and Binary Systems

Authors: Y. Roumila, D. Meziani, R. Bagtache, K. Abdmeziem, M. Trari

Abstract:

Heterogeneous photocatalysis over semiconductors has proved its effectiveness in the treatment of wastewaters since it works under mild conditions. It has emerged as a promising technique, giving rise to less toxic effluents and offering the opportunity of using sunlight as a sustainable and renewable source of energy. Many compounds have been used as photocatalysts. Though synthesized ones are used intensively, they remain expensive, and their synthesis involves special conditions. We thus thought of implementing a natural material, a phosphate ore, owing to its low cost and great availability. Our work is devoted to the removal of hazardous organic pollutants, which cause several environmental problems and health risks. Among them, dye pollutants occupy a large place. This work relates to the study of the photodegradation of methyl violet (MV) and rhodamine B (RhB), in single and binary systems, under UV light and sunlight irradiation. Methyl violet is a triarylmethane dye, while RhB is a heteropolyaromatic dye belonging to the xanthene family. In the first part of this work, the natural compound was characterized using several physicochemical and photo-electrochemical (PEC) techniques: X-ray diffraction, chemical and thermal analyses, scanning electron microscopy, UV-Vis diffuse reflectance measurements, and FTIR spectroscopy. The electrochemical and photoelectrochemical studies were performed with a Voltalab PGZ 301 potentiostat/galvanostat at room temperature. The structure of the phosphate material was well characterized. The PEC properties are crucial for drawing the energy band diagram, in order to identify the radicals formed and the reactions involved in the photo-oxidation mechanism of the dyes. The PEC characterization of the natural phosphate was investigated in neutral solution (Na₂SO₄, 0.5 M). The study revealed the semiconducting behavior of the phosphate rock.
Indeed, the thermal evolution of the electrical conductivity is well fitted by an exponential-type law, and the electrical conductivity increases with rising temperature. The Mott-Schottky plot and the current-potential J(V) curves recorded in the dark and under illumination clearly indicate n-type behavior. From the photocatalysis results in single solutions, the change in MV and RhB absorbance as a function of time shows that practically all of the MV was removed after 240 min of irradiation. For RhB, complete degradation was achieved after 330 min, owing to its complex and resistant structure. In binary systems, RhB begins to be removed slowly only after 120 min, by which time about 60% of the MV is already degraded. Once nearly all of the MV in the solution has disappeared (after about 250 min), the remaining RhB is degraded rapidly. This behaviour differs from that observed in single solutions, where both dyes are degraded from the first minutes of irradiation.

Keywords: environment, organic pollutant, phosphate ore, photodegradation

Procedia PDF Downloads 132
548 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. The results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides optimal delivery of services. Instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. These include the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in one way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions that have driven the proliferation of the PO approach over the last three decades, and it begins by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems that ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment.
In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost. Those same staff cannot be allocated to alternative duties while engaged in a problem-solution role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 483
547 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier because its total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative in problem domains such as credit scoring, customer churn prediction, and financial distress prediction that typically involve imbalanced data sets.
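The cluster-then-oversample idea can be illustrated with a small sketch. This is not the authors' exact algorithm: it uses a BIC-selected Gaussian mixture for the model-based clustering step and simple interpolation within each sub-cluster, and it omits the Lowner-John ellipsoid treatment of unseen data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_target, max_clusters=5, seed=0):
    """Sketch: find sub-clusters in the minority class with model-based
    clustering (GMM, number of components selected by BIC), then
    oversample each sub-cluster toward an equal share so that no
    single sub-cluster dominates the classifier's total error."""
    rng = np.random.default_rng(seed)
    # Model-based clustering: fit GMMs and keep the one with lowest BIC.
    gmms = [GaussianMixture(k, random_state=seed).fit(X_min)
            for k in range(1, max_clusters + 1)]
    gmm = min(gmms, key=lambda g: g.bic(X_min))
    labels = gmm.predict(X_min)
    parts = [X_min[labels == k] for k in range(gmm.n_components)]
    parts = [p for p in parts if len(p) > 0]  # drop empty components
    per_cluster = n_target // len(parts)
    synthetic = []
    for part in parts:
        # Interpolate between random pairs inside the sub-cluster, so
        # synthetic points stay within that sub-cluster's region.
        i = rng.integers(0, len(part), per_cluster)
        j = rng.integers(0, len(part), per_cluster)
        t = rng.random((per_cluster, 1))
        synthetic.append(part[i] + t * (part[j] - part[i]))
    return np.vstack(synthetic)
```

Allocating synthetic examples per sub-cluster, rather than globally, is what distinguishes this family of methods from plain minority-class oversampling.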

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
546 Financial Burden of Occupational Slip and Fall Incidences in Taiwan

Authors: Kai Way Li, Lang Gan

Abstract:

Slip and fall incidences are common in Taiwan. They can result in injuries and even fatalities. Official statistics indicate that more than 15% of all occupational incidences were slip/fall related. All workers in Taiwan are required by law to join the workers' insurance program administered by the Bureau of Labor Insurance (BLI), a government agency under the supervision of the Ministry of Labor. Workers file claims with the BLI for insurance compensation when they suffer injuries or fatalities at work. Injury statistics based on workers' compensation claims have rarely been studied. The objective of this study was to quantify the injury statistics and financial cost of slip-fall incidences based on BLI compensation records. Compensation records in the BLI from 2007 to 2013 were retrieved. All the original application forms, approval opinions, and results for workers' compensation were in hardcopy and stored in BLI warehouses. Photocopies of the claims, excluding the personal information of the applicants (or of the victim, if deceased), were obtained. The content of the filing forms was coded in an Excel worksheet for further analyses, and descriptive statistics were used to analyze the data. There were a total of 35,024 slip/fall-related claims, comprising 82 deaths, 878 disabilities, and 34,064 injuries/illnesses. It was found that the average loss for the death cases was 40 months. The total amount paid for these cases was 86,913,195 NTD. For the disability cases, the average loss was 367.36 days, and the total amount paid was approximately 2.6 times that for the death cases (233,324,004 NTD). For the injury/illness cases, the average loss was 58.78 days, and the total amount paid was approximately 13 times that for the death cases (1,134,850,821 NTD). Of the applicants/victims, 52.3% were males.
There were more males than females among the death, disability, and injury/illness cases. Most (57.8%) of the female victims were between 45 and 59 years old, whereas most of the male victims (62.6%) were between 25 and 39 years old. Most of the victims were in the manufacturing industry (26.41%), followed by the construction industry (22.20%) and the retail industry (13.69%). For the fatality cases, head injury was the main cause of immediate or eventual death (74.4%). For the disability cases, foot (17.46%) and knee (9.05%) injuries were the leading problems. The compensation claims other than fatality and disability were mainly associated with injuries of the foot (18%), hand (12.87%), knee (10.42%), back (8.83%), and shoulder (6.77%). The slip/fall cases studied indicate that the ratios among the death, disability, and injury/illness counts were 1:10:415, and the ratios of the amounts paid by the BLI for the three categories were 1:2.6:13. These results indicate the significance of slip-fall incidences of differing severity. Such information should be incorporated into slip-fall prevention programs in industry.
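The reported ratios can be recomputed directly from the claim counts and payment totals given above; small differences from the quoted 1:10:415 and 1:2.6:13 are rounding in the abstract.

```python
# Claim counts and total compensation (NTD) from the BLI records above.
deaths, disabilities, injuries = 82, 878, 34_064
paid_death, paid_disability, paid_injury = 86_913_195, 233_324_004, 1_134_850_821

# Express each category relative to the death cases.
count_ratios = (1.0, disabilities / deaths, injuries / deaths)
paid_ratios = (1.0, paid_disability / paid_death, paid_injury / paid_death)
print("counts 1 : %.1f : %.1f" % count_ratios[1:])
print("paid   1 : %.1f : %.1f" % paid_ratios[1:])
```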

Keywords: epidemiology, slip and fall, social burden, workers’ compensation

Procedia PDF Downloads 323
545 Food Security in Germany: Inclusion of the Private Sector through Law Reform Faces Challenges

Authors: Agnetha Schuchardt, Jennifer Hartmann, Laura Schulte, Roman Peperhove, Lars Gerhold

Abstract:

If critical infrastructures fail, even for a short period of time, there can be significant negative consequences for the affected population. This is especially true for the food sector, which is strongly interlinked with other sectors such as the power supply. A blackout could leave several cities without a food supply for numerous days, simply because cash register systems no longer work properly. According to public opinion, securing the food supply in emergencies is a task of the state; however, in the German context, the key players are private enterprises and private households. Both are unaware of this responsibility, and neither can be forced to take preventive measures prior to an emergency. This problem became evident to officials and politicians, so the law covering food security was revised in order to include private stakeholders in mitigation processes. The paper will present a scientific review of governmental and regulatory literature, focusing on the inclusion of the food industry through the law reform and the challenges that still exist. Together with legal experts, an analysis of regulations will be presented that explains the development of the law reform concerning food security and emergency storage in Germany. The main findings are that the existing public food emergency storage is outdated, insufficient, and too expensive. The state is required to protect food as a critical infrastructure but does not have the capacity to live up to this role. Through a law reform in 2017, new structures were to be established. The innovation was to include the private sector in the civil defense concept, since it has the required knowledge and experience. But the food industry is still reluctant: preventive measures do not serve economic purposes; on the contrary, they cost money.
The paper will discuss examples such as equipping supermarkets with emergency power supplies or self-sufficient cash register systems, and why neither the state nor the economy is willing to cover the costs of these measures. The biggest problem with the new law is that private enterprises can only be compelled to support food security once a state of emergency has already occurred, and not one minute earlier. The paper will cover two main results: the literature review and an expert workshop that will be conducted in summer 2018 with stakeholders from different parts of the food supply chain as well as officials of the public food emergency concept. The results from this participative process will be presented, and recommendations will be offered showing how the private economy could be better included in a modern food emergency concept (e.g., tax reductions for stockpiling).

Keywords: critical infrastructure, disaster control, emergency food storage, food security, private economy, resilience

Procedia PDF Downloads 186
544 Eggshell Waste Bioprocessing for Sustainable Acid Phosphatase Production and Minimizing Environmental Hazards

Authors: Soad Abubakr Abdelgalil, Gaber Attia Abo-Zaid, Mohamed Mohamed Yousri Kaddah

Abstract:

Background: The Environmental Protection Agency has listed eggshell waste as the 15th most significant food industry pollution hazard. The utilization of eggshell waste as a source of renewable energy has been a hot topic in recent years. Therefore, the target of the current investigation was to find a sustainable solution for the recycling and valorization of eggshell waste by investigating its potential for producing acid phosphatase (ACP) and organic acids with the newly discovered B. sonorensis. Results: The most potent ACP-producing strain, B. sonorensis ACP2, was identified as a local bacterial strain obtained from the effluent of the paper and pulp industry on the basis of molecular and morphological characterization. The use of consecutive statistical experimental approaches, Plackett-Burman design (PBD) and orthogonal central composite design (OCCD), followed by pH-uncontrolled cultivation in a 7 L bench-top bioreactor, revealed an innovative medium formulation that substantially improved ACP production, reaching 216 U L⁻¹ with an ACP yield coefficient (Yp/x) of 18.2 and a specific growth rate (µ) of 0.1 h⁻¹. The metals Ag⁺, Sn⁺, and Cr⁺ were the most efficiently released from eggshells during solubilization by B. sonorensis. The pH-uncontrolled culture condition was the most suitable setting for improving ACP and organic acid production simultaneously. Quantitative and qualitative analyses of the produced organic acids were carried out using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Lactic acid, citric acid, and a hydroxybenzoic acid isomer were the most abundant organic acids produced throughout cultivation.
The findings of thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), Fourier-transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis emphasize the significant influence of organic acids and ACP activity on the solubilization of eggshell particles. Conclusions: This study demonstrated robust microbial engineering approaches for the large-scale production of a newly discovered acid phosphatase, accompanied by organic acid production, from B. sonorensis. The biovalorization of eggshell waste and the cost-effective production of ACP and organic acids were integrated in the current study through the implementation of a unique and innovative medium formulation for eggshell waste management, as well as the scale-up of ACP production to bench-top scale.

Keywords: chicken eggshells waste, bioremediation, statistical experimental design, batch fermentation

Procedia PDF Downloads 376
543 Ecological Engineering Through Organic Amendments: Enhancing Pest Regulation, Beneficial Insect Populations, and Rhizosphere Microbial Diversity in Cabbage Ecosystems

Authors: Ravi Prakash Maurya, Munaswamyreddygari Sreedhar

Abstract:

The present studies on ecological engineering through soil amendments in the cabbage crop for insect pest regulation were conducted at G. B. Pant University of Agriculture and Technology, Pantnagar, Udham Singh Nagar, Uttarakhand, India. Ten treatments, viz., farmyard manure (FYM), neem cake (NC), vermicompost (VC), poultry manure (PM), PM+FYM, NC+VC, NC+PM, VC+FYM, urea+SSP+MOP (standard check), and untreated check, were evaluated to study the effect of these amendments on the populations of insect pests and natural enemies and on the microbial community of the rhizosphere in the cabbage crop ecosystem. The results revealed that most of the cabbage pests, viz., aphids, head borer, gram pod borer, and armyworm, were more prevalent in FYM, followed by PM- and NC-treated plots. The best cost-benefit ratio, 1:3.62, was found in the PM+FYM treatment, while the lowest, 1:0.97, was found in the VC plot. The populations of natural enemies such as spiders, coccinellids, syrphids, and other hymenopterans and dipterans were also prominent in the organic plots, namely FYM, followed by the VC and PM plots. Diversity studies on the organic manure-treated plots revealed a total of nine insect orders (Hymenoptera, Hemiptera, Lepidoptera, Coleoptera, Neuroptera, Diptera, Orthoptera, Dermaptera, and Thysanoptera) and one arthropod class, Arachnida, across the different treatments. The Simpson diversity index was also studied and found to be maximum in the FYM plots. Metagenomic analysis of the rhizosphere microbial community revealed that the highest bacterial count was found in the NC+PM plot compared with the standard check and the untreated check. A diverse microbial population contributes to soil aggregation and stability; healthier soil structure can improve water retention, aeration, and root penetration, all of which are crucial for crop health.
Further analysis identified a total of 39 bacterial phyla, among which the most abundant were Actinobacteria, Firmicutes, and the SAR324 clade. Actinobacteria and Firmicutes are known for their roles in decomposing organic matter and mineralizing nutrients; their high abundance suggests improved nutrient cycling and availability, which can directly enhance plant growth. Hence, organic amendments in cabbage farming can transform the rhizosphere microbiome, reduce pest pressure, and foster populations of beneficial insects, leading to healthier crops and a more sustainable agricultural ecosystem.
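The Simpson diversity index used in the diversity study above can be sketched as follows; the order-level counts here are hypothetical, chosen only to illustrate that an even spread of counts across taxa scores higher than a community dominated by one taxon.

```python
def simpson_diversity(counts):
    """Simpson's diversity index 1 - D, where
    D = sum(n_i * (n_i - 1)) / (N * (N - 1)).
    Values near 1 indicate high diversity, near 0 low diversity."""
    n = sum(counts)
    d = sum(c * (c - 1) for c in counts) / (n * (n - 1))
    return 1 - d

# Illustrative (hypothetical) order-level insect counts from two plots.
fym_plot = [40, 35, 30, 28, 25, 20, 15, 10, 8, 5]  # evenly spread
check_plot = [120, 10, 5, 3, 2]                     # one order dominates
print(simpson_diversity(fym_plot), simpson_diversity(check_plot))
```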

Keywords: cabbage ecosystem, organic amendments, rhizosphere microbiome, pest and natural enemy diversity

Procedia PDF Downloads 13
542 Reagentless Detection of Urea Based on ZnO-CuO Composite Thin Film

Authors: Neha Batra Bali, Monika Tomar, Vinay Gupta

Abstract:

A reagentless biosensor for the detection of urea based on a ZnO-CuO composite thin film is presented in the following work. Biosensors have immense potential for varied applications ranging from environmental and clinical testing to health care and cell analysis. The immense growth in the field of biosensors is driven by today's need for techniques that are both cost-effective and accurate for the prevention of disease manifestation. The human body comprises numerous biomolecules which, at their optimum levels, are essential for its functioning; mismanaged levels of these biomolecules, however, result in major health issues. Urea is one of the key biomolecules of interest, and its estimation is of paramount significance not only for the healthcare sector but also from an environmental perspective. If the level of urea in human blood/serum is abnormal, i.e., above or below the physiological range (15-40 mg/dl), it may lead to conditions such as renal failure, hepatic failure, nephritic syndrome, cachexia, urinary tract obstruction, dehydration, shock, burns, and gastrointestinal disorders. Various metal nanoparticles, conducting polymers, metal oxide thin films, etc., have been exploited as matrices for immobilizing urease in urea biosensors. Among them, zinc oxide (ZnO), a semiconductor metal oxide with a wide band gap, is of immense interest as an efficient matrix in biosensors by virtue of its natural abundance, biocompatibility, good electron communication features, and high isoelectric point (9.5). In spite of being such an attractive candidate, ZnO does not possess a redox couple of its own, which necessitates the use of electroactive mediators for electron transfer between the enzyme and the electrode, thereby hindering the realization of integrated and implantable biosensors.
In the present work, an effort has been made to fabricate a matrix based on a ZnO-CuO composite prepared by the pulsed laser deposition (PLD) technique in order to incorporate redox properties into the ZnO matrix and to utilize it for reagentless biosensing applications. The prepared bioelectrode, Urs/(ZnO-CuO)/ITO/glass, exhibits high sensitivity (70 µA mM⁻¹ cm⁻²) for the detection of urea (5-200 mg/dl) with high stability (shelf life > 10 weeks) and good selectivity (interference < 4%). The enhanced sensing response obtained for the composite matrix is attributed to the efficient electron exchange between the ZnO-CuO matrix and the immobilized enzymes, and the subsequent fast transfer of the generated electrons to the electrode via the matrix. This response is encouraging for the fabrication of a reagentless urea biosensor based on the ZnO-CuO matrix.
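As context for the sensitivity figure quoted above: the sensitivity of an amperometric biosensor is conventionally the slope of the steady-state current versus concentration calibration line, normalized by the electrode area. A generic sketch with synthetic numbers follows; the calibration data and the 0.25 cm² electrode area are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# Synthetic calibration data: steady-state current vs. urea concentration.
conc_mM = np.array([1.0, 2.0, 4.0, 8.0, 16.0])       # urea concentration (mM)
current_uA = np.array([3.6, 7.1, 14.0, 28.2, 56.1])  # measured current (uA)
area_cm2 = 0.25                                       # assumed electrode area

# Sensitivity = slope of the linear calibration fit, per unit area.
slope, intercept = np.polyfit(conc_mM, current_uA, 1)
sensitivity = slope / area_cm2                        # uA mM^-1 cm^-2
print(f"sensitivity ~ {sensitivity:.1f} uA mM^-1 cm^-2")
```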

Keywords: biosensor, reagentless, urea, ZnO-CuO composite

Procedia PDF Downloads 290