Search results for: cost of energy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12940

1000 Review of Life-Cycle Analysis Applications on Sustainable Building and Construction Sector as Decision Support Tools

Authors: Liying Li, Han Guo

Abstract:

Considering the environmental issues generated by the building sector through its energy consumption, solid waste generation, water use, land use, and global greenhouse gas (GHG) emissions, this review points to LCA as a decision-support tool that can substantially improve sustainability in the building and construction industry. The comprehensiveness and simplicity of LCA make it one of the most promising decision-support tools for the sustainable design and construction of future buildings. This paper contains a comprehensive review of existing studies related to LCAs, with a focus on their advantages and limitations when applied in the building sector. The aim of this paper is to enhance the understanding of building life-cycle analysis, thus promoting its application for effective, sustainable building design and construction in the future. Comparisons and discussions are carried out between four categories of LCA methods: building material and component combinations (BMCC) vs. the whole process of construction (WPC) LCA, attributional vs. consequential LCA, process-based LCA vs. input-output (I-O) LCA, and traditional vs. hybrid LCA. Classical case studies are presented, which illustrate the effectiveness of LCA as a tool to support the decisions of practitioners in the design and construction of sustainable buildings. (i) The BMCC and WPC categories of LCA research tend to overlap, as the majority of WPC LCAs are developed using the same bottom-up approach that BMCC LCAs employ. (ii) When the influence of social and economic factors outside the proposed system boundary is considered, a consequential LCA can provide a more reliable result than an attributional LCA. (iii) I-O LCA is complementary to process-based LCA in addressing the social and economic problems generated by building projects. 
(iv) Hybrid LCA provides a more dynamic perspective than a traditional LCA, which is criticized for its static view of the changing processes within the building's life cycle. LCAs are still being developed to overcome their limitations and data shortages (especially data on the developing world), and the unification of LCA methods and data can make the results of building LCAs more comparable and consistent across different studies and even countries.
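The I-O LCA approach contrasted above rests on environmentally extended input-output (EEIO) analysis: total emissions are traced through the whole economy via the Leontief inverse. The following is a minimal illustrative sketch with entirely hypothetical numbers for a toy two-sector economy; it is not data from any study in this listing.

```python
import numpy as np

# A: technical coefficient matrix for a toy 2-sector economy
# (construction, materials): dollars of input per dollar of output.
A = np.array([[0.1, 0.3],
              [0.2, 0.1]])

# b: direct emission intensities (kg CO2e per dollar of output).
b = np.array([0.5, 1.2])

# Final demand: $1M of construction output, nothing else.
y = np.array([1_000_000.0, 0.0])

# Leontief inverse captures all upstream (indirect) production rounds.
L = np.linalg.inv(np.eye(2) - A)

total_output = L @ y                 # gross output required in each sector
total_emissions = b @ total_output   # economy-wide embodied emissions
print(total_emissions)               # kg CO2e embodied in the final demand
```

For these toy coefficients the embodied emissions come to about 9.2 x 10^5 kg CO2e, of which roughly a third is indirect (upstream materials production), which is exactly the contribution a pure process-based LCA with a narrow system boundary tends to truncate.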

Keywords: decision support tool, life-cycle analysis, LCA tools and data, sustainable building design

Procedia PDF Downloads 105
999 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis

Authors: Avi Shrivastava

Abstract:

In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging to an airplane's wings in a desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants and refugees from coming to the U.S. These events show the desperation refugees feel when trying to leave their homes in disaster zones. The data paints an equally grave picture of the current refugee situation and indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the shimmery fabric of modern society. The terms are often used interchangeably, but they differ considerably: information analysis reveals rationale and logic, while data analysis reveals patterns. Patterns revealed by data can enable us to create the tools needed to combat the huge problems at hand. Data analysis paints a clear picture, simplifying the decision-making process. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from realistically stating things as they are, and this hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence of whether a solution exists, offering concrete answers to a crisis that otherwise resists them and making the problem easier to tackle. Data science and A.I. 
can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, data, and the insights drawn from it, have helped solve many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or adopted countries. This paper looks into various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal help to find asylum in the country where they want to settle, and can connect them with marketplaces of people willing to help. Data science and technology can also help address refugees' many other problems, including food, shelter, employment, security, and assimilation. The refugee problem is among the most challenging for social and political reasons. Data science and machine learning can help prevent refugee crises and solve or alleviate some of the problems refugees face in their journey to a better life. With the explosion of data in the last decade, data science has made it possible to address many geopolitical and social issues.

Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine

Procedia PDF Downloads 62
998 Peripheral Neuropathy after Locoregional Anesthesia

Authors: Dalila Chaid, Bennameur Fedilli, Mohammed Amine Bellelou

Abstract:

The study focuses on the experience of lower-limb amputees, who face both physical and psychological challenges due to their disability. Chronic neuropathic pain and various other types of limb pain are common in these patients, who often require orthopaedic interventions for issues such as dressings, infection, ulceration, and bone-related problems. Research Aim: The aim of this study is to determine the most suitable anaesthetic technique for lower-limb amputees, one that can provide them with the greatest comfort and prolonged analgesia. The study also aims to demonstrate the effectiveness and cost-effectiveness of ultrasound-guided locoregional anaesthesia (LRA) in this patient population. Methodology: This is an observational analytical study conducted over a period of eight years, from 2010 to 2018. It includes a total of 955 cases of revisions performed on lower-limb stumps. The parameters analyzed include the effectiveness of the block and the use of sedation, the duration of the block, the post-operative visual analog scale (VAS) scores, and patient comfort. Findings: The findings highlight the benefits of ultrasound-guided LRA in providing comfort by optimizing post-operative analgesia, which can contribute to the psychological and bodily repair of lower-limb amputees. Additionally, the study emphasizes the use of alpha-2 agonist adjuvants with sedative and analgesic properties, long-acting local anaesthetics, and larger volumes for better outcomes. Theoretical Importance: This study contributes to existing knowledge by emphasizing the importance of choosing an appropriate anaesthetic technique for lower-limb amputees. It highlights the potential of ultrasound-guided LRA and of specific adjuvants and local anaesthetics in improving post-operative analgesia and overall patient outcomes. 
Data Collection and Analysis Procedures: Data for this study were collected through the analysis of medical records and relevant documentation related to the 955 cases included in the study. The effectiveness of the anaesthetic technique, duration of the block, post-operative pain scores, and patient comfort were analyzed using statistical methods. Question Addressed: The study addresses the question of which anaesthetic technique would be most suitable for lower-limb amputees to provide them with optimal comfort and prolonged analgesia. Conclusion: The study concludes that ultrasound-guided LRA, along with the use of alpha2 agonist adjuvants, long-acting local anaesthetics, and larger volumes, can be an effective approach in providing comfort and improving post-operative analgesia for lower-limb amputees. This technique can potentially contribute to the psychological and bodily repair of these patients. The findings of this study have implications for clinical practice in the management of lower-limb amputees, highlighting the importance of personalized anaesthetic approaches for better outcomes.

Keywords: neuropathic pain, ultrasound-guided peripheral nerve block, DN4 questionnaire, EMG

Procedia PDF Downloads 56
997 An Empirical Study of Performance Management System: Implementation of Performance Management Cycle to Achieve High-Performance Culture at Pertamina Company, Indonesia

Authors: Arif Budiman

Abstract:

Any organization or company that wishes to achieve its vision, mission, and goals is required to implement a performance management system (PMS) in every part of the organization. A PMS is a tool that helps visualize the direction and work program of the organization in order to achieve its goals. The challenge is that a PMS should not stop at merely visualizing the vision and mission of the organization; it should also create a high-performance culture that becomes inherent in each individual. Establishing a culture within an organization requires the support of top leaders as well as a system of governance that encourages every individual to be involved in the organization's work programs. The keys to creating a high-performance culture are communication patterns that involve every individual, both vertically and horizontally, performed consistently and persistently across all lines of the organization. PT Pertamina (Persero), the state-owned national energy company, internalizes a culture of high performance through a system called the Performance Management System Cycle (PMS Cycle). This system has 7 stages: (1) defining the vision, mission, and strategic plan of the company; (2) defining key performance indicators for each line and individual ('expectation setting conversation'); (3) defining performance targets and performance agreements; (4) monitoring performance on a regular monthly basis ('pulse check'); (5) holding performance dialogues between leaders and staff every 3 months ('performance dialogue'); (6) defining rewards and consequences based on the performance achieved by each line and individual; and (7) calculating the final performance value achieved by each line and individual for the current year. 
Implementing the PMS is a continual communication process running throughout the year, which is why three kinds of performance discussions are held: expectation setting conversations, pulse checks, and performance dialogues. In addition, completing the individual performance assessment requires evaluating soft competencies through a 360-degree assessment by leaders, staff, and peers.

Keywords: 360-degree assessment, expectation setting conversation, performance management system cycle, performance dialogue, pulse check

Procedia PDF Downloads 430
996 Experimental Investigation of the Out-of-Plane Dynamic Behavior of Adhesively Bonded Composite Joints at High Strain Rates

Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Ben Yahia

Abstract:

In this investigation, an experimental technique is presented in which the dynamic response, damage kinetics, and heat dissipation of adhesively bonded joint materials are measured simultaneously at high strain rates. The material used in this study is widely used in the design of structures for military applications. It was composed of a 45° biaxial fiber-glass mat of 0.286 mm thickness in a polyester resin matrix. For adhesive bonding, a NORPOL polyvinylester of 1 mm thickness was used to assemble the composite substrates. The experimental setup consists of a compression split Hopkinson pressure bar (SHPB), a high-speed infrared camera, and a high-speed Fastcam rapid camera. For the dynamic compression tests, 13 mm x 13 mm x 9 mm out-of-plane samples were tested at strain rates from 372 to 1030 s⁻¹. The specimen surface was controlled and monitored in situ and in real time using the high-speed camera, which captures the progressive damage in the specimens, and the infrared camera, which provides thermal images in time sequence. Preliminary compressive stress-strain vs. strain-rate data show that the dynamic material strength increases with increasing strain rate. Damage investigations revealed that failure mainly occurred at the adhesive/adherend interface because of the brittle nature of the polymeric adhesive. Results show the dependency of the dynamic parameters on strain rate. A significant temperature rise was observed in the dynamic compression tests: the temperature change depends on the strain rate and the damage mode, and its maximum exceeds 100 °C. The dependence of these results on strain rate indicates a strong correlation between damage-rate sensitivity and heat dissipation, which might be useful when developing damage models under dynamic loading that take into account the energy balance of adhesively bonded joints.
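For orientation, the stress and strain histories in a compression SHPB test of this kind are conventionally recovered from the bar strain-gauge signals via the classical (Kolsky) one-dimensional relations; these are standard background, not equations stated in the abstract:

```latex
% Classical 1-D SHPB data reduction, assuming specimen stress equilibrium.
\begin{align}
\dot{\varepsilon}_s(t) &= -\frac{2 c_0}{L_s}\,\varepsilon_r(t), \\
\varepsilon_s(t) &= -\frac{2 c_0}{L_s}\int_0^t \varepsilon_r(\tau)\,\mathrm{d}\tau, \\
\sigma_s(t) &= E_b\,\frac{A_b}{A_s}\,\varepsilon_t(t),
\end{align}
```

where $\varepsilon_r$ and $\varepsilon_t$ are the reflected and transmitted bar strains, $c_0$ and $E_b$ are the bar wave speed and elastic modulus, $A_b$ and $A_s$ are the bar and specimen cross-sections, and $L_s$ is the specimen gauge length. The reported 372 to 1030 s⁻¹ range is set by the striker velocity through the reflected-wave amplitude in the first relation.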

Keywords: adhesive bonded joints, Hopkinson bars, out-of-plane tests, dynamic compression properties, damage mechanisms, heat dissipation

Procedia PDF Downloads 204
995 Phenolic Composition of Wines from Cultivar Carménère during Aging with Inserts to Barrels

Authors: E. Obreque-Slier, P. Osorio-Umaña, G. Vidal-Acevedo, A. Peña-Neira, M. Medel-Marabolí

Abstract:

Sensory and nutraceutical characteristics of a wine are determined by different chemical compounds, such as organic acids, sugars, alcohols, polysaccharides, aromas, and polyphenols. Polyphenols are secondary metabolites that are associated with the prevention of several pathologies and are responsible for color, aroma, bitterness, and astringency in wines. These compounds come from the grapes and from the wood during aging in barrels, the wood format most widely used in wine production. However, barrels are a high-cost input with a limited useful life (3-4 years). For this reason, oenological products have been developed to renew barrels and extend their useful life by some years. These formats are being adopted slowly because limited information exists about their effect on the chemical characteristics of wine. The objective of the study was to evaluate the effect of different barrel renewal systems (staves and zigzag) on the polyphenolic characteristics of a Carménère wine (Vitis vinifera), an emblematic cultivar of Chile. A completely randomized experimental design with 5 treatments and three replicates per treatment was used. The treatments were: new barrels (T0), barrels used for 4 years (T1), scraped used barrels (T2), used barrels with staves (T3), and used barrels with zigzag inserts (T4). The study was performed over 12 months, and different spectrophotometric parameters (total phenols, anthocyanins, and total tannins) and HPLC-DAD measurements (low-molecular-weight phenols) were evaluated. The wood inputs were donated by Toneleria Nacional and came from the same production batch. The total phenol content increased significantly after 40 days, while the total tannin concentration decreased gradually during the study. The anthocyanin concentration increased after 120 days of the assay in all treatments. 
Comparatively, the wine of T2 presented the lowest values of these polyphenols, while T0 and T4 presented the highest total phenol contents. T1 also presented the highest values of total tannins relative to the other treatments in some samples. The low-molecular-weight phenolic compounds identified by HPLC-DAD were 7 flavonoids (epigallocatechin, catechin, procyanidin gallate, epicatechin, quercetin, rutin, and myricetin) and 14 non-flavonoids (gallic, protocatechuic, hydroxybenzoic, trans-coutaric, vanillic, caffeic, syringic, p-coumaric, and ellagic acids; tyrosol, vanillin, syringaldehyde, trans-resveratrol, and cis-resveratrol). Tyrosol was the most abundant compound, whereas ellagic acid was the least abundant in the samples. Comparatively, the wines of T2 showed the lowest concentrations of flavonoid and non-flavonoid phenols during the study. In contrast, the wines of T1, T3, and T4 presented the highest contents of non-flavonoid polyphenols. In summary, the use of barrel renewal inserts (zigzag and staves) is an interesting alternative that can emulate the contribution of polyphenols from the barrel to the wine.

Keywords: barrels, oak wood aging, polyphenols, red wine

Procedia PDF Downloads 183
994 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)

Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley

Abstract:

PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD, at an economic and social cost of over $20 billion per annum. Despite health outcomes that are significantly worse than men's, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women's self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because, although women's ADHD symptoms are markedly different from those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective for 'thriving' rather than just 'hiding'. By investigating the health service use, self-care, and physical activity of women with ADHD, this research aligns with a priority research area identified by the November 2023 senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experiences across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically. 
This project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved in some self-directed approaches pursued in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating the findings. These resources will reach a wide audience through ADHD Australia's extensive national networks. We will collaborate with Charles Sturt's Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.

Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health

Procedia PDF Downloads 61
993 The Impact of Economic Status on Health Status in the Context of Bangladesh

Authors: Md. S. Sabuz

Abstract:

Bangladesh, a South Asian developing country, has achieved a remarkable breakthrough in health indicators during the last four decades despite immense income inequality. This phenomenon coexists with the puzzling exclusion of marginalized people from health care facilities, and the persistence of this exclusion of the disadvantaged remains troubling. Exclusion stems from occupational inferiority, pay and wage differences, educational backwardness, and gender disparity, through to urban-rural complexity, and it prevents the underprivileged from seeking and availing themselves of health services. Evidence from Bangladesh shows that many sick people prefer to die at home without seeking medical services because they were previously treated poorly; not because the medical facilities were inadequate or antiquated, but because their socio-economic class consigned them to indifferent treatment. Furthermore, the government and policymakers have placed enormous emphasis on infrastructural development and achieving health indicators instead of ensuring quality services and the inclusion of people from all spheres. It is therefore high time to address these issues and examine the impact of economic status on health status from a sociological perspective. The objective of this study is to assess and explore the impact of economic status, for instance occupational status and pay and wage variables, on health status in the context of Bangladesh. The hypothesis is that a significant number of factors affecting economic status ultimately shape health status, with acute income inequality as a prominent factor. Illiteracy, gender disparity, remoteness, distrust of services, high costs, superstition, etc. are the dominant indicators behind the economic factors influencing health status. Qualitative and quantitative approaches were chosen to accomplish the research objectives. 
Secondary sources of data will be used to conduct the study. Surveys will be conducted among people who have used health care facilities and people from different socio-economic and cultural backgrounds. Focus group discussions will be conducted to acquire data from citizens of different cultures and regions. The findings show that 48% of people from disadvantaged communities have been deprived of proper health care facilities. The general reasons behind this are the high cost of medicines and other equipment, and a significant number of people are unaware of the appropriate facilities. It was found that socio-economic variables are the main influential factors driving both the economic dimension and health status. Above all, regional variables and gender dimensions have an enormous effect on determining the health status of an individual or community. Amid many positive achievements, for example a decrease in the child mortality rate and an increase in child immunization programs, the inclusion of all classes of people in health care facilities has been overshadowed in Bangladesh. This phenomenon, along with socio-economic and cultural factors, significantly undermines the quality and inclusiveness of people's health status.

Keywords: cultural context of health, economic status, gender and health, rural health care

Procedia PDF Downloads 199
992 Ficus Microcarpa Fruit Derived Iron Oxide Nanomaterials and Its Anti-bacterial, Antioxidant and Anticancer Efficacy

Authors: Fuad Abdullah Alatawi

Abstract:

Microbial infection-based diseases are a significant public health issue around the world, particularly as antibiotic-resistant bacterial strains evolve. In this research, we explored the antibacterial and anticancer potency of iron oxide (Fe₂O₃) nanoparticles prepared from F. microcarpa fruit extract. The chemical composition of the F. microcarpa fruit extract, which served as the reducing and capping agent for nanoparticle synthesis, was examined by GC-MS/MS analysis. The prepared nanoparticles were then characterized by various biophysical techniques, including X-ray powder diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), UV-Vis spectroscopy, transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDAX), and dynamic light scattering (DLS). The antioxidant capacity of the fruit extract was determined through 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), ferric reducing antioxidant power (FRAP), and superoxide dismutase (SOD) assays. Furthermore, the cytotoxic activity of the Fe₂O₃ NPs was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) test on MCF-7 cells. In the antibacterial assay, lethal doses of the Fe₂O₃ NPs effectively inhibited the growth of gram-negative and gram-positive bacteria; surface damage, ROS production, and protein leakage are the antibacterial mechanisms of Fe₂O₃ NPs. Concerning antioxidant activity, the fruit extract of F. microcarpa had strong antioxidant properties, as confirmed by the DPPH, ABTS, FRAP, and SOD assays. In addition, the F. microcarpa-derived iron oxide nanomaterials greatly reduced the viability of MCF-7 cells. The GC-MS/MS analysis revealed the presence of 25 main bioactive compounds in the F. microcarpa extract. Overall, the findings of this research reveal that F. microcarpa-derived Fe₂O₃ nanoparticles could be employed as an alternative therapeutic agent to treat microbial infections and breast cancer in humans.
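The DPPH assay mentioned above is usually quantified as percent inhibition of the radical's absorbance at 517 nm. The abstract does not give its exact calculation, so the following is a minimal sketch of the common formula, with hypothetical absorbance readings:

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent radical-scavenging activity from DPPH absorbance readings.

    a_control: absorbance of DPPH solution without extract (at 517 nm)
    a_sample:  absorbance after incubation with the extract
    """
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical readings: control 0.80, extract-treated 0.26.
print(dpph_inhibition(0.80, 0.26))  # -> 67.5
```

In practice these percentages are measured over a concentration series to report an IC50 (the extract concentration scavenging 50% of the radical), which is how antioxidant strength is typically compared across extracts.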

Keywords: ficus microcarpa, iron oxide, antibacterial activity, cytotoxicity

Procedia PDF Downloads 106
991 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are prized for their efficiency, reliability, and adaptability. Most research and development to date has been directed toward high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines, the requirements are altogether different. These engines are mostly used in the maritime and agriculture industries, in static engines, compressor engines, etc. High-torque low-speed engines are often neglected and are notorious for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in high-torque low-speed diesel engines is of great importance. Evaporation in the combustion chamber has a pronounced effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high-torque low-speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations. The O'Rourke model is used to model the evaporation phases. The results obtained show a significant effect on the efficiency of the engine. The evaporation rate of a fuel droplet increases with increasing vapor pressure, and an appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases. 
This increase in efficiency is due to the fact that droplet size is reduced and vapor pressure is increased in the engine cylinder.
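As background for the droplet-shrinkage result above, the classical quasi-steady picture of droplet evaporation is the d²-law, with convective enhancement commonly introduced through the Ranz-Marshall correlation. These are standard textbook relations offered here for orientation only; the paper itself uses the O'Rourke spray model:

```latex
% d-squared law and Ranz-Marshall convective correction (quasi-steady).
\begin{align}
d^2(t) &= d_0^2 - K\,t, \\
\mathrm{Nu} &= 2 + 0.6\,\mathrm{Re}_d^{1/2}\,\mathrm{Pr}^{1/3},
\end{align}
```

where $d_0$ is the initial droplet diameter and $K$ is the evaporation-rate constant, which grows with the fuel vapor pressure. The correlation shows how convection ($\mathrm{Re}_d > 0$) raises the droplet Nusselt number above the quiescent value of 2, accelerating heat transfer and shrinkage, which is consistent with the reduction in droplet size reported when convective heat effects are added.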

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 330
990 Weakly Non-Linear Stability Analysis of Newtonian Liquids and Nanoliquids in Shallow, Square and Tall High-Porosity Enclosures

Authors: Pradeep G. Siddheshwar, K. M. Lakshmi

Abstract:

The present study deals with weakly non-linear stability analysis of Rayleigh-Benard-Brinkman convection in nanoliquid-saturated porous enclosures. The modified-Buongiorno-Brinkman model (MBBM) is used for the conservation of linear momentum in a nanoliquid-saturated-porous medium under the assumption of Boussinesq approximation. Thermal equilibrium is imposed between the base liquid and the nanoparticles. The thermophysical properties of nanoliquid are modeled using phenomenological laws and mixture theory. The fifth-order Lorenz model is derived for the problem and is then reduced to the first-order Ginzburg-Landau equation (GLE) using the multi-scale method. The analytical solution of the GLE for the amplitude is then used to quantify the heat transport in closed form, in terms of the Nusselt number. It is found that addition of dilute concentration of nanoparticles significantly enhances the heat transport and the dominant reason for the same is the high thermal conductivity of the nanoliquid in comparison to that of the base liquid. This aspect of nanoliquids helps in speedy removal of heat. The porous medium serves the purpose of retainment of energy in the system due to its low thermal conductivity. The present model helps in making a unified study for obtaining the results for base liquid, nanoliquid, base liquid-saturated porous medium and nanoliquid-saturated porous medium. Three different types of enclosures are considered for the study by taking different values of aspect ratio, and it is observed that heat transport in tall porous enclosure is maximum while that of shallow is the least. Detailed discussion is also made on estimating heat transport for different volume fractions of nanoparticles. Results of single-phase model are shown to be a limiting case of the present study. The study is made for three boundary combinations, viz., free-free, rigid-rigid and rigid-free.
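The reduction chain described above (fifth-order Lorenz model, then the first-order Ginzburg-Landau equation, then a closed-form Nusselt number) can be sketched in generic form. The coefficients $Q_1$, $Q_2$, and $c$ below are placeholders for the parameter-dependent expressions derived in the paper, not the paper's actual coefficients:

```latex
% Generic real Ginzburg-Landau amplitude equation and its steady state.
\begin{align}
\frac{\mathrm{d}A}{\mathrm{d}\tau} &= Q_1 A - Q_2 A^3,
\qquad A_\infty^2 = \frac{Q_1}{Q_2}, \\
\mathrm{Nu} &= 1 + c\,A_\infty^2,
\end{align}
```

so that above onset ($Q_1 > 0$) the amplitude saturates at $A_\infty$ and the Nusselt number exceeds the pure-conduction value of 1 by an amount proportional to the squared amplitude; the thermophysical properties of the nanoliquid and porous matrix enter through $Q_1$, $Q_2$, and $c$.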

Keywords: Buongiorno model, Ginzburg-Landau equation, Lorenz equations, porous medium

Procedia PDF Downloads 312
989 Microwave-Assisted Alginate Extraction from Portuguese Saccorhiza polyschides – Influence of Acid Pretreatment

Authors: Mário Silva, Filipa Gomes, Filipa Oliveira, Simone Morais, Cristina Delerue-Matos

Abstract:

Brown seaweeds are abundant along the Portuguese coastline and represent an almost unexploited marine economic resource. One of the most common species, easily available for harvesting on the northwest coast, is Saccorhiza polyschides, which grows on the lower shore and coastal rocky reefs. It is used almost exclusively by local farmers as a natural fertilizer, but it contains a substantial amount of valuable compounds, particularly alginates, natural biopolymers of high interest for many industrial applications. Alginates are natural polysaccharides present in the cell walls of brown seaweed; they are highly biocompatible, with particular properties that make them of high interest for the food, biotechnology, cosmetics, and pharmaceutical industries. Conventional extraction processes are based on thermal treatment; they are lengthy and consume large amounts of energy and solvents. In recent years, microwave-assisted extraction (MAE) has shown enormous potential to overcome the major drawbacks of conventional thermal and/or solvent-based plant material extraction techniques, and it has been successfully applied to the extraction of agar, fucoidans, and alginates. In the present study, the acid pretreatment of the brown seaweed Saccorhiza polyschides for subsequent microwave-assisted extraction of alginate was optimized. Seaweeds were collected in the northwest Portuguese coastal waters of the Atlantic Ocean between May and August 2014. Experimental design was used to assess the effect of temperature and acid pretreatment time on alginate extraction. Response surface methodology allowed the determination of the optimum pretreatment conditions: 40 mL of 0.1 M HCl per g of dried seaweed with constant stirring at 20 °C for 14 h. The optimal acid pretreatment conditions significantly enhanced the MAE of alginates from Saccorhiza polyschides, thus contributing to the development of a viable, more environmentally friendly alternative to conventional processes.
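The response-surface step described above amounts to fitting a second-order polynomial in the two factors (temperature, pretreatment time) and locating its optimum. A minimal sketch of that fit follows; the design points and coefficients are synthetic, invented for illustration, not the paper's data:

```python
import numpy as np

def design_matrix(x1, x2):
    # Second-order RSM model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
    #                               + b11*x1^2 + b22*x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Central-composite-style design points in coded units (synthetic).
x1 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0, -1.41, 1.41, 0.0, 0.0])
x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.41, 1.41])

# Synthetic response generated from known coefficients (noise-free).
true_b = np.array([50.0, 2.0, -3.0, 1.5, -4.0, -2.5])
y = design_matrix(x1, x2) @ true_b

# Ordinary least squares recovers the surface, whose stationary point
# gives the predicted optimum operating conditions.
b, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
print(np.round(b, 6))
```

With real data the fitted coefficients carry noise, so RSM practice also checks lack-of-fit and the significance of each term before trusting the predicted optimum.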

Keywords: acid pretreatment, alginate, brown seaweed, microwave-assisted extraction, response surface methodology

Procedia PDF Downloads 362
988 Mitigation of Lithium-ion Battery Thermal Runaway Propagation Through the Use of Phase Change Materials Containing Expanded Graphite

Authors: Jayson Cheyne, David Butler, Iain Bomphray

Abstract:

In recent years, lithium-ion batteries have been used increasingly for electric vehicles and large energy storage systems due to their high power density and long lifespan. Despite this, thermal runaway remains a significant safety problem because of its uncontrollable and irreversible nature, which can lead to fires and explosions. In large-scale lithium-ion packs and modules, thermal runaway propagation between cells can escalate fire hazards and cause significant damage; thus, safety measures are required to mitigate it. The current research explores composite phase change materials (PCMs) containing expanded graphite (EG) for thermal runaway mitigation. PCMs are an area of significant interest for battery thermal management due to their ability to absorb substantial quantities of heat during phase change. Moreover, the introduction of EG can support heat transfer from the cells to the PCM (owing to its high thermal conductivity) and provide shape stability to the PCM during phase change. During the research, a thermal model was established for an array of 16 cylindrical cells to simulate heat dissipation with and without the composite PCM. Two conditions were modeled: the behavior during charge/discharge cycles (i.e., regular operation) and thermal runaway. Furthermore, parameters including cell spacing, composite PCM thickness, and EG weight percentage (wt%) were varied to establish the optimal material parameters for thermal runaway mitigation and effective thermal management. Although numerical modeling is still ongoing, initial findings suggest that a 3 mm PCM containing 15 wt% EG can effectively suppress thermal runaway propagation while maintaining shape stability. The next step in the research is to validate the model through controlled experimental tests.
Additionally, with the perceived fire safety concerns relating to PCM materials, fire safety tests, including UL-94 and Limiting Oxygen Index (LOI), shall be conducted to explore the flammability risk.

Keywords: battery safety, electric vehicles, phase change materials, thermal management, thermal runaway

Procedia PDF Downloads 119
987 Delineation of Oil-Polluted Sites in Ibeno LGA, Nigeria

Authors: Ime R. Udotong, Ofonime U. M. John, Justina I. R. Udotong

Abstract:

Ibeno, Nigeria hosts the operational base of Mobil Producing Nigeria Unlimited (MPNU), a subsidiary of ExxonMobil and currently the highest oil and condensate producer in Nigeria. Besides MPNU, other multinational oil companies, such as Shell Petroleum Development Company Ltd, Elf Petroleum Nigeria Ltd, and Nigerian Agip Energy, a subsidiary of ENI E&P, operate onshore, on the continental shelf, and in the deep offshore of the Atlantic Ocean in Ibeno, Nigeria. This study was designed to survey the oil-impacted sites in Ibeno, Nigeria. A combination of electrical resistivity (ER), ground-penetrating radar (GPR), and physico-chemical as well as microbiological characterization of soil and water samples from the area was carried out. The results obtained revealed that this environment has been contaminated by past crude oil spills, as observed from the significant concentrations of THC, BTEX, and heavy metals in the environment. Also, high resistivity values and GPR profiles clearly showing the distribution, thickness, and lateral extent of hydrocarbon contamination, as represented in the radargram reflector tones, corroborate previous significant oil input. Contamination varied from slight to high, indicating substantial attenuation of the crude oil contamination over time. Hydrocarbon pollution of the study area was confirmed by the results of the soil and water physico-chemical and microbiological analyses. The levels of THC contamination observed in this study are indicative of high levels of crude oil contamination.
Moreover, the relatively lower resistivities recorded outside the impacted areas compared to those within them, the 3-D Cartesian images of the oil contaminant plume (depicted in red, light brown, and magenta for high, low, and very low oil-impacted areas, respectively), and the counts of hydrocarbonoclastic microorganisms in excess of 1% all confirm significant recent pollution of the study area.

Keywords: oil-polluted sites, physico-chemical analyses, microbiological characterization, geotechnical investigations, total hydrocarbon content

Procedia PDF Downloads 382
986 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first sets the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second deals with the assignment of products to workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load on the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin.
The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the interior maximum of the workload is taken across all workstations in the center and the exterior minimum is taken across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
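The min-max balancing objective can be illustrated with a small greedy sketch, the classic longest-processing-time (LPT) heuristic, which is only a toy stand-in for the paper's algorithms; the product loads and station count below are invented for illustration.

```python
def assign_products(loads, n_stations):
    """Greedy min-max balancing (LPT heuristic): assign each product,
    heaviest first, to the currently least-loaded workstation, so that
    the maximum workstation workload stays small."""
    stations = [0.0] * n_stations          # current workload per station
    assignment = {}                        # product index -> station index
    for pid, load in sorted(enumerate(loads), key=lambda x: -x[1]):
        s = min(range(n_stations), key=lambda i: stations[i])
        stations[s] += load
        assignment[pid] = s
    return assignment, max(stations)

# Six hypothetical product workloads spread over three workstations:
assignment, max_load = assign_products([7, 5, 4, 3, 2, 1], 3)  # max_load -> 8.0
```

LPT is a well-known approximation for makespan-style objectives; the paper's constrained flow-rack variant would add capacity and budget checks inside the loop.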

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 218
985 Zinc Oxide Nanoparticle-Doped Poly(8-Anilino-1-Naphthalene Sulphonic Acid)/Nat Nanobiosensors for TB Drugs

Authors: Rachel Fanelwa Ajayi, Anovuyo Jonnas, Emmanuel I. Iwuoha

Abstract:

Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis, which has a predilection for lung tissue due to its rich oxygen supply. The mycobacterial cell has a unique innate characteristic that allows it to resist human immune systems and drug treatments; hence, it is one of the most difficult of all bacterial infections to treat, let alone cure. At the same time, multi-drug-resistant TB (MDR-TB), caused by poorly managed TB treatment, is a growing problem that requires the administration of expensive and less effective second-line drugs, which demand a much longer treatment duration than first-line drugs. Therefore, to address the issues of patients falling ill as a result of inappropriate dosing and inadequate treatment administration, a device with a fast response time coupled with enhanced performance and increased sensitivity is essential. This study involved the synthesis of electroactive platforms for application in the development of nano-biosensors suitable for the appropriate dosing of clinically diagnosed patients by promptly quantifying levels of the TB drug isoniazid. These nano-biosensor systems were developed on gold surfaces using the enzyme N-acetyltransferase 2 coupled to cysteamine-modified poly(8-anilino-1-naphthalene sulphonic acid)/zinc oxide nanocomposites. The morphology of the ZnO nanoparticles, the PANSA/ZnO nanocomposite, and the nano-biosensor platforms was characterized using High-Resolution Transmission Electron Microscopy (HRTEM) and High-Resolution Scanning Electron Microscopy (HRSEM). The elemental composition of the developed nanocomposites and nano-biosensors was studied using Fourier Transform Infrared Spectroscopy (FTIR) and Energy-Dispersive X-Ray Spectroscopy (EDX). The electrochemical studies showed an increase in electron conductivity for the PANSA/ZnO nanocomposite, an indication that it is suitable as a platform for biosensor development.

Keywords: N-acetyltransferase 2, isoniazid, tuberculosis, zinc oxide

Procedia PDF Downloads 360
984 Adsorption and Desorption Behavior of Ionic and Nonionic Surfactants on Polymer Surfaces

Authors: Giulia Magi Meconi, Nicholas Ballard, José M. Asua, Ronen Zangi

Abstract:

Experimental and computational studies are combined to elucidate the adsorption properties of ionic and nonionic surfactants on a hydrophobic polymer surface such as poly(styrene). To represent these two types of surfactants, sodium dodecyl sulfate and poly(ethylene glycol)-block-poly(ethylene), commonly utilized in emulsion polymerization, were chosen. By applying quartz crystal microbalance with dissipation monitoring, it is found that, at low surfactant concentrations, it is easier (as measured by rate) to desorb ionic surfactants than nonionic surfactants. From molecular dynamics simulations, the effective attractive force of the nonionic surfactants to the surface increases as their concentration decreases, whereas the ionic surfactant exhibits a mildly opposite trend. The contrasting behavior of ionic and nonionic surfactants critically relies on two observations obtained from the simulations. The first is that there is a large degree of interweaving between head and tail groups in the adsorbed layer formed by the nonionic surfactant (PEO/PE systems). The second is that water molecules penetrate this layer. In the disordered layer these nonionic surfactants generate at the surface, only oxygens of the head groups present at the interface with the water phase, or oxygens next to the penetrating waters, can form hydrogen bonds. Oxygens inside this layer lose this favorable energy, with a magnitude that increases with the surfactant density at the interface. This reduced stability of the surfactants diminishes their driving force for adsorption, in accordance with the experimental results on the dynamics of surfactant desorption. Ionic surfactants assemble into an ordered structure, and their attraction to the surface is even slightly augmented at higher surfactant concentration, in agreement with the experimentally determined adsorption isotherm.
The reason these two types of surfactants behave differently is that the ionic surfactant has a small head group that is strongly hydrophilic, whereas the head groups of the nonionic surfactants are large and only weakly attracted to water.

Keywords: emulsion polymerization process, molecular dynamics simulations, polymer surface, surfactants adsorption

Procedia PDF Downloads 330
983 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has taken sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly, and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In a future large-scale sensor network, the collected data will be far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process the collected data locally on board (this may instead occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation of smart sensor networks.
For example, in a water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or to switch on sleeping mode when it is not. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland. The objective is to use this deployed river level sensor network as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
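As a toy illustration of the simple-thresholding/statistical smart sensing idea (not the MoPBAS method or the deployed system's actual logic), the sketch below flags a water level reading that deviates strongly from the rolling statistics of recent samples; the window size, the threshold factor k, and the data are all assumptions.

```python
import statistics

def detect_anomalies(levels, window=5, k=3.0):
    """Flag readings deviating more than k standard deviations from
    the rolling mean of the previous `window` samples (on-board
    pre-processing a sensor node could perform before transmitting)."""
    flags = []
    for i, x in enumerate(levels):
        hist = levels[max(0, i - window):i]
        if len(hist) < 2:
            flags.append(False)          # not enough history yet
            continue
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1e-9  # guard against zero spread
        flags.append(abs(x - mu) > k * sd)
    return flags

# Hypothetical river levels in metres; the final jump is flagged:
flags = detect_anomalies([1.0, 1.1, 0.9, 1.0, 1.1, 5.0])
```

A node running such a check could transmit only flagged readings (or raise its sampling rate when one occurs), reducing communication load in the way the smart sensing concept describes.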

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 372
982 Digital Twin for University Campus: Workflow, Applications and Benefits

Authors: Frederico Fialho Teixeira, Islam Mashaly, Maryam Shafiei, Jurij Karlovsek

Abstract:

The ubiquity of data gathering and smart technologies, advancements in virtual technologies, and the development of the Internet of Things (IoT) have created urgent demands for frameworks and efficient workflows for data collection, visualisation, and analysis. Digital twins, at scales ranging from the city down to the building, allow data from different sources to be brought together to generate fundamental and illuminating insights for the management of current facilities and the lifecycle of amenities, as well as for improving the performance of current and future designs. Over the past two decades, there has been growing interest in digital twins and their applications at the city and building scales. Most such studies look at the urban environment through a homogeneous or generalist lens and lack specificity regarding the particular characteristics and identities that define an urban university campus. Bridging this knowledge gap, this paper offers a framework for developing a digital twin for a university campus that, with some modifications, could provide insights for any large-scale digital twin setting, such as towns and cities. It showcases how currently unused data could be purposefully combined, interpolated, and visualised to produce analysis-ready data (such as flood or energy simulations or functional and occupancy maps), highlighting the potential applications of such a framework for campus planning and policymaking. The research integrates campus-level data layers into one spatial information repository and casts light on critical data clusters for the digital twin at the campus level. The paper also seeks to raise insightful and directive questions on how a campus digital twin can be extrapolated to a city-scale digital twin.
The outcomes of the paper thus inform future projects for the development of large-scale digital twins, as well as urban and architectural researchers, on potential applications of digital twins in future design, management, and sustainable planning: predicting problems, calculating risks, decreasing management costs, and improving performance.

Keywords: digital twin, smart campus, framework, data collection, point cloud

Procedia PDF Downloads 60
981 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold that there exists a kind of scalar implicature that can be processed without context and enjoys a psychological privilege over scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need for relevance in discourse. However, Katsos' experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than that with ad hoc scales. Neither default accounts nor Relevance Theory can fully explain this result, which raises two questions: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? Do participants generate scalar implicatures with ad hoc scales, or do they just compare semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be addressed by replicating Katsos' experiment. Test materials will be shown via PowerPoint in the form of pictures, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. The reading time of the target parts, i.e., the words containing scalar implicatures, will be recorded.
We presume that in the lexical scale group, standardized pragmatic mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the ad hoc scale group, the scalar implicature may hardly be generated without the support of a fixed mental context of scale; whether the new input is informative or not will then not matter, and the reading time of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and contextual factors in scalar implicature processing. Based on our experiments, we might assume that no single dominant processing paradigm is plausible; rather, in the processing of scalar implicature, the semantic and pragmatic interpretations may interact dynamically in the mind. For the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let the possible default or standardized paradigm override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 306
980 Using Group Concept Mapping to Identify a Pharmacy-Based Trigger Tool to Detect Adverse Drug Events

Authors: Rodchares Hanrinth, Theerapong Srisil, Peeraya Sriphong, Pawich Paktipat

Abstract:

The trigger tool is a low-cost, low-tech method to detect adverse events through clues called triggers. The Institute for Healthcare Improvement (IHI) has developed the Global Trigger Tool for measuring and preventing adverse events; however, this tool is not specific to adverse drug events (ADEs), so a pharmacy-based trigger tool is needed to detect them. Group concept mapping is an effective method for conceptualizing various ideas from diverse stakeholders, and this technique was used to identify pharmacy-based triggers for detecting ADEs. The aim of this study was to involve pharmacists in conceptualizing, developing, and prioritizing a feasible trigger tool to detect adverse drug events in a provincial hospital in the northeastern part of Thailand. The study was conducted during the 6-month period between April 1 and September 30, 2017. The participants were 20 pharmacists (17 hospital pharmacists and 3 pharmacy lecturers) engaging in three concept mapping workshops. In these meetings, the concept mapping technique created by Trochim, a highly structured qualitative group technique for generating and sharing ideas, was used to produce and organize the participants' views on which triggers had the potential to detect ADEs. During the workshops, the participants (n = 20) were asked to individually rate the feasibility and potential of each trigger and to group the triggers into relevant categories to enable multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the trigger list, cluster list, point map, point rating map, cluster map, and cluster rating map. The three workshops together resulted in 21 different triggers structured in a framework of 5 clusters: drug allergy, drug-induced diseases, dosage adjustment in renal disease, potassium-related concerns, and drug overdose.
The first cluster, drug allergy, includes triggers such as a doctor's order for dexamethasone injection combined with chlorpheniramine injection. The diagnosis of drug-induced hepatitis in a patient taking anti-tuberculosis drugs is a trigger in the 'drug-induced diseases' cluster. For the third cluster, a doctor's order for enalapril combined with ibuprofen in a patient with chronic kidney disease is an example of a trigger. A doctor's order for digoxin in a patient with hypokalemia is a trigger in the 'potassium-related concerns' cluster. Finally, a doctor's order for naloxone in narcotic overdose was classified as a trigger in the 'drug overdose' cluster. This study generated triggers that are similar to some in the IHI Global Trigger Tool, especially in the medication module (e.g., drug allergy and drug overdose). However, some specific aspects of this tool, including drug-induced diseases, dosage adjustment in renal disease, and potassium-related concerns, are not contained in any existing trigger tools. The pharmacy-based trigger tool is suitable for hospital pharmacists to detect potential adverse drug events using triggers as clues.
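In software terms, triggers of the co-prescription kind could be screened with a simple rule check over a patient's order list. The sketch below is a hypothetical illustration only: it encodes two of the study's example triggers as drug-set rules, with invented names and simplified drug strings.

```python
# Hypothetical encoding of two of the study's triggers as co-occurrence rules;
# rule names and drug strings are illustrative, not the study's coding scheme.
TRIGGERS = {
    "possible drug allergy": {"dexamethasone inj", "chlorpheniramine inj"},
    "NSAID with ACE inhibitor in renal disease": {"enalapril", "ibuprofen"},
}

def screen_orders(orders):
    """Return the names of triggers whose full drug set appears in the orders."""
    order_set = set(orders)
    return [name for name, drugs in TRIGGERS.items() if drugs <= order_set]

hits = screen_orders(["enalapril", "ibuprofen", "paracetamol"])
# hits -> ["NSAID with ACE inhibitor in renal disease"]
```

Real triggers such as "digoxin in a patient with hypokalemia" would additionally need laboratory values and diagnoses, not just the order list.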

Keywords: adverse drug events, concept mapping, hospital, pharmacy-based trigger tool

Procedia PDF Downloads 153
979 Study of the Uncertainty Behaviour of the Specific Total Enthalpy of the Hypersonic Plasma Wind Tunnel Scirocco at the Italian Aerospace Research Center

Authors: Adolfo Martucci, Iulian Mihai

Abstract:

By means of the expansion through a conical nozzle and the low pressure inside the Test Chamber, a large, stable hypersonic flow is established for a duration of up to 30 minutes. Downstream of the Test Chamber, the diffuser reduces the flow velocity to subsonic values, and as a consequence the temperature increases again; in order to cool down the flow, a heat exchanger is present at the end of the diffuser. The Vacuum System generates the vacuum conditions necessary for correct hypersonic flow generation, and the DeNOx system, which follows the Vacuum System, reduces the nitrogen oxide concentrations created inside the plasma flow below the limits imposed by Italian law. This very large, powerful, and complex facility allows researchers and engineers to reproduce entire re-entry trajectories of space vehicles into the atmosphere. One of the most important parameters for a hypersonic flowfield representative of re-entry conditions is the specific total enthalpy. This is the whole energy content of the fluid, and it represents how severe the conditions could be around a spacecraft re-entering from a space mission or, in our case, inside a hypersonic wind tunnel. It is possible to reach very high values of enthalpy (up to 45 MJ/kg) that, together with the large allowable size of the models, offer huge possibilities for on-ground experiments in the atmospheric re-entry field. The maximum nozzle exit section diameter is 1950 mm, where Mach numbers far higher than 1 can be reached. The specific total enthalpy is evaluated by means of a number of measurements, each contributing to its value and its uncertainty. The scope of the present paper is the evaluation of the sensitivity of the uncertainty of the specific total enthalpy to all the parameters and measurements involved. The sensors that, if improved, would give the greatest advantages have thus been identified.
Several simulations in Python, using the METAS library and Monte Carlo methods, are presented, together with the results obtained and a discussion of them.
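The Monte Carlo propagation step can be sketched generically: sample each measured input from its assumed Gaussian uncertainty, push the samples through the enthalpy model, and read off the spread of the output. This is not the paper's METAS-based code; the simplified model h0 = cp·T0 + v²/2 and every number below are assumptions for illustration.

```python
import random

def mc_uncertainty(f, means, sigmas, n=100_000, seed=0):
    """Propagate input uncertainties through f by Monte Carlo sampling:
    draw Gaussian inputs, evaluate f, return output mean and std. dev."""
    rng = random.Random(seed)
    samples = [f(*[rng.gauss(m, s) for m, s in zip(means, sigmas)])
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, var ** 0.5

# Illustrative model only: specific total enthalpy from stagnation
# temperature T0 [K] and velocity v [m/s], h0 = cp*T0 + v**2/2, with a
# constant cp (an assumption that does not hold at plasma temperatures).
h = lambda T0, v: 1004.5 * T0 + 0.5 * v ** 2

mean, sd = mc_uncertainty(h, means=[5000.0, 3000.0], sigmas=[50.0, 30.0], n=20_000)
```

Re-running with one input's sigma perturbed at a time gives the sensitivity of the output uncertainty to each sensor, which is the kind of analysis the paper performs.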

Keywords: hypersonic, uncertainty, enthalpy, simulations

Procedia PDF Downloads 81
978 Review and Comparison of Iran's Sixteenth Topic of the Building Code with Water Sector Ranking Systems, Leading to Improved Criteria for the Sixteenth Topic

Authors: O. Fatemi

Abstract:

Considering the growing building construction industry in developing countries, the concept of sustainable development, and the importance of taking care of future generations, codifying building scoring systems based on environmental criteria has always been a subject for discussion. The existing systems cannot be used for all regions for several reasons, including but not limited to variation in regional variables. In this article, the most important global environmental scoring systems, LEED (Leadership in Energy and Environmental Design), BREEAM (Building Research Establishment Environmental Assessment Method), and CASBEE (Comprehensive Assessment System for Built Environment Efficiency), used in the USA, the UK, and Japan, respectively, are discussed and compared, with a special focus on CASBEE, in terms of credit assignment (weighting and scoring systems) as well as the sustainable development criteria in each system. Then, the converging and distinct fields of the foregoing systems are examined in light of the National Iranian Building Code, and the common credits in the said systems that are not mentioned in the National Iranian Building Code are identified. These credits, which generally reflect well-known fundamental principles of sustainable development, may be considered candidate options for an Iranian building environmental scoring system. It is suggested that one of the globally accepted systems be chosen, considering national priorities, to provide an effective method for environmental scoring of buildings; then, some credits can be added or removed, or certain credit scores changed, so that a new scoring system with a new title is eventually developed for the country. Evidently, the building construction industry highly affects the environment, economy, efficiency, and health of the occupants.

Keywords: scoring system, sustainability assessment, water efficiency, national Iranian building code

Procedia PDF Downloads 167
977 Ultrasound-Mediated Separation of Ethanol, Methanol, and Butanol from Their Aqueous Solutions

Authors: Ozan Kahraman, Hao Feng

Abstract:

Ultrasonic atomization (UA) is a useful technique for producing a liquid spray for various processes, such as spray drying. Ultrasound generates small droplets (a few microns in diameter) by disintegration of the liquid via cavitation and/or capillary waves, with low droplet velocity and a narrow droplet size distribution. In recent years, UA has been investigated as an alternative for enabling or enhancing ultrasound-mediated unit operations, such as evaporation, separation, and purification. Previous studies on the UA separation of a solvent from a bulk solution were limited to ethanol-water systems, so more investigations of ultrasound-mediated separation in other liquid systems are needed to elucidate the separation mechanism. This study was undertaken to investigate the effects of the operational parameters on the ultrasound-mediated separation of three miscible liquid pairs: ethanol-, methanol-, and butanol-water. A 2.4 MHz ultrasonic mister with a diameter of 18 mm and a rated power of 24 W was installed on the bottom of a custom-designed cylindrical separation unit. Air was supplied to the unit (3 to 4 L/min) as a carrier gas to collect the mist. The effects of the initial alcohol concentration, viscosity, and temperature (10, 30, and 50 °C) on the atomization rates were evaluated. The alcohol concentration in the collected mist was measured with high-performance liquid chromatography and a refractometer, and the viscosity of the solutions was determined using a Brookfield digital viscometer. The alcohol concentration of the atomized mist was dependent on the feed concentration, feed rate, viscosity, and temperature. Increasing the temperature of the alcohol-water mixtures from 10 to 50 °C increased the vapor pressure of both the alcohols and water, resulting in an increase in the atomization rates but a decrease in the separation efficiency. The alcohol concentration in the mist was higher than that of the alcohol-water equilibrium at all three temperatures.
More importantly, for ethanol, the ethanol concentration in the mist went beyond the azeotropic point, which cannot be achieved by conventional distillation. Ultrasound-mediated separation is a promising non-equilibrium method for separating and purifying alcohols, which may result in significant energy reductions and process intensification.

Keywords: azeotropic mixtures, distillation, evaporation, purification, separation, ultrasonic atomization

Procedia PDF Downloads 165
976 Exercise in Extreme Conditions: Leg Cooling and Fat/Carbohydrate Utilization

Authors: Anastasios Rodis

Abstract:

Background: Case studies of walkers, climbers, and campers exposed to cold and wet conditions without limb water/windproof protection revealed experiences of muscle weakness and fatigue. It is reasonable to assume that part of the fatigue could occur due to an alteration in substrate utilization, since the reduction of performance in extreme cold conditions may be partially explained by higher anaerobic glycolysis, reflecting higher carbohydrate oxidation and an increased accumulation rate of blood lactate. The aim of this study was to assess the effects of pre-exercise lower limb cooling on the substrate utilization rate during sub-maximal exercise. Method: Six male university students (mean (SD): age, 21.3 (1.0) yr; maximal oxygen uptake (VO₂max), 49.6 (3.6) ml.min⁻¹; and percentage of body fat, 13.6 (2.5)%) were examined in random order after either 30 min of cold water (12°C) immersion up to the gluteal fold, utilized as the cooling strategy, or under control conditions (no precooling), with tests separated by a minimum of 7 days. Exercise consisted of 60 min of cycling at 50% VO₂max in a thermoneutral environment of 20°C. Subjects were also required to record a diet diary over the 24 hrs prior to each trial. The means (SD) for the three macronutrients during the day prior to each trial (expressed as a percentage of total energy) were 52 (3)% carbohydrate, 31 (4)% fat, and 17 (2)% protein. Results: The responses to lower limb cooling relative to the control trial during exercise were: 1) carbohydrate (CHO) oxidation and blood lactate (Bₗₐc) concentration were significantly higher (P < 0.05); 2) rectal temperature (Tᵣₑc) was significantly higher (P < 0.05), but skin temperature was significantly lower (P < 0.05); 3) no significant differences were found in blood glucose (Bg), heart rate (HR), or oxygen consumption (VO₂).
Discussion: These data suggest that lower limb cooling prior to submaximal exercise shifts metabolic processes from fat oxidation to CHO oxidation. This shift probably has important implications in a survival scenario: people facing accidental localized cooling of their limbs, whether through wading or falling into cold water or snow, have to rely on CHO availability even if they do not perform high-intensity activity.

Keywords: exercise in wet conditions, leg cooling, outdoors exercise, substrate utilization

Procedia PDF Downloads 430
975 Impact of Traffic Restrictions Due to COVID-19 on Emissions from Freight Transport in Mexico City

Authors: Oscar Nieto-Garzón, Angélica Lozano

Abstract:

In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, such as retailer distribution cost reduction and service improvement, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOX, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and a lack of transportation planning are characteristics that urban areas in developing countries share. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016 – 2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOX decreased 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in all three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOX. The CO/NOX ratio at morning hours is often used as an indicator of mobile-source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOX from cars and light vehicles, while NOX levels from trucks were maintained (or lowered due to the congestion reduction).
At rush hours, traffic was reduced between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in the average speed (75%) would reduce the emissions (CO, NOX, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOX ratio would change due to the lockdown restrictions. However, although there was a significant reduction of traffic, CO/NOX kept its trend, decreasing to 8-9 in 2020. Hence, traffic restrictions had no impact on the CO/NOX ratio, although they did reduce vehicle emissions of CO and NOX. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of emissions generated by vehicles. A comparison of the theoretical data with those observed during the lockdown shows that the real NOX reduction was lower than the theoretical reduction. The reasons could be that there are other sources of NOX emissions, so NOX emissions generated by diesel vehicles would be over-represented, or that CO emissions are underestimated. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
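The morning-hour CO/NOX indicator discussed above is a simple concentration ratio. The sketch below shows one way to compute it, assuming CO is reported in ppm and NOX in ppb (a common convention, but an assumption here); the concentration values are illustrative, not the study's data.

```python
# Sketch: morning-hour CO/NOx ratio as a mobile-source emission indicator.
# Assumes CO in ppm and NOx in ppb; values below are made up for illustration.

def co_nox_ratio(co_ppm, nox_ppb):
    """Dimensionless CO/NOx ratio, with CO converted from ppm to ppb."""
    return (co_ppm * 1000.0) / nox_ppb

# Hypothetical 07:00-09:00 hourly means for one monitoring station.
co_morning = [1.8, 2.1, 1.9]         # CO, ppm
nox_morning = [210.0, 250.0, 230.0]  # NOx, ppb

ratios = [co_nox_ratio(c, n) for c, n in zip(co_morning, nox_morning)]
mean_ratio = sum(ratios) / len(ratios)
print(round(mean_ratio, 1))  # → 8.4, within the 8-9 range reported for 2020
```

With these made-up inputs the ratio falls in the 8-9 band the abstract reports, which is why a flat ratio can coexist with large absolute emission reductions: both pollutants fell together.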

Keywords: COVID-19, emissions, freight transport, Latin American metropolises

Procedia PDF Downloads 126
974 Ramadan as a Model of Intermittent Fasting: Effects on Gut Hormones, Appetite and Body Composition in Diabetes vs. Controls

Authors: Turki J. Alharbi, Jencia Wong, Dennis Yue, Tania P. Markovic, Julie Hetherington, Ted Wu, Belinda Brooks, Radhika Seimon, Alice Gibson, Stephanie L. Silviera, Amanda Sainsbury, Tanya J. Little

Abstract:

Fasting has been practiced for centuries and is incorporated into the practices of different religions, including Islam, whose followers fast intermittently throughout the month of Ramadan. Thus, Ramadan presents a unique model of prolonged intermittent fasting (IF). Despite a growing body of evidence for the cardio-metabolic and endocrine benefits of IF, detailed studies of the effects of IF on these indices in type 2 diabetes are scarce. We studied 5 subjects with type 2 diabetes (T2DM) and 7 healthy controls (C) at baseline (pre) and in the last week of Ramadan (post). Fasting circulating levels of glucose, HbA1c, and lipids, as well as body composition (with DXA) and resting energy expenditure (REE), were measured. Plasma gut hormone levels and appetite responses to a mixed meal were also studied. Data are means±SEM. Ramadan decreased total fat mass (-907±92 g, p=0.001) and trunk fat (-778±190 g, p=0.014) in T2DM but not in controls, without any reductions in lean mass or REE. There was a trend towards a decline in plasma FFA in both groups. Ramadan had no effect on body weight, glycemia, blood pressure, or plasma lipids in either group. In T2DM only, the area under the curve for post-meal plasma ghrelin concentrations increased after Ramadan (pre: 6632±1737 vs. post: 9025±2518 pg/ml.min⁻¹, p=0.045). Despite this increase in orexigenic ghrelin, subjective appetite scores were not altered by Ramadan. Meal-induced plasma concentrations of the satiety hormone pancreatic polypeptide did not change during Ramadan, but were higher in T2DM compared to controls (post: C: 23486±6677 vs. T2DM: 62193±6880 pg/ml.min⁻¹, p=0.003). In conclusion, Ramadan, as a model for IF, appears to have more favourable effects on body composition in T2DM, without adverse effects on metabolic control or subjective appetite. These data suggest that IF may be particularly beneficial in T2DM as a nutritional intervention. Larger studies are warranted.
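The area-under-the-curve comparisons above are typically computed with the trapezoidal rule over the post-meal sampling times. A minimal sketch, with hypothetical sampling times and concentrations (the abstract does not report the individual time points):

```python
# Sketch: trapezoidal AUC for a post-meal hormone time course, as used for
# the ghrelin and pancreatic polypeptide comparisons. Data are hypothetical.

def auc_trapezoid(times_min, conc_pg_ml):
    """AUC in pg/ml·min via the trapezoidal rule over irregular time points."""
    total = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        total += 0.5 * (conc_pg_ml[i] + conc_pg_ml[i - 1]) * dt
    return total

# Hypothetical sampling at 0, 30, 60, 90, 120 min after the mixed meal.
times = [0, 30, 60, 90, 120]
ghrelin = [80.0, 60.0, 55.0, 65.0, 75.0]  # pg/ml

print(auc_trapezoid(times, ghrelin))  # → 7725.0
```

The same routine applied per subject, pre- and post-Ramadan, yields the paired AUC values that the reported p-values compare.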

Keywords: type 2 diabetes, obesity, intermittent fasting, appetite regulating hormones

Procedia PDF Downloads 303
973 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the smallest within-class separability and the largest between-class separability are achieved simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes well. These optimal bandwidths also show the importance of bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of these reciprocals gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by the proposed method, the F-score, and HSIC were 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the accuracies for feature subsets of 10, 20, 50, and 110 features were 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) approaches the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This not only improves classification performance but also reduces the cost of obtaining hyperspectral images.
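The per-band-bandwidth kernel and the reciprocal-based band ranking described above can be sketched as follows. The kernel form shown (an RBF with one bandwidth per feature) and the bandwidth values are illustrative assumptions; the names `generalized_rbf` and `rank_bands` are hypothetical, and in the paper the bandwidths are tuned by a genetic algorithm rather than chosen by hand.

```python
import math

def generalized_rbf(x, y, sigma):
    """Generalized RBF kernel with one bandwidth sigma[d] per feature d
    (an assumed form: exp(-sum_d ((x_d - y_d) / sigma_d)^2))."""
    return math.exp(-sum(((a - b) / s) ** 2 for a, b, s in zip(x, y, sigma)))

def rank_bands(bandwidths):
    """Rank band indices by descending weight 1/bandwidth: a smaller
    tuned bandwidth means a more important band."""
    weights = [1.0 / b for b in bandwidths]
    return sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)

# Hypothetical tuned bandwidths for five spectral bands.
sigma = [0.5, 2.0, 0.25, 1.0, 4.0]
print(rank_bands(sigma))  # → [2, 0, 3, 1, 4]: band 2 is most important
```

Taking the first k indices of this ranking yields the nested feature subsets (10, 20, 50, ... features) whose accuracies the experiments report.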

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 255
972 The Influence of Age and Education on Patients' Attitudes Towards Contraceptives in Rural California

Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal

Abstract:

Contraceptives are an effective public health achievement, allowing for family planning and reducing the risk of sexually transmitted diseases (STDs). California’s rural Central Valley has high rates of teenage pregnancy and STDs. Factors affecting contraceptive usage here may include religious concerns, financial issues, and regional variations in the accessibility and availability of contraceptives. The increasing population and diversity of the Central Valley make understanding the determinants of unintended pregnancy and STDs increasingly nuanced. Patients in California’s Central Valley were surveyed at 6 surgical clinics to assess attitudes toward contraceptives. The questionnaire consisted of demographics and 14 Likert-scale statements investigating patients’ feelings regarding contraceptives. Parametric and non-parametric analyses were performed on the Likert statements, and a correlation matrix was used to evaluate the strength of the relationship between each pair of statements. 76 patients aged 18-75 years completed the questionnaire. 90% of the participants were female, 76% Hispanic, 36% married, 44% had an income between 30-60K, and 83% were of childbearing age. 60% of participants stated they were currently using or had used some type of contraceptive, and 25% had had at least one unplanned pregnancy. The most common types of contraceptives used were condoms (38%) and oral contraceptives (28%). The top reasons for contraceptive usage were prevention of pregnancy (72%), safe sex/prevention of STDs (32%), and regulation of the menstrual cycle (19%). Further analysis of the Likert responses revealed that contraceptive usage increased with approval of contraceptives (x̄ = 3.98, σ = 1.02), partner approval of contraceptives (x̄ = 3.875, σ = 1.16), and reduced anxiety about pregnancy (x̄ = 3.875, σ = 1.23).
Younger females (18-34 years old) agreed more than older females (35-75 years old) with the statement that the cost of contraceptive supplies is too expensive (x̄ = 3.2, σ = 1.4 vs. x̄ = 2.8, σ = 1.3, p < 0.05). Younger females (44%) were also more likely to use short-acting contraceptive methods (oral contraceptives and male condoms) compared to older females (64%), who used long-acting methods (implants/intrauterine devices). 51% of Hispanic females were using some type of contraceptive; of the Hispanic females not using contraceptives, 33% stated having no children, and all planned to have at least one child in the future. 35% of participants had a bachelor's degree. Those with bachelor’s degrees were more likely to use contraceptives (58% vs. 51%, p < 0.05) and less likely to have an unplanned pregnancy (50% vs. 12%, p < 0.01). There is increasing use and awareness of contraceptives among patients in rural settings. Our findings show that younger women and women with higher educational attainment tend to have more positive attitudes towards the use of contraceptives. This work gives physicians an understanding of patients’ concerns about contraceptive methods and offers insight into culturally competent intervention programs that respect individual values.
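The correlation matrix over the 14 Likert statements reduces to pairwise correlations between response vectors. A minimal sketch of one such pairwise (Pearson) correlation, with fabricated 1-5 responses for two of the statements (the study's raw responses are not given in the abstract):

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of Likert responses."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated responses from five participants to two hypothetical statements.
approval = [4, 5, 3, 4, 5]  # "I approve of contraceptives"
partner = [4, 4, 3, 5, 5]   # "My partner approves of contraceptives"

r = pearson(approval, partner)
```

Computing this for every pair of the 14 statements fills the 14×14 correlation matrix; for ordinal Likert data, a rank-based (Spearman) variant would be the usual non-parametric counterpart.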

Keywords: contraceptives, public health, rural California, women of childbearing age

Procedia PDF Downloads 48
971 The Efficacy of Class IV Diode Laser in the Treatment of Patients with Chronic Neck Pain: A Randomized Controlled Trial

Authors: Mohamed Salaheldien Mohamed Alayat, Ahmed Mohamed Elsoudany, Roaa Abdulghani Sroge, Bayan Muteb Aldhahwani

Abstract:

Background: Neck pain is a common illness that can affect an individual’s daily activities. Class IV lasers, with their longer wavelength, can stimulate tissues and penetrate deeper than classic low-level laser therapy. Objectives: The aim of the study was to investigate the efficacy of a class IV diode laser in the treatment of patients with chronic neck pain (CNP). Methods: Fifty-two patients participated in and completed the study. Their mean age (SD) was 50.7 (6.2) years. Patients were randomized into two groups: laser plus exercise (laser + EX) and placebo laser plus exercise (PL + EX). Treatment was performed with a class IV laser in two phases, a scanning phase and a trigger-point phase. Scanning was applied to the posterior neck and shoulder girdle region at 4 J/cm², with a total energy of 300 J delivered to 75 cm² in 4 minutes and 16 seconds. Eight trigger points on the posterior neck were each treated with 4 J/cm², with an application time of 30 seconds. Both groups received exercise two times per week for 4 weeks. Exercises included range-of-motion, isometric, stretching, and isotonic resisted exercises for the cervical extensors, lateral benders, and rotator muscles, with postural correction exercises. The measured variables were pain level, using a visual analogue scale (VAS), and neck functional activity, using the neck disability index (NDI) score. Measurements were taken at baseline and after 4 weeks of treatment. The level of statistical significance was set at p < 0.05. Results: There were significant decreases in post-treatment VAS and NDI in both groups compared to baseline values. Laser + EX effectively decreased VAS (mean difference -6.5, p = 0.01) and NDI (mean difference -41.3, p = 0.01) scores after 4 weeks of treatment compared to PL + EX. Conclusion: Class IV laser combined with exercise is an effective treatment for patients with CNP compared to PL + EX therapy.
The combination of laser + EX effectively increased functional activity and reduced pain after 4 weeks of treatment.
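The scanning-phase dose parameters above are internally consistent, which a quick arithmetic check confirms: energy density times treated area gives the stated total energy, and dividing by the treatment time gives the implied average delivered power (assuming uniform delivery, which the abstract does not state).

```python
# Consistency check of the reported scanning dose: 4 J/cm^2 over 75 cm^2
# delivered in 4 min 16 s. Uniform delivery is assumed for the power figure.

dose_j_per_cm2 = 4.0
area_cm2 = 75.0
duration_s = 4 * 60 + 16  # 256 s

total_energy_j = dose_j_per_cm2 * area_cm2  # 300 J, matching the abstract
avg_power_w = total_energy_j / duration_s   # ~1.17 W average over the area

print(total_energy_j, round(avg_power_w, 2))  # → 300.0 1.17
```

The average power at the tissue is thus well below the device's 24 W class IV rating, consistent with the beam being scanned over the region rather than held stationary.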

Keywords: chronic neck pain, class IV laser, exercises, neck disability index, visual analogue scale

Procedia PDF Downloads 298