Search results for: intermediate input source

1205 Performance Analysis of Three Absorption Heat Pump Cycles, Full and Partial Loads Operations

Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta

Abstract:

Environmental concerns related to global warming and ozone layer depletion, along with the growing worldwide demand for heating and cooling, have brought increasing attention to ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of European primary/final energy use, it has been identified as one of the sectors with the most challenging targets in energy use reduction. Heat pumps are commonly considered a technology able to contribute to the achievement of these targets. The current research focuses on the full-load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To do this, investigations of gas-driven, air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both full-load performance under various temperature conditions and seasonal performance are predicted by means of numerical simulations. It has been considered that small-capacity appliances are usually equipped with fixed-geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that the gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads, and the vapor exchange (VX) cycle is found to achieve the highest efficiency. It is noticed that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Thus, part of the novelty introduced in the paper is the investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309). The overall result is a modest variation in the seasonal performance of the analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor exchange).
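Since the seasonal figures quoted above follow the temperature-bin logic of EN 12309, a minimal sketch of that averaging step is given below. It is not the authors' model; the bin hours, heating loads, and part-load GUE values are illustrative assumptions only.

# Minimal sketch (not the authors' model): seasonal gas utilization efficiency
# estimated with a temperature-bin approach in the spirit of EN 12309.
# Bin hours, heating loads, and part-load GUE values below are illustrative only.
bins = [
    # (outdoor temp degC, hours in bin, heat demand kW, GUE at this condition)
    (-7, 100, 9.0, 1.25),
    (2, 800, 6.0, 1.45),
    (7, 1200, 4.0, 1.55),
    (12, 900, 2.0, 1.60),
]

heat_delivered = sum(hours * load for _, hours, load, _ in bins)        # kWh of heat
gas_input = sum(hours * load / gue for _, hours, load, gue in bins)     # kWh of gas

seasonal_gue = heat_delivered / gas_input
print(f"Seasonal GUE ~ {seasonal_gue:.3f}")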

Keywords: absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle

Procedia PDF Downloads 102
1204 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the joint use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and the effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined; one in which demands grow differentially not only within but also between sectors, and one in which there are limited opportunities to increase consumptive use. In particular, the growth in production of high-value irrigated crops within the case-study basins, together with the rapidly growing urban areas, provides a rich context for examining the general problem of water management at the basin level. At the same time, long-term natural aridity has made the eco-environment of the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The presented methodology is optimization with embedded simulation: the basin-wide simulation of flows, water balances, and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple nodes of origin (reservoirs, aquifers, water courses, etc.) and multiple demand sites along the river, including sites of consumptive use for agricultural, municipal, and industrial purposes, and in-stream water uses on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural, and industrial sectors. This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of the integrated management of water resources and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the process of implementation of the integrated management and development of water resources. Input-output analysis is essential for presenting a theory of the production process, which is based on a particular type of production function. Also, this work presents the Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, which was specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used for formulating and solving the CGE model through the percentage-change approach. GEMPACK automates the process of translating the model specification into a model solution program.
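Because the abstract leans on input-output analysis, the following minimal sketch (not taken from the paper) illustrates the core Leontief calculation behind it: given a technical-coefficients matrix A and a final-demand vector d, gross output x solves x = (I - A)^-1 d. The sector names and all coefficient values are invented for illustration.

import numpy as np

# Hypothetical 3-sector technical-coefficients matrix (rows/cols: agriculture,
# urban/municipal, industry). Entry A[i, j] is the input from sector i needed
# per unit of output of sector j. Values are illustrative only.
A = np.array([
    [0.10, 0.05, 0.20],
    [0.15, 0.10, 0.10],
    [0.05, 0.20, 0.15],
])

# Hypothetical final demand per sector (monetary units).
d = np.array([100.0, 250.0, 180.0])

# Leontief solution: gross output x satisfies x = A @ x + d  =>  x = (I - A)^-1 d
x = np.linalg.solve(np.eye(3) - A, d)
print("Gross output by sector:", np.round(x, 1))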

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 144
1203 India’s Developmental Assistance in Africa: Analyzing India’s Aid and Developmental Projects

Authors: Daniel Gidey, Kunwar Siddarth Dadhwal

Abstract:

By evaluating India's aid systems and ongoing development initiatives, this conference paper sheds light on India's role as a source of developmental assistance in Africa. This research attempts to provide insights into the evolving landscape of foreign aid and development cooperation by focusing on understanding India's motivations and strategy. In recent years, India's connection with Africa has grown significantly, driven by economic, political, and strategic considerations. Through a thorough investigation, this conference paper covers India's many forms of aid, including financial assistance, capacity-building efforts, technical assistance, and infrastructure development projects. The article seeks to establish India's priorities and highlight the possible impacts of its development assistance in Africa by examining the industries and locations of concentration. Using secondary data sources, the investigation delves into the underlying goals of India's aid policy in Africa. It investigates whether India's development assistance is consistent with its broader geopolitical aims, such as access to resources, competing with regional rivals, or strengthening diplomatic ties. Furthermore, the article investigates how India's aid policy combines the ideals of South-South cooperation and mutual development, as well as the ramifications for recipient countries. In addition, the paper assesses the efficacy and sustainability of India's aid operations in Africa. It takes into account the elements that influence their success, the problems they face, and the extent to which they contribute to local development goals, community empowerment, and poverty alleviation. The study also focuses on the accountability systems, transparency, and knowledge transfer aspects of India's development assistance. By providing a detailed examination of India's aid endeavors in Africa, the paper adds to the current literature on international development cooperation. By offering fresh insights into the motives, strategies, and impacts of India's assistance programs, it seeks to enhance understanding of the emerging patterns in South-South cooperation and the complex dynamics of contemporary international aid architecture.

Keywords: India, Africa, developmental assistance, aid projects, South-South cooperation

Procedia PDF Downloads 51
1202 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (scientific name Zea mays), is one of the most widely produced agricultural products and a staple of the food chain and many other agricultural uses. It is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops. Small-scale corn farming may be a household's only source of food in developing nations like Ethiopia. These facts show that the country's demand for this crop is very high, while, conversely, the crop's productivity is very low for a variety of reasons. Maize diseases are among the most damaging factors contributing to this imbalance between the crop's supply and demand. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Maize diseases such as northern leaf blight and cercospora leaf spot have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, a widely used subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning) and loosely mimics the way the human brain processes data. Its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a convnet, is a class of deep learning models widely used for image classification, object detection, face recognition, and other problems. This study uses a CNN as the state-of-the-art algorithm to detect maize diseases from photographs of maize leaves taken with a mobile phone.
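The abstract does not give architecture details, so the following is only a minimal Keras sketch of a small leaf-disease classifier of the kind described; the image size, layer sizes, and the three example class names are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal sketch of a small CNN leaf-disease classifier of the kind the abstract
# describes; image size, layer sizes, and the example classes are assumptions.
NUM_CLASSES = 3  # e.g. healthy, northern leaf blight, cercospora leaf spot

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # RGB photo of a maize leaf
    layers.Rescaling(1.0 / 255),                # normalize pixel values
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use labeled leaf images, e.g.:
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)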

Keywords: CNN, Zea mays, northern leaf blight, cercospora leaf spot

Procedia PDF Downloads 68
1201 Critical Assessment of Herbal Medicine Usage and Efficacy by Pharmacy Students

Authors: Anton V. Dolzhenko, Tahir Mehmood Khan

Abstract:

The ability to make evidence-based decisions is a critically important skill for practicing pharmacists. The development of this skill is incorporated into the pharmacy curriculum. In our study, we aimed to assess pharmacy students' perception of herbal medicines and their ability to evaluate information on herbal medicines professionally. The current Monash University curriculum in Pharmacy does not provide comprehensive study material on herbal medicines, and students must find information on their own, assess its quality, and make a professional decision. In the Pharmacy course, students are trained to apply this process to conventional medicines. In our survey of 93 undergraduate students from years 1-4 of the Pharmacy course at Monash University Malaysia, we found that students' views on herbal medicines are sometimes shaped by common beliefs, which affect their ability to draw evidence-based conclusions regarding the therapeutic potential of herbal medicines. The use of herbal medicines is widespread, and 95.7% of the participating students have prior experience of using them. On a scale of 1 to 10, students rated the importance of acquiring herbal medicine knowledge as 8.1±1.6. More than half (54.9%) agreed that herbal medicines have the same clinical significance as conventional medicines in treating diseases. Even more students agreed that healthcare settings should give equal importance to both conventional and herbal medicine use (80.6%) and that herbal medicines should comply with quality control procedures as strict as those for conventional medicines (84.9%). The latter statement also indicates that students take safety issues associated with the use of herbal medicines seriously. This was further confirmed by 94.6% of students saying that safety and toxicity information on herbs and spices is important to pharmacists and 95.7% of students acknowledging that drug-herb interactions may affect therapeutic outcomes. Only 36.5% of students consider herbal medicines a safer alternative to conventional medicines. The students obtain information on herbal medicines from various sources and media. Most of the students (81.7%) obtain information on herbal medicines from the Internet, and only 20.4% mentioned lectures/workshops/seminars as a source of such information. Therefore, we can conclude that students who have attained the skills for critical assessment of the therapeutic properties of conventional medicines have the potential to use these skills for evidence-based decisions regarding herbal medicines.

Keywords: evidence-based decision, pharmacy education, student perception, traditional medicines

Procedia PDF Downloads 271
1200 Characterization of the Groundwater Aquifers at El Sadat City by Joint Inversion of VES and TEM Data

Authors: Usama Massoud, Abeer A. Kenawy, El-Said A. Ragab, Abbas M. Abbas, Heba M. El-Kosery

Abstract:

Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) surveys have been applied to characterize the groundwater aquifers in the El Sadat industrial area. El-Sadat city is one of the most important industrial cities in Egypt. It was constructed more than three decades ago, about 80 km northwest of Cairo along the Cairo–Alexandria desert road. Groundwater is the main source of the water supplies required for domestic, municipal, and industrial activities in this area due to the lack of surface water sources. So, it is important to maintain this vital resource in order to sustain the development plans of this city. In this study, VES and TEM data were measured at the same 24 stations along three profiles trending NE–SW, parallel to the elongation of the study area. The measuring points were arranged in a grid-like pattern with both the inter-station spacing and the line-to-line distance about 2 km. After performing the necessary processing steps, the VES and TEM data sets were inverted individually to multi-layer models, followed by a joint inversion of both data sets. The joint inversion process succeeded in overcoming the model-equivalence problem encountered in the inversion of the individual data sets. Then, the joint models were used for the construction of a number of cross sections and contour maps showing the lateral and vertical distribution of the geo-electrical parameters in the subsurface medium. Interpretation of the obtained results and correlation with the available geological and hydrogeological information revealed two aquifer systems in the area. The shallow Pleistocene aquifer consists of sand and gravel saturated with fresh water and exhibits a large thickness exceeding 200 m. The deep Pliocene aquifer is composed of clay and sand and shows low resistivity values. The water-bearing layer of the Pleistocene aquifer and the upper surface of the Pliocene aquifer are continuous, and no structural features cut this continuity across the investigated area.

Keywords: El Sadat city, joint inversion, VES, TEM

Procedia PDF Downloads 361
1199 Groundwater Geophysical Studies in the Developed and Sub-Urban BBMP Area, Bangalore, Karnataka, South India

Authors: G. Venkatesha, Urs Samarth, H. K. Ramaraju, Arun Kumar Sharma

Abstract:

Projections state that the total domestic water demand for greater Bangalore would increase from 1,170 MLD in 2010 to 1,336 MLD in 2016. Dependence on groundwater is ever-increasing due to rapid industrialization and urbanization. It is estimated that almost 40% of the population of Bangalore is dependent on groundwater. Due to the unscientific disposal of domestic and industrial waste, groundwater in the city is becoming highly polluted. The scale of this impact will depend mainly upon the water-service infrastructure, the superficial geology, and the regional setting. The quality of groundwater is as important as its quantity. Jointed and fractured granites and gneisses constitute the major aquifer system of the BBMP area. Two new observation borewells were drilled, and a lithology report was prepared. Petrographic analysis (XRD/XRF) and water quality analysis were carried out as per standard methods. Petrographic samples were analysed by collecting a rock chip from the borewell at every 20 ft of depth; most of the samples were similar and were identified as biotite gneiss and schistose amphibolite. Water quality analysis was carried out for individual chemical parameters for the two drilled borewells. The first borewell struck water at 150 ft (total depth 200 ft) and the second at 740 ft (total depth 960 ft). Five water samples were collected down to the end of depth in each borewell. Chemical parameter values for the two borewells, respectively, include total hardness (360-348, 280-320) mg/L, nitrate (12.24-13.5, 45-48) mg/L, chloride (104-90, 70-70) mg/L, and Fe (0.75-0.09, 1.288-0.312) mg/L. Water samples from various parts of the BBMP, covering 750 sq. km, were also analysed, and thematic maps of water quality (IDW method) were generated for these samples for the post-monsoon season. The study aims to explore the sub-surface lithological layers and the thickness of the weathered zone, which indirectly helps identify groundwater pollution sources near surface water bodies, dug wells, etc. The above data are interpreted for future groundwater resource planning and management.
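Since the thematic maps mentioned above use inverse distance weighting (IDW), the following is a minimal sketch (not the authors' code) of that interpolation step; the sample coordinates and nitrate concentrations are invented for illustration.

import numpy as np

def idw_interpolate(xy_samples, values, xy_target, power=2.0):
    """Inverse distance weighted estimate at one target point."""
    d = np.linalg.norm(xy_samples - xy_target, axis=1)
    if np.any(d == 0):                      # target coincides with a sample
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical sample locations (km) and nitrate concentrations (mg/L).
xy = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 2.0]])
nitrate = np.array([12.2, 13.5, 45.0, 48.0])

print(idw_interpolate(xy, nitrate, np.array([2.0, 2.0])))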

Keywords: lithology, petrographic, pollution, urbanization

Procedia PDF Downloads 287
1198 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber

Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap

Abstract:

Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device for removing dust particles from air and thus aids pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures as well as corrosive and flammable gases and dust particles. In the present paper, fly ash, a major air pollutant emitted mostly from thermal power plants, is considered as the dust particle. Exposure to it through skin contact, inhalation, and ingestion can lead to health risks and, in severe cases, even lung cancer. The main focus of this study is on the removal of fly ash particles from polluted air using a self-priming venturi scrubber in submerged conditions with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat, and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden gas atomizes the liquid into droplets at the throat, and this interaction leads to the absorption of fly ash into the water and thus its removal from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this work. Experiments were conducted at different throat gas velocities, water levels, and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% is achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3x10⁻³ kg/Nm³ in the submerged condition. The effect of throat gas velocity, water level, and fly ash inlet concentration on the removal efficiency has also been evaluated. Furthermore, experimental results of removal efficiency are validated against the developed empirical model.

Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber

Procedia PDF Downloads 158
1197 Innovation Management in E-Health Care: The Implementation of New Technologies for Health Care in Europe and the USA

Authors: Dariusz M. Trzmielak, William Bradley Zehner, Elin Oftedal, Ilona Lipka-Matusiak

Abstract:

The use of new technologies should create new value for all stakeholders in the healthcare system. The article focuses on demonstrating that technologies or products typically enable new functionality, a higher standard of service, or a higher level of knowledge and competence for clinicians. It also highlights the key benefits that can be achieved through the use of artificial intelligence, such as relieving clinicians of many tasks and enabling the expansion and greater specialisation of healthcare services. The comparative analysis allowed the authors to create a classification of new technologies in e-health according to health needs and benefits for patients, doctors, and healthcare systems, i.e., the main stakeholders in the implementation of new technologies and products in healthcare. The added value of the development of new technologies in healthcare is diagnosed. The work is both theoretical and practical in nature. The primary research methods are bibliographic analysis and analysis of research data and market potential of new solutions for healthcare organisations. The bibliographic analysis is complemented by the author's case studies of implemented technologies, mostly based on artificial intelligence or telemedicine. In the past, patients were often passive recipients, the end point of the service delivery system, rather than stakeholders in the system. One of the dangers of powerful new technologies is that patients may become even more marginalised. Healthcare will be provided and delivered in an increasingly administrative, programmed way. The doctor may also become a robot, carrying out programmed activities - using 'non-human services'. An alternative approach is to put the patient at the centre, using technologies, products, and services that allow them to design and control technologies based on their own needs. An important contribution to the discussion is to open up the different dimensions of the user (carer and patient) and to make them aware of healthcare units implementing new technologies. The authors of this article outline the importance of three types of patients in the successful implementation of new medical solutions. The impact of implemented technologies is analysed based on: 1) "Informed users", who are able to use the technology based on a better understanding of it; 2) "Engaged users" who play an active role in the broader healthcare system as a result of the technology; 3) "Innovative users" who bring their own ideas to the table based on a deeper understanding of healthcare issues. The authors' research hypothesis is that the distinction between informed, engaged, and innovative users has an impact on the perceived and actual quality of healthcare services. The analysis is based on case studies of new solutions implemented in different medical centres. In addition, based on the observations of the Polish author, who is a manager at the largest medical research institute in Poland, with analytical input from American and Norwegian partners, the added value of the implementations for patients, clinicians, and the healthcare system will be demonstrated.

Keywords: innovation, management, medicine, e-health, artificial intelligence

Procedia PDF Downloads 5
1196 Food Poisoning (Salmonellosis) as a Public Health Problem Through Consuming the Meat and Eggs of the Carrier Birds

Authors: M. Younus, M. Athar Khan, Asif Adrees

Abstract:

The present research endeavour was made to investigate the public health impact of salmonellosis through consumption of the meat and eggs of carrier birds and to determine the prevalence of Salmonella enteritidis and Salmonella typhimurium in poultry feed, poultry meat, and poultry eggs, as well as their role in the chain of transmission of salmonellae to human beings causing food poisoning. The ultimate objective was to generate data to improve the quality of poultry products and human health awareness. Salmonellosis is one of the most widespread foodborne zoonoses on all continents of the world. The etiological agents Salmonella enteritidis and Salmonella typhimurium not only produce the disease but also remain carriers for an indefinite period during the convalescent phase (after recovery from the disease). The carrier state was not only the source of spread of the disease within poultry but also caused typhoid fever in humans. The chain of transmission started from poultry feed to poultry meat and ultimately to humans as dead-end hosts. In this experiment, a total of 200 samples of human stool and blood (100 stool samples and 100 blood samples) were collected randomly from 100 patients suspected of food poisoning at different hospitals in the Lahore area, for the identification of Salmonella enteritidis and Salmonella typhimurium by the PCR method, in order to assess the public health impact of salmonellosis through consumption of the meat and eggs of carrier birds. On average, 14 and 10 of the 25 stool samples of suspected food poisoning patients from each hospital were found positive for Salmonella enteritidis and Salmonella typhimurium, respectively. Similarly, on average, 5% and 6% of blood samples from the 25 patients of each hospital were found positive, respectively. There was a significant difference (P<0.05) in the seropositivity of stool and blood samples of suspected food poisoning patients as far as Salmonella enteritidis and Salmonella typhimurium were concerned. However, there was no significant difference (P>0.05) between the hospitals.

Keywords: salmonella, zoonosis, food, transmission, eggs

Procedia PDF Downloads 658
1195 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate interfacing with drone applications beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. Raw voltages from each sensor were collected with an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute, per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, allowing the motion of the drone to be controlled with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. MATLAB was used to collect, process, and analyze the signals from the sensors, and a machine learning protocol was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. An artificial neural network with one hidden layer of 10 neurons was then trained on this neuromuscular information to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Once an output probability was greater than or equal to 80% for a specific target class, the drone would perform the expected motion. Afterward, each movement was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs from the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the details of the ANN architecture, and the confusion matrix results.
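The pipeline above was built in MATLAB; the sketch below is only a Python rendering of the same idea (windowed mean/RMS/standard-deviation features feeding a 10-neuron hidden-layer classifier with an 80% confidence gate). The window sizes and the synthetic data are assumptions for illustration, not the authors' dataset.

import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(window):
    """Mean, root mean square, and standard deviation of one 2-second EMG window."""
    return [np.mean(window), np.sqrt(np.mean(window**2)), np.std(window)]

rng = np.random.default_rng(0)
n_windows, samples_per_window, n_sensors = 200, 8, 3   # 8 values per 2-s window

# Synthetic raw voltages: shape (windows, sensors, samples per window).
raw = rng.normal(size=(n_windows, n_sensors, samples_per_window))
labels = rng.integers(0, 4, size=n_windows)            # 4 gesture classes

# One feature vector per window: 3 features x 3 sensors = 9 inputs.
X = np.array([[f for s in range(n_sensors) for f in window_features(raw[i, s])]
              for i in range(n_windows)])

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, labels)

probs = clf.predict_proba(X[:1])[0]
if probs.max() >= 0.80:                                 # 80% confidence threshold
    print("send command for gesture class", probs.argmax())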

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 168
1194 Policy Recommendations for Reducing CO2 Emissions in Kenya's Electricity Generation, 2015-2030

Authors: Paul Kipchumba

Abstract:

Kenya is an East African country lying on the Equator. It had a population of 46 million in 2015 with an annual growth rate of 2.7%, implying a population of at least 65 million by 2030. Kenya's GDP in 2015 was about 63 billion USD, with a per capita GDP of about 1400 USD. The rural population is 74%, whereas the urban population is 26%. Kenya grapples not only with access to energy but also with energy security. There is a direct correlation between economic growth, population growth, and energy consumption. Kenya's energy composition is at least 74.5% from renewable energy, with hydro power and geothermal forming the bulk of it; 68% from wood fuel; 22% from petroleum; 9% from electricity; and 1% from coal and other sources. Wood fuel is used by the majority of the rural and poor urban population. Electricity is mostly used for lighting. As of March 2015, Kenya had an installed electricity capacity of 2,295 MW, corresponding to a per capita capacity of 0.0499 kW. The overall retail cost of electricity in 2015 was 0.009915 USD/kWh (KES 19.85/kWh) for installed capacity over 10 MW. The actual demand for electricity in 2015 was 3,400 MW, and the projected demand in 2030 is 18,000 MW. Kenya is working on Vision 2030, which aims to make it a prosperous middle-income economy and targets 23 GW of generated electricity. However, cost and non-cost factors affect the generation and consumption of electricity in Kenya. Kenya prioritizes economic growth over reducing CO2 emissions. Carbon emissions are likely to be paid for later through the future costs of those emissions and through penalties imposed on local generating companies for disregard of international law on CO2 emissions and climate change. The study methodology was a simulated application of a carbon tax on all carbon-emitting sources of electricity generation. A tax of only USD 30/tCO2 on all emitting sources of electricity generation would be enough to make solar the only source of electricity generation in Kenya. The country has excellent, evenly distributed global horizontal irradiation. The solar potential, after accounting for technology efficiencies of 14-16% for solar PV and 15-22% for solar thermal, is 143.94 GW. Therefore, the paper recommends the adoption of solar power for generating all electricity in Kenya in order to attain zero-carbon electricity generation in the country.
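As a rough illustration of the carbon-tax mechanism used in the study (this is not the authors' model), the sketch below adds a USD 30/tCO2 tax to an assumed generation cost and emission factor for a fossil plant and compares the result with an assumed solar cost; only the tax rate comes from the abstract, and all cost and emission figures are invented for illustration.

# Rough sketch of how a carbon tax shifts the cost ranking of generation sources.
# The base costs and emission factors are illustrative assumptions, not study data;
# only the USD 30/tCO2 tax rate comes from the abstract.
CARBON_TAX = 30.0          # USD per tonne of CO2

sources = {
    # name: (base cost USD/kWh, emission factor kg CO2 per kWh)
    "coal/thermal": (0.07, 0.90),
    "solar PV": (0.08, 0.00),
}

for name, (cost, ef) in sources.items():
    taxed = cost + CARBON_TAX * ef / 1000.0    # convert kg to tonnes of CO2
    print(f"{name:12s}: {cost:.3f} -> {taxed:.3f} USD/kWh with tax")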

Keywords: co2 emissions, cost factors, electricity generation, non-cost factors

Procedia PDF Downloads 354
1193 Evaluation of Paper Effluent with Two Bacterial Strain and Their Consortia

Authors: Priya Tomar, Pallavi Mittal

Abstract:

As industrialization is inevitable and progresses with rapid acceleration, the need for innovative ways to dispose of waste has increased. Recent advancements in bioresource technology offer novel ideas for recycling factory waste that has been polluting agro-industry, soil, and water bodies. Paper industries in India are considerable in number, and molasses and impure alcohol are still being used as raw materials for the manufacture of paper. Paper mills based on non-conventional agro residues are being encouraged due to the increased demand for paper and the acute shortage of forest-based raw materials. The colouring bodies present in wastewater from pulp and paper mills are organic in nature and comprise wood extractives, tannins, resins, synthetic dyes, lignin, and its degradation products formed by the action of chlorine on lignin, which impart an offensive colour to the water. These mills use various chemical processes for paper manufacturing, due to which lignified chemicals are released into the environment. Therefore, the chemical oxygen demand (COD) of the emanating stream is quite high. This paper presents some new techniques that were developed to improve the efficiency of bioremediation in the paper industry. A short introduction to the paper industry, a variety of presently available bioremediation methods for the paper industry, and different strategies are also discussed here. To address the above problem, two bacterial strains (Pseudomonas aeruginosa and Bacillus subtilis) and their consortium were applied to the pulp and paper mill effluent. The treatments with Pseudomonas aeruginosa and Bacillus subtilis were designated T-1, T-2, T-3, T-4, T-5, and T-6 for the decolourisation of the paper industry effluent. The results indicated that the maximum colour reduction (60.5%) was achieved by Pseudomonas aeruginosa, the maximum COD reduction (88.8%) by Bacillus subtilis, the maximum pH change (4.23) by Pseudomonas aeruginosa, the maximum TSS reduction (2.09%) by Bacillus subtilis, and the maximum TDS reduction (0.95%) by Bacillus subtilis. When the wastewater was supplemented with carbon (glucose) and nitrogen (yeast extract) sources, the data revealed that Bacillus subtilis was more efficient with glucose than Pseudomonas aeruginosa.

Keywords: bioremediation, paper and pulp mill effluent, treated effluent, lignin

Procedia PDF Downloads 244
1192 Modeling of Void Formation in 3D Woven Fabric During Resin Transfer Moulding

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Jan Kočí, Andy Long

Abstract:

Resin transfer molding (RTM) is increasingly used for manufacturing high-quality composite structures due to its advantage over prepregs of low-cost, out-of-autoclave processing. However, to retain these advantages, it is critical to reduce the void content during injection. Reinforcements commonly used in RTM, such as woven fabrics, have dual-scale porosity, with meso-scale pores between the yarns and micro-scale pores within the yarns. Due to the fabric geometry and the nature of the dual-scale flow, the flow front during injection creates complicated fingering, which leads to void formation. Analytical modeling of void formation for woven fabrics has been widely studied elsewhere. However, there is scope for reducing void formation in 3D fabrics, wherein the in-plane yarn layers are confined by additional through-thickness binder yarns. In the present study, the structural morphology of the tortuous pore spaces in the 3D fabric has been studied and implemented using the open-source software TexGen. An analytical model for void and fingering formation has been implemented based on an idealized unit cell model of the 3D fabric. Since the pore spaces between the yarns are free domains, this region is treated as flow through connected channels, whereas intra-yarn flow has been modeled using Darcy's law with an additional term to account for capillary pressure. The void fraction has then been characterised using a void formation criterion based on comparing the fill times for inter- and intra-yarn flow. Moreover, the dual-scale two-phase flow of resin with air has been simulated in the CFD solvers OpenFOAM/ANSYS to predict the probable location of voids and validate the analytical model. The use of an idealised unit cell model gives insight into optimising the meso-scale geometry of the reinforcement and the injection parameters to minimise the void content during the LCM process.
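As a rough numerical illustration of the fill-time criterion mentioned above (not the authors' model), the sketch below compares the time to fill an inter-yarn channel with the time to saturate a yarn by Darcy flow aided by capillary pressure; all material and geometry values are illustrative assumptions.

# Rough sketch of the inter- vs intra-yarn fill-time comparison; all values are
# illustrative assumptions, not the paper's data. For 1D Darcy flow under a
# constant driving pressure, the fill time over a length L is
#   t = mu * phi * L**2 / (2 * K * dP)
mu = 0.1            # resin viscosity, Pa.s
phi_channel, K_channel = 1.0, 1e-9      # inter-yarn channel: porosity, permeability (m^2)
phi_yarn, K_yarn = 0.6, 1e-13           # intra-yarn: porosity, permeability (m^2)
L = 2e-3            # flow length, m (roughly one unit-cell dimension)
dP = 1e5            # applied injection pressure, Pa
P_cap = 5e3         # capillary pressure assisting intra-yarn flow, Pa

def fill_time(phi, K, driving_pressure):
    return mu * phi * L**2 / (2 * K * driving_pressure)

t_channel = fill_time(phi_channel, K_channel, dP)
t_yarn = fill_time(phi_yarn, K_yarn, dP + P_cap)
print(f"channel fill time ~ {t_channel:.3g} s, yarn fill time ~ {t_yarn:.3g} s")
# A large mismatch between the two times indicates likely void entrapment.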

Keywords: 3D fiber, void formation, RTM, process modelling

Procedia PDF Downloads 91
1191 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation

Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen

Abstract:

Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (Sweet Spot) is reported in some laboratory and field experimental studies. An optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the occurrence of the Sweet Spot, but a universal explanation or consensus does not exist yet. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high sampling rate (kHz). Data were collected while two boreholes were being excavated; an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate allowed the bit penetration at each single impact (of the piston on the drill bit) to be identified, as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire and drilling occurs without percussion, with the bit propagating the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody. Woody allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston. After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface.

Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling

Procedia PDF Downloads 83
1190 Urban Spatial Metamorphoses: The Case of Kazan City Using GIS Technologies

Authors: Irna Malganova

Abstract:

The paper assesses the effectiveness of urban functional zoning using the method of M.A. Kreimer, taking the city of Kazan (Republic of Tatarstan, Russian Federation) as an example and using geoinformation technologies. On the basis of the data obtained, calculations were carried out on population density, the overcoming of geographic determinism, and the effectiveness of the formation of urban frameworks. The authors proposed economic, social, and environmental recommendations for the effectiveness of municipal frameworks in the period from 2018 to 2021. The study of effective territorial planning over a given period makes it possible to display the dynamics of planning changes, as well as to assess changes in the formation of urban frameworks. Based on data obtained from the master plan of the municipal formation of Kazan, in the period from 2018 to 2021 there was an increase in population of 13,841 people, or 1.1% relative to 2018. In addition, the area of Kazan increased by 2,419.6 hectares. In the structure of the distribution of functional zone areas, there was an increase in the municipality's residential and public-purpose zones. Changes in functional zoning, as well as territories requiring reorganization, are presented using geoinformation technologies in the open-source software Quantum Geographic Information System (QGIS 3.32). According to the calculations based on the method of functional zoning efficiency by M.A. Kreimer, the territorial-planning structure of Kazan City is quite effective. However, in the development of spatial planning concepts, the weak interest of the population in the preparation of territorial planning documents stands out. Thus, the approach to spatial planning in Kazan differs from foreign methods and approaches, which are based on the joint development of planning directions and municipal territories by the developers of the planning structure, business representatives, and the population. The population plays the role of the target audience towards which territorial planning is oriented. It follows that there is a need to satisfy the opinions and demands of the population.

Keywords: spatial development, metamorphosis, Kazan city, spatial planning, efficiency, geographic determinism, GIS, QGIS

Procedia PDF Downloads 75
1189 Physical and Chemical Alternative Methods of Fresh Produce Disinfection

Authors: Tuji Jemal Ahmed

Abstract:

Fresh produce is an essential component of a healthy diet. However, it can also be a potential source of pathogenic microorganisms that can cause foodborne illnesses. Traditional disinfection methods, such as washing with water and chlorine, have limitations and may not effectively remove or inactivate all microorganisms. This has led to the development of alternative/new methods of fresh produce disinfection, including physical and chemical methods. In this paper, we explore the physical and chemical new methods of fresh produce disinfection, their advantages and disadvantages, and their suitability for different types of produce. Physical methods of disinfection, such as ultraviolet (UV) radiation and high-pressure processing (HPP), are crucial in ensuring the microbiological safety of fresh produce. UV radiation uses short-wavelength UV-C light to damage the DNA and RNA of microorganisms, and HPP applies high levels of pressure to fresh produce to reduce the microbial load. These physical methods are highly effective in killing a wide range of microorganisms, including bacteria, viruses, and fungi. However, they may not penetrate deep enough into the product to kill all microorganisms and can alter the sensory characteristics of the product. Chemical methods of disinfection, such as acidic electrolyzed water (AEW), ozone, and peroxyacetic acid (PAA), are also important in ensuring the microbiological safety of fresh produce. AEW uses a low concentration of hypochlorous acid and a high concentration of hydrogen ions to inactivate microorganisms, ozone uses ozone gas to damage the cell membranes and DNA of microorganisms, and PAA uses a combination of hydrogen peroxide and acetic acid to inactivate microorganisms. These chemical methods are highly effective in killing a wide range of microorganisms, but they may cause discoloration or changes in the texture and flavor of some products and may require specialized equipment and trained personnel to produce and apply. In conclusion, the selection of the most suitable method of fresh produce disinfection should take into consideration the type of product, the level of microbial contamination, the effectiveness of the method in reducing the microbial load, and any potential negative impacts on the sensory characteristics, nutritional composition, and safety of the produce.

Keywords: fresh produce, pathogenic microorganisms, foodborne illnesses, disinfection methods

Procedia PDF Downloads 66
1188 Sound Quality Analysis of Sloshing Noise from a Rectangular Tank

Authors: Siva Teja Golla, B. Venkatesham

Abstract:

Recent technologies in hybrid and high-end cars have subdued the noise from major sources like engines and transmission systems. This has resulted in the unmasking of noises that were previously drowned out. These noises are becoming noticeable to passengers, causing annoyance and affecting the perceived quality of the vehicle. Sloshing in the fuel tank is one such source of noise. Sloshing occurs due to the excitation of the fuel tank by the vehicle's movement. Sloshing noise arises from the interaction of the fluid with the surrounding tank walls or with the fluid itself. The noise resulting from the interaction of the fluid with the structure is 'hit noise', and the noise due to fluid-fluid interaction is 'splash noise'. The type of interaction the fluid undergoes inside the tank, and hence the type of noise generated, depends on a variety of factors such as the fill level of the tank, the type of fluid, the presence of objects like baffles inside the tank, and the type and strength of the excitation. Studies have been done to understand the effect of each of these parameters on the generation of different types of sloshing noise, but little work has been done on the psychoacoustic aspect of these sounds. A psychoacoustic study of sloshing noise gives an understanding of the level of annoyance it can cause to passengers and helps in taking the necessary measures to address it. In view of this, the current paper focuses on the calculation of psychoacoustic parameters such as loudness, sharpness, roughness, and fluctuation strength for the sloshing noise. As the noise generation mechanisms for hit and splash noises are different, these parameters are calculated separately for them. For this, the fluid flow regimes that predominantly cause hit and splash noises are emulated separately inside the tank. This is done through a reciprocating test rig, which imposes reciprocating excitation on a rectangular tank filled with the fluid. By varying the frequency of excitation, the fluid flow regimes that predominantly generate hit and splash noises can be created separately inside the tank. These tests are done in a quiet room, and the noise generated is captured using microphones and used for the calculation of the psychoacoustic parameters of the sloshing noise. This study also includes the effect of fill level and the presence of baffles inside the tank on these parameters.

Keywords: sloshing, hit noise, splash noise, sound quality

Procedia PDF Downloads 14
1187 Environmental, Social and Corporate Governance Reporting With Regard to Best Practices of Companies Listed on the Warsaw Stock Exchange - Selected Problems

Authors: Katarzyna Olejko

Abstract:

The need to redefine corporate goals and adapt operational activities to them, in accordance with the concept of sustainable management, results in the increasing importance of information on a company's activities viewed from the perspective of the effectiveness and efficiency of implementing environmental goals. The narrow scope of reported data on a company's impact on the environment is not adequate to meet the information needs of modern investors. Reporting obligations are therefore imposed on companies in order to increase the effectiveness of corporate governance and to improve the process of assessing the achievement of environmental goals. The non-financial reporting obligations introduced in Polish legislation increased the scope of reported information. However, the lack of detailed guidelines on the method of reporting resulted in large variation in the scope of non-financial information, making it impossible to compare the data presented by companies. The source of information on the level of implementation of Environmental, Social, and Corporate Governance (ESG) standards is the report on compliance with best practices published by the Warsaw Stock Exchange. The document Best Practices of Warsaw Stock Exchange (WSE) Listed Companies (2021), amended by the WSE in 2021, includes the rules applicable to this area (ESG). The aim of this article is to present the level of compliance with good practices in the area of ESG by selected companies listed on the Warsaw Stock Exchange. The research carried out as part of this study, based on information from reports on compliance with good practices of companies listed on the Warsaw Stock Exchange made available in the good practice scanner, has revealed that good practices in the ESG area are implemented by companies to a limited extent. The level of their application in comparison with other rules is definitely lower. The lack of experience and clear guidelines on ESG reporting may cause some confusion, which is why conscious investors and reporting companies themselves are pinning their hopes on the Corporate Sustainability Reporting Directive (CSRD) adopted by the European Parliament.

Keywords: reporting, ESG, corporate governance, best practices

Procedia PDF Downloads 67
1186 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact

Authors: Martin Adlington, Boris Ceranic, Sally Shazhad

Abstract:

In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government being fully committed to achieving its carbon targets under the Climate Change Act 2008. The goal is to reduce emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events, exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating; as research indicates, location, design, occupation, construction type, and layout can also play a part. Because of this growing problem, research shows that health effects on the occupants of buildings could be an issue. Increases in temperature can have a direct impact on the human body's ability to maintain thermoregulation, and therefore heat-related illnesses such as heat stroke, heat exhaustion, heat syncope, and even death can follow. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches used to measure the concept. Firstly, an overview of the topic is presented, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix summarizing the source material and identifying the differing methods of analysing overheating. Cross-case evaluation has identified systematic relationships between different variables within the matrix. Key areas of focus include building types and country, occupants' behavior, health effects, simulation tools, and computational methods.

Keywords: overheating, climate change, thermal comfort, health

Procedia PDF Downloads 345
1185 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation leads to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, at an interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can later be tested under compression in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by thresholding grayscale values between 0 and 255. A pixel matrix is created for the modeling of mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on the load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for the strength evaluation of concrete.
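Since the abstract states that the slice segmentation was scripted in Python, the following minimal sketch (not the authors' code) illustrates the described grayscale-to-phase mapping; the threshold values and the synthetic slice are assumptions for illustration.

import numpy as np

# Minimal sketch of the described slice segmentation: map 0-255 grayscale values
# to a 0-9 phase scale (0 = void, 1-3 = aggregate/mortar boundary, 4-6 = mortar,
# 7-9 = aggregate). The grayscale thresholds and the synthetic slice are assumptions.
def segment_slice(gray):
    phase = np.empty_like(gray, dtype=np.uint8)
    phase[gray < 30] = 0                              # darkest pixels -> voids
    phase[(gray >= 30) & (gray < 90)] = 2             # boundary zone
    phase[(gray >= 90) & (gray < 180)] = 5            # mortar
    phase[gray >= 180] = 8                            # brightest pixels -> aggregates
    return phase

rng = np.random.default_rng(1)
slice_img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in CT slice

phase_map = segment_slice(slice_img)
for label, name in [(0, "void"), (2, "boundary"), (5, "mortar"), (8, "aggregate")]:
    print(name, float(np.mean(phase_map == label)))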

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 237
1184 Photovoltaic Solar Energy in Public Buildings: A Showcase for Society

Authors: Eliane Ferreira da Silva

Abstract:

This paper aims to mobilize and sensitize public administration leaders to good practices and to encourage investment in PV systems in Brazil. It presents a case study methodology for dimensioning PV systems on the roofs of the public buildings of the Esplanade of the Ministries, Brasilia, the capital of the country, with predefined resources, starting from the Sustainable Esplanade Project (SEP) and the exponential growth of photovoltaic solar energy in the world, and making a comparison with the solar power plant of the Ministry of Mines and Energy (MME), active since 6/10/2016. To do so, it was necessary to evaluate the energy efficiency of the buildings in the period from January 2016 to April 2017 (16 months), identifying opportunities to reduce electricity expenses through the adjustment of contracted demand, the tariff framework, and the correction of existing active energy. The instrument used to collect data on electric bills was the e-SIC citizen information system. In addition to technical and operational aspects, the study considered the historical, cultural, architectural, and climatic aspects involving several actors. After identifying the expense reductions, the study addressed the following aspects: Case 1) the economic feasibility of replacing conventional lamps with LED lamps, and Case 2) the economic feasibility of implementing a grid-connected photovoltaic solar system. For Case 2, the PV*SOL Premium software was used to simulate several photovoltaic panel options and analyze the best performance according to local characteristics such as solar orientation, latitude, and annual average solar radiation. A simulation of an ideal photovoltaic solar system was made, with due calculations of its yield, to compensate for the energy expenditure of the building, or part of it, through the use of the alternative source in question. The study develops a methodology for public administration, as a major consumer of electricity, to act in a responsible, supervisory, and incentivizing way to reduce energy waste and, consequently, greenhouse gas emissions.
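As a back-of-the-envelope companion to the PV*SOL simulation described above (not a substitute for it), the sketch below estimates the annual yield of a rooftop array from panel area, irradiation, module efficiency, and a performance ratio; all input values are illustrative assumptions, not figures from the study.

# Back-of-the-envelope annual yield estimate for a rooftop PV array. All values
# below are illustrative assumptions, not figures from the study.
roof_area_m2 = 1500.0          # available roof area
module_efficiency = 0.18       # module conversion efficiency
performance_ratio = 0.78       # system losses (inverter, temperature, soiling...)
annual_ghi_kwh_m2 = 1900.0     # assumed annual global horizontal irradiation

annual_yield_kwh = roof_area_m2 * annual_ghi_kwh_m2 * module_efficiency * performance_ratio
print(f"Estimated annual generation: {annual_yield_kwh / 1000:.0f} MWh")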

Keywords: energy efficiency, esplanade of ministries, photovoltaic solar energy, public buildings, sustainable building

Procedia PDF Downloads 125
1183 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediments carried by the water being deposited in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the amount of sedimentation in dam reservoirs and to estimate their useful lifetime, but recently mathematical and computational models have been widely used as a suitable tool in reservoir sedimentation studies. These models usually solve the governing equations with the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the amount of sedimentation in the Dez Dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was used to calibrate a period of 47 years and to forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land and plays a major role in flood control in the region. The input data, including geometry, hydraulic and sediment data, cover the period from 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years. The results were very satisfactory in the delta region, where the output from GSTARS4 was almost identical to the 2003 hydrographic profile. In the Dez reservoir, however, because of its length (65 km) and large storage volume, vertical currents are dominant, making the calculations by the above-mentioned method inaccurate. To solve this problem, the empirical reduction method was used to calculate the sedimentation in the downstream area, which gave very good results. Thus, by combining these two methods, a very suitable model of sedimentation in the Dez Dam for the study period can be obtained. The present study also demonstrated that the outputs of both methods are essentially the same.
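
Both packages ultimately rest on a sediment-continuity (Exner-type) balance for the bed. The sketch below is only a minimal illustration of that balance with an explicit upwind update, not the GSTARS4 or HEC-6 implementation, and the reach figures in the comments are hypothetical:

    import numpy as np

    def update_bed(eta, qs, dx, dt, porosity=0.4):
        """One explicit step of the 1-D sediment (Exner) continuity equation
        (1 - p) * d(eta)/dt + d(qs)/dx = 0, using a backward (upwind) difference.
        eta: bed elevation [m]; qs: sediment transport rate per unit width [m^2/s]."""
        dqs_dx = np.zeros_like(eta)
        dqs_dx[1:] = (qs[1:] - qs[:-1]) / dx
        return eta - dt / (1.0 - porosity) * dqs_dx

    # Hypothetical reach: 65 km in 1 km cells, transport capacity decaying toward the dam,
    # so sediment deposits and the bed aggrades over one daily time step.
    eta = np.zeros(66)
    qs = np.linspace(1e-4, 5e-5, 66)
    eta = update_bed(eta, qs, dx=1000.0, dt=86400.0)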

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 309
1182 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data

Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder

Abstract:

Problem and Purpose: Intelligent systems are available and helpful for supporting the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which provides the actual assistance through explanation and logical reasoning processes. The interview-based acquisition and generation of this complex knowledge is crucial, because there are many correlations between the complex parameters. In this project, therefore, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real hospital patients, advanced data mining procedures are very helpful. In particular, subgroup analysis methods are developed, extended and used to analyze and discover the correlations and conditional dependencies between the structured patient data. After causal dependencies are found, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data are used as input for conditional probability algorithms that calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications were employed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances and the patient-specific history through a dependency ranking process. After transformation into association rules, logic-based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets comprise about 80 parameters as characteristic features per patient. For patient groups of different sizes (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependence on, or independence of, the number of patients. Conclusions: The aim and advantage of such a semi-automated self-learning process is the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and conjunctively associated conditions can be found to infer the goal parameter of interest. Thus, knowledge hidden in structured tables or lists can be extracted as a rule-based representation, which is a real aid in communication with the clinical experts.
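
A minimal sketch of the rule-ranking idea described above, using support and confidence (the conditional probability of the goal given a conjunctive premise); all feature and column names are hypothetical stand-ins for the roughly 80 structured patient parameters:

    import pandas as pd
    from itertools import combinations

    def rank_rules(df, goal, max_len=2, min_support=0.05):
        """Rank conjunctive premises by confidence P(goal | premise) for a binary goal column.
        df holds one row per patient with boolean feature columns (names are hypothetical)."""
        features = [c for c in df.columns if c != goal]
        rules = []
        for k in range(1, max_len + 1):
            for premise in combinations(features, k):
                mask = df[list(premise)].all(axis=1)
                support = mask.mean()
                if support < min_support:
                    continue
                confidence = df.loc[mask, goal].mean()   # conditional probability of the goal
                rules.append((" AND ".join(premise), support, confidence))
        return sorted(rules, key=lambda r: r[2], reverse=True)

    # Hypothetical ophthalmic example:
    # df = pd.DataFrame({"instrument_A": ..., "prior_surgery": ..., "complication": ...})
    # for rule, sup, conf in rank_rules(df, "complication")[:10]:
    #     print(f"IF {rule} THEN complication  (support {sup:.2f}, confidence {conf:.2f})")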

Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods

Procedia PDF Downloads 250
1181 Bioethanol Production from Marine Algae Ulva Lactuca and Sargassum Swartzii: Saccharification and Process Optimization

Authors: M. Jerold, V. Sivasubramanian, A. George, B.S. Ashik, S. S. Kumar

Abstract:

Bioethanol is a sustainable biofuel that can be used as an alternative to fossil fuels. Today, third-generation (3G) biofuels are gaining more attention than first- and second-generation biofuels; the high lignin content of lignocellulosic biomass is the major drawback of second-generation biofuels. Algae are the renewable feedstock used in third-generation biofuel production. Algae contain large amounts of carbohydrates and can therefore be hydrolyzed and fermented. There are two groups of algae, microalgae and macroalgae. In the present investigation, macroalgae were chosen as the raw material for bioethanol production. Two marine algae, Ulva lactuca and Sargassum swartzii, were used for the experimental studies. The algal biomass was characterized using various analytical techniques, such as elemental analysis, scanning electron microscopy and Fourier transform infrared spectroscopy, to understand its physico-chemical characteristics. Batch experiments were performed to study the hydrolysis and operating parameters such as pH, agitation, fermentation time and inoculum size. Saccharification was carried out with acid and alkali treatment, and the experimental results showed that NaOH treatment enhanced the bioethanol yield. From the hydrolysis study, 0.5 M alkali was found to be the optimum concentration for the saccharification of polysaccharides to monomeric sugars. The maximum bioethanol yield was attained at a fermentation time of 9 days. An inoculum volume of 1 mL was found to be the lowest for the ethanol fermentation. The agitation studies showed that fermentation was higher during the process. The bioethanol yields were found to be 22.752% and 14.23%. The elemental analysis showed that S. swartzii has a higher carbon content. The results confirmed that hydrolysis was not complete, so not all of the sugar was recovered from the biomass. The specific gravity of the ethanol was found to be 0.8047 and 0.808 for Ulva lactuca and Sargassum swartzii, respectively, and its purity was found to be 92.55%. Therefore, marine algae can be used as a most promising renewable feedstock for the production of bioethanol.

Keywords: algae, biomass, bioethanol, biofuel, pretreatment

Procedia PDF Downloads 154
1180 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total distance travelled or the total time spent on the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real-world data used to optimize a transporter's tours are free from errors, such as the customers' real constraints, their addresses and their GPS coordinates. However, in real transport operations, upstream data are often of poor quality because of address geocoding errors and irrelevant addresses received via EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates, and even with a good geocoder, an inaccurate address leads to bad geocoding. For instance, when a geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious issue is that the maps used by geocoders are not regularly updated, so new buildings may not appear until the next update. Consequently, trying to optimize tours with incorrect customer GPS coordinates, which are the most basic and important input data for solving a vehicle routing problem, is of little use and leads to poor, incoherent solution tours, because the customer locations used in the optimization are very different from their real positions. Our work is supported by a logistics software vendor, Tedies, and a transport company, Upsilon, and we use Upsilon's truck route data for our experiments. These trucks are equipped with TOMTOM GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), and we retrieve these data to extract the real truck routes. The aim of this work is to use the driver's experience and the feedback from the real truck tours to validate the GPS coordinates of correctly geocoded addresses and to correct the badly geocoded ones. In this way, when a vehicle makes its tour, it should have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: on average, 70% of the GPS coordinates of a tour's addresses are corrected automatically, and the remaining coordinates are corrected manually, with the system giving the user indications to help with the correction. This study shows the importance of taking the trucks' feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and unfortunately address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers, etc.) to learn about their tours and to correct the upcoming ones; we develop a method to do a large part of this automatically.
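
A minimal sketch of the validation/correction idea, assuming the geocoded position is compared with the stop position recorded by the truck's GPS; the distance tolerance is an illustrative value, not the threshold used in the study:

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def check_geocoding(geocoded, observed_stop, accept_m=150):
        """Validate or correct a customer's geocoded position from the truck's recorded stop.
        accept_m is an illustrative tolerance (hypothetical)."""
        if haversine_m(*geocoded, *observed_stop) <= accept_m:
            return geocoded, "validated"
        return observed_stop, "corrected"   # take the driver's actual stop as ground truth

    # coords, status = check_geocoding((47.3220, 5.0410), (47.3205, 5.0388))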

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 669
1179 The Effectiveness of Energy-related Tax in Curbing Transport-related Carbon Emissions: The Role of Green Finance and Technology in OECD Economies

Authors: Hassan Taimoor, Piotr Krajewski, Piotr Gabrielzcak

Abstract:

The transportation sector, the largest source of energy-related emissions, is driven by more than half of global oil demand and a substantial share of total energy consumption, making it a crucial factor in tackling climate change and environmental degradation. The present study empirically tests the effectiveness of energy-related taxes (TXEN) in curbing transport-related carbon emissions (CO2TRANSP) in Organization for Economic Co-operation and Development (OECD) economies over the period 1990-2020. Green finance (GF), technology (TECH) and gross domestic product (GDP) are added as explanatory factors that might affect CO2TRANSP emissions. The study employs the Method of Moments Quantile Regression (MMQR), an advanced econometric technique, to observe the variations along each quantile. Based on the results of the preliminary tests, we confirm the presence of cross-sectional dependence and slope heterogeneity, while the panel unit root tests report a mixed order of integration of the variables. The findings reveal that a rise in income level increases CO2TRANSP, confirming the first stage of the Environmental Kuznets hypothesis. Surprisingly, the present TXEN policies of OECD member states are not mature enough to tackle CO2TRANSP emissions. However, the findings confirm that GF and TECH are solely responsible for the reduction in CO2TRANSP. The outcomes of the Bootstrap Quantile Regression (BSQR) further validate and support the earlier MMQR findings. Based on these findings, the current TXEN policies appear too moderate, and an incremental and progressive rise in TXEN may help the transition toward a cleaner and more sustainable transportation sector in the study region.
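
For readers unfamiliar with quantile-wise estimation, the sketch below runs a plain conditional quantile regression with statsmodels as an illustrative stand-in; MMQR itself is not implemented in statsmodels, and the panel data file and variable names mirror the abstract but are hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf

    def quantile_effects(df, quantiles=(0.25, 0.5, 0.75)):
        """Estimate coefficients of CO2TRANSP on TXEN, GF, TECH and GDP at several quantiles."""
        results = {}
        for q in quantiles:
            fit = smf.quantreg("CO2TRANSP ~ TXEN + GF + TECH + GDP", data=df).fit(q=q)
            results[q] = fit.params           # coefficients at quantile q
        return pd.DataFrame(results)

    # df = pd.read_csv("oecd_panel.csv")      # hypothetical country-year observations, 1990-2020
    # print(quantile_effects(df))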

Keywords: transport-related CO2 emissions, energy-related tax, green finance, technological development, OECD member states

Procedia PDF Downloads 70
1178 Wetting Features of Butterflies Morpho Peleides and Anti-icing Behavior

Authors: Burdin Louise, Brulez Anne-Catherine, Mazurcyk Radoslaw, Leclercq Jean-louis, Benayoun Stéphane

Abstract:

By using a biomimetic approach, an investigation was conducted to determine the connections between morphology and wetting. Interest is focused on the Morpho peleides butterfly, which is already well known among researchers for its brilliant iridescent color and has inspired numerous innovations. The intricate structure of its wings is responsible for this color; however, this multiscale structure exhibits a multitude of other features, such as hydrophobicity. Given the limited research on the wetting properties of the Morpho butterfly, a detailed analysis of its wetting behavior is proposed. The multiscale surface topographies of the Morpho peleides wing were analyzed using scanning electron microscopy and atomic force microscopy. To understand the relationship between morphology and wettability, a goniometer was employed to measure static and dynamic contact angles. Since several studies have consistently demonstrated that superhydrophobic surfaces can effectively delay freezing, the icing delay time of the Morpho wings was also measured. The results revealed contact angles close to 136°, indicating a high degree of hydrophobicity. Moreover, sliding angles (SA) were measured in different directions, including along and against the rolling-outward direction, and the findings suggest anisotropic wetting. Specifically, when the wing was tilted along the rolling-outward direction (i.e., away from the insect's body), the SA was about 7°, while when the wing was tilted against the rolling-outward direction, the SA was about 29°. This phenomenon is directly linked to the butterfly's survival strategy. To investigate the purely morphological impact on the anti-icing properties, PDMS replicas of the Morpho wing were produced. Compared to flat PDMS and microscale-textured PDMS, the Morpho replicas exhibited a longer freezing time. This could therefore be a source of inspiration for designing superhydrophobic surfaces for anti-icing applications or functional surfaces with controlled wettability.

Keywords: biomimetic, anisotropic wetting, anti-icing, multiscale roughness

Procedia PDF Downloads 50
1177 Microbial Fuel Cells and Their Applications in Electricity Generating and Wastewater Treatment

Authors: Shima Fasahat

Abstract:

This is an experimental study of microbial fuel cells (MFCs) for electricity generation and wastewater treatment. It is increasingly important to find new, clean and sustainable ways of supplying energy, and many researchers around the world are therefore studying new and sustainable energy sources. There are different ways to produce such energy, including solar cells, wind turbines, geothermal energy and fuel cells. Fuel cells come in different types, one of which is the microbial fuel cell. In this research, an MFC was built in order to study how it can be used for electricity generation and wastewater treatment. The microbial fuel cell used in this research is a reactor with two tanks containing a catalyst solution, and the chemical reaction in microbial fuel cells is a redox reaction. The MFC used here is a two-chamber cell: the anode chamber is anaerobic (an anaerobic baffled reactor, ABR) and the other is the cathode chamber. The anode chamber contains stabilized sludge, which is the source of the microorganisms that carry out the redox reaction; the main microorganisms are Propionibacterium and Clostridium. The electrodes of the anode chamber are graphite plates. The cathode chamber contains graphite plate electrodes and catalysts such as O2, KMnO4 and C6N6FeK4. The membrane separating the chambers is Nafion 117; the reason for choosing this membrane is explained in the complete paper. The main goal of this research is to generate electricity and to treat wastewater. It was found that when electron-acceptor compounds such as O2, KMnO4 and C6N6FeK4 are used, electron transfer speeds up and more current is obtained in less time, and the best compounds for this purpose are those containing iron in their chemical formula. It is also important to pay attention to the amount of nutrients entering the bacterial chamber, since adding excess nutrients can in some cases reverse the result. By using the ABR, the chemical oxygen demand decreases day by day until it reaches a stable value.

Keywords: anaerobic baffled reactor, bioenergy, electrode, energy efficient, microbial fuel cell, renewable chemicals, sustainable

Procedia PDF Downloads 222
1176 Degradation of Commercial Polychlorinated Biphenyl Mixture by Naturally Occurring Facultative Microorganisms via Anaerobic Dechlorination and Aerobic Oxidation

Authors: P. M. G. Pathiraja, P. Egodawatta, A. Goonetilleke, V. S. J. Te'o

Abstract:

The production and use of polychlorinated biphenyls (PCBs), a group of synthetic halogenated hydrocarbons, have been restricted worldwide due to their toxicity, and PCBs are categorized among the twelve priority persistent organic pollutants (POPs) of the Stockholm Convention. The low reactivity and high chemical stability of PCBs make them highly persistent in the environment, and their bioconcentration and biomagnification along the food chain contribute to multiple health impacts in humans and animals. Remediating environments contaminated with PCBs has been a challenging task for decades. The use of microorganisms for the remediation of PCB-contaminated soils and sediments has been widely investigated because of their potential to break down these complex contaminants with minimal environmental impact. To achieve effective bioremediation of PCB-contaminated environments, microbes were sourced from environmental samples and tested for their ability to degrade PCBs under different conditions. The PCB degradation efficiencies of four naturally occurring facultative bacterial cultures, isolated through selective enrichment, were compared under aerobic and anaerobic conditions in minimal salt medium using 50 mg/L Aroclor 1260, a commonly used commercial PCB mixture, as the sole carbon source. The results of a six-week study demonstrated that all the tested facultative Achromobacter, Ochrobactrum, Lysinibacillus and Pseudomonas strains are capable of degrading PCBs under both anaerobic and aerobic conditions, while also helping to solubilize the hydrophobic PCBs in the aqueous minimal medium. Overall, the results suggest that some facultative bacteria are capable of effectively degrading PCBs under anaerobic conditions through reductive dechlorination and under aerobic conditions through oxidation. Therefore, the use of suitable facultative microorganisms under combined anaerobic-aerobic conditions, and the combination of strains capable of both solubilizing and breaking down PCBs, has high potential for achieving higher PCB removal rates.

Keywords: bioremediation, combined anaerobic-aerobic degradation, facultative microorganisms, polychlorinated biphenyls

Procedia PDF Downloads 235