Search results for: super air meter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 747

57 The Relationship Between Military Expenditure and International Trade: A Selection of African Countries

Authors: Andre C Jordaan

Abstract:

The end of the Cold War and of rivalry between superpowers has changed the nature of military build-up in many countries. A call from international institutions like the United Nations, the International Monetary Fund and the World Bank to reduce the levels of military expenditure was the order of the day. However, this bid to cut military expenditure has not been forthcoming. Active armed conflicts occurred in at least 46 states in 2021, with 8 in the Americas, 9 in Asia and Oceania, 3 in Europe, 8 in the Middle East and North Africa and 18 in sub-Saharan Africa. Global military expenditure in 2022 was estimated at US$2.2 trillion, representing 2.2 per cent of global gross domestic product. Particularly sharp rises in military spending have occurred in African countries and the Middle East. Global military expenditure currently follows two divergent trends: a declining trend in the West, caused mainly by austerity, efforts to control budget deficits and the winding up of prolonged wars, and an increasing trend in other parts of the world on the back of security concerns, geopolitical ambitions and internal political factors. Conflict-related fatalities in sub-Saharan Africa alone increased by 19 per cent between 2020 and 2021. The interaction between military expenditure (read conflict) and international trade is generally the cause of much debate. Some argue that countries’ fear of losing trade opportunities causes political decision makers to refrain from engaging in conflict when important trading partners are involved. However, three main arguments are always present when discussing the relationship between military expenditure or conflict and international trade: free trade could promote peaceful cooperation, it could trigger tension between trading blocs and partners, or trade could have no effect because conflict is based on issues that are more important. Military expenditure remains an important element of overall government expenditure in many African countries. On the other hand, numerous researchers perceive increased international trade to be one of the main factors promoting economic growth in these countries. The purpose of this paper is therefore to determine what effect, if any, exists between the level of military expenditure and international trade within a selection of 19 African countries. Applying an augmented gravity model to explore the relationship between military expenditure and international trade, the paper finds evidence confirming the existence of an inverse relationship between these two variables. The results appear to be in line with the Liberal school of thought, where trade is seen as an instrument of conflict prevention. Trade is therefore perceived as a symptom of peace and not a cause thereof. In general, conflict or rumors of conflict tend to reduce trade. If conflict did not impede trade, economic agents would be indifferent to risk. Many claim that trade brings peace; however, it seems that it is rather peace that brings trade. From the results, it appears that trade reduces the risk of conflict and that conflict reduces trade.
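For reference, an augmented gravity specification of the kind applied here is typically written along the following lines (an illustrative sketch only; the exact estimating equation and controls are not stated in the abstract):

\ln T_{ij} = \beta_0 + \beta_1 \ln(GDP_i \cdot GDP_j) + \beta_2 \ln D_{ij} + \beta_3 \ln ME_i + \sum_k \gamma_k Z_{ij,k} + \varepsilon_{ij}

where T_{ij} is bilateral trade between countries i and j, D_{ij} the distance between them, ME_i military expenditure, and Z_{ij,k} additional controls; the inverse relationship reported above corresponds to an estimated \beta_3 < 0.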

Keywords: African countries, conflict, international trade, military expenditure

Procedia PDF Downloads 44
56 Translating Creativity to an Educational Context: A Method to Augment the Professional Training of Newly Qualified Secondary School Teachers

Authors: Julianne Mullen-Williams

Abstract:

This paper provides an overview of a three-year mixed methods research project that explores whether methods from the supervision of dramatherapy can augment the occupational psychology of newly qualified secondary school teachers. It considers how creativity and the use of metaphor, as applied in the supervision of dramatherapists, can be translated to an educational context in order to explore the explicit/implicit dynamics between the teacher trainee or newly qualified teacher and the organisation, so as to support the super-objective in training for teaching: how to ‘be a teacher.’ There is growing evidence that attrition rates among teachers are rising after only five years of service owing to too many national initiatives, an unmanageable curriculum and deteriorating student discipline. The fieldwork conducted entailed facilitating a reflective space for newly qualified teachers from all subject areas, using methods from the supervision of dramatherapy, to explore the social and emotional aspects of teaching and learning, with the ultimate aim of improving the occupational psychology of teachers. Clinical supervision is a formal process of professional support and learning which permits individual practitioners in frontline service jobs (counsellors, psychologists, dramatherapists, social workers and nurses) to expand their knowledge and proficiency, take responsibility for their own practice, and improve client protection and safety of care in complex clinical situations. It is deemed integral to continued professional practice to safeguard vulnerable people and to reduce practitioner burnout. Dramatherapy supervision incorporates all of the above but utilises creative methods as a tool to gain insight and a deeper understanding of the situation. Creativity and the use of metaphor enable the supervisee to gain an aerial view of the situation they are exploring. The word metaphor in Greek means to ‘carry across’, indicating a transfer of meaning from one frame of reference to another. The supervision support was incorporated into each group’s induction training programme. The first-year group attended fortnightly one-hour sessions; the second group received two one-hour sessions every term. The existing literature on the supervision and mentoring of secondary school teacher trainees calls for changes in pre-service teacher education and in the induction period. There is a particular emphasis on the need to include reflective and experiential learning within training programmes and within the induction period, in order to help teachers manage the interpersonal dynamics and emotional impact within a highly pressurised environment.

Keywords: dramatherapy supervision, newly qualified secondary school teachers, professional development, teacher education

Procedia PDF Downloads 362
55 Exploring the Energy Saving Benefits of Solar Power and Hot Water Systems: A Case Study of a Hospital in Central Taiwan

Authors: Ming-Chan Chung, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang, Ming-Jyh Chen

Abstract:

Introduction: Hospital buildings require considerable energy for air conditioning, lighting, elevators, heating, and medical equipment. Energy consumption in hospitals is expected to increase significantly due to innovative equipment and continuous development plans. Consequently, the environment and climate will be adversely affected. Hospitals should therefore consider transforming from their traditional role of saving lives to being at the forefront of global efforts to reduce carbon dioxide emissions. As healthcare providers, it is our responsibility to provide a high-quality environment while using as little energy as possible. Purpose/Methods: To compare the energy-saving benefits of solar photovoltaic systems and solar hot water systems, and to determine the proportion of electricity consumption effectively reduced after the installation of a solar photovoltaic system. To comprehensively assess the potential benefits of utilizing solar energy for both photovoltaic (PV) and solar thermal applications in hospitals, a solar PV system covering a total area of 28.95 square meters was installed in 2021. Approval was obtained from the Taiwan Power Company to integrate the system into the hospital's electrical infrastructure for self-use. To measure the performance of the system, a dedicated meter was installed to track monthly power generation, which was then converted into area output using an electric energy conversion factor. This research aims to compare the energy efficiency of solar PV systems and solar thermal systems. Results: Using the conversion formula between electrical and thermal energy, the energy output of solar heating systems and solar photovoltaic systems can be compared. The comparative study draws upon data from February 2021 to February 2023, wherein the solar heating system generated an average of 2.54 kWh of energy per panel per day, while the solar photovoltaic system produced 1.17 kWh of energy per panel per day, a difference of approximately 2.17 times between the two systems. Conclusions: After conducting statistical analysis and comparisons, it was found that solar thermal heating systems offer higher energy output and greater benefits than solar photovoltaic systems. Furthermore, an examination of literature data and simulations of the energy and economic benefits of solar thermal water systems and solar-assisted heat pump systems revealed that solar thermal water systems have higher energy density values, shorter recovery periods, and lower power consumption than solar-assisted heat pump systems. Through the monitoring and empirical research in this study, it is concluded that a heat pump-assisted solar thermal water system represents a relatively superior energy-saving and carbon-reducing solution for medical institutions. Not only can this system help reduce overall electricity consumption and the use of fossil fuels, but it can also provide more effective heating solutions.
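The 2.17-fold figure quoted above follows directly from the measured per-panel outputs:

\frac{E_{\text{thermal}}}{E_{\text{PV}}} = \frac{2.54\ \text{kWh/panel/day}}{1.17\ \text{kWh/panel/day}} \approx 2.17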

Keywords: sustainable development, energy conservation, carbon reduction, renewable energy, heat pump system

Procedia PDF Downloads 58
54 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life and better utilization of resources for customers. One of the difficulties along this path is the use of, and the interface and link between, software, hardware and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters within smart houses in smart cities and communities because of the sensitivity of energy systems, the reduction of energy wastage and the maximization of the utilization of the required energy. In particular, the consumption of energy in smart houses is important and considerable in the economic balance and energy management of a smart city, as it yields a significant increase in energy saving and a reduction in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city, first by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, a smart meter and other major elements through interfacing between software and hardware devices as well as IT technologies, and secondly by enhancing energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to reduce energy consumption and increase energy efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers within the smart city, in combination with control over the energy consumption of the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence over the model. The resulting analysis of this model can be used as a comparison and benchmark against the traditional lifestyle to demonstrate the advantages of smart cities. Furthermore, owing to the expense and expected shortage of natural resources in the near future, the limited research conducted in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the design and construction phases.

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 93
53 Comparative Analysis of Mechanical Properties of Paddy Rice for Different Variety-Moisture Content Interactions

Authors: Johnson Opoku-Asante, Emmanuel Bobobee, Joseph Akowuah, Eric Amoah Asante

Abstract:

In recent years, the issue of postharvest losses has become a serious concern in Sub-Saharan Africa. Postharvest technology development and adaptation need urgent attention, particularly for small and medium-scale rice farmers in Africa. However, to better develop any postharvest technology, knowledge of the mechanical properties of different varieties of paddy rice is vital. There is also the issue of the development of new rice cultivars. The objectives of this research are to (1) determine the mechanical properties of the selected paddy rice varieties at varying moisture contents, (2) conduct a comparative analysis of the mechanical properties of the selected paddy rice for different variety-moisture content interactions, and (3) determine the significant statistical differences between the mean values of the various variety-moisture content interactions. The mechanical properties of AGRA rice, CRI-Amankwatia, CRI-Enapa and CRI-Dartey, four local varieties developed by the Crop Research Institute (CRI) of Ghana, are compared at 11.5%, 13.0% and 16.5% dry basis moisture content. The mechanical properties measured are sphericity, aspect ratio, grain mass, 1000-grain mass, bulk density, true density, porosity and angle of repose. Samples were collected from the Kwadaso Agric College of the CRI in Kumasi. The samples were threshed manually and winnowed before conducting the experiment. The moisture content was determined on a dry basis using the Moistex Screw-Type Digital Grain Moisture Meter. Other equipment used for data collection were vernier calipers and a Citizen electronic scale. A 4×3 factorial arrangement was used in a completely randomized design with three replications. Tukey's HSD comparison test was conducted during data analysis to compare all possible pairwise combinations of the variety-moisture content interactions. From the results, sphericity ranged from 0.391 for CRI-Dartey at 16.5% to 0.377 for CRI-Enapa at 13.0%, whereas aspect ratio ranged from 0.298 for CRI-Dartey at 16.5% to 0.269 for CRI-Enapa at 13.0%. For grain mass, AGRA rice at 13.0% recorded 0.0312 g as the highest value and CRI-Enapa at 13.0% obtained 0.0237 g as the lowest. The 1000-grain mass ranged from 29.33 g for CRI-Amankwatia at 16.5% moisture content to 22.54 g for CRI-Enapa at 16.5%. Bulk density ranged from 654.0 kg/m³ for CRI-Amankwatia at 16.5% to 422.9 kg/m³ for CRI-Enapa at 11.5% as the highest and lowest recordings, respectively. True density ranged from 1685.8 kg/m³ for AGRA rice at 13.0% moisture content to 1352.5 kg/m³ for CRI-Enapa at 16.5%. In the case of porosity, CRI-Enapa at 11.5% recorded the highest value of 70.83% and CRI-Amankwatia at 16.5% the lowest value of 55.88%. Finally, for angle of repose, CRI-Amankwatia at 16.5% recorded the highest value of 47.3° and CRI-Enapa at 11.5% the lowest value of 34.27°. In all cases, the difference in mean values was less than the LSD. This indicates that there were no significant statistical differences between the mean values, and hence that technologies developed and adapted for one variety can equally be used for the other varieties.
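Sphericity, aspect ratio and porosity are dimensionless quantities; the definitions commonly used for grains (assumed here, since the abstract does not state the exact formulas) are:

\phi = \frac{(LWT)^{1/3}}{L}, \qquad R_a = \frac{W}{L}, \qquad \varepsilon = \left(1 - \frac{\rho_b}{\rho_t}\right) \times 100\%

where L, W and T are the grain length, width and thickness, and \rho_b and \rho_t are the bulk and true densities.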

Keywords: angle of repose, aspect ratio, bulk density, porosity, sphericity, mechanical properties

Procedia PDF Downloads 45
52 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system which would fit secondary railway lines, with an easy-to-operate implementation and a low sensitivity to rail-wheel impedance variations. We call this track circuit 'Track Railway by Automatic Circuits.' To be internationally implemented, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the component used in Germany called 'Counting Axles,' in French 'compteur d’essieux.' This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with a space-time filtering of train position. The various track sections are defined by the frequency of a continuous signal. The set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus, the failure probability of track section separation is precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails. This is the reason why we developed a very powerful algorithm to reject noise and jamming, in order to obtain an SNR compatible with the precision required for the track circuit and the SIL 4 level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train space localization is precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS system, the track circuit is naturally protected against radio-type jammers, and after the calibration operation the track circuit is autonomous; ii) a mathematical topology adapted to train space localization, following the train through a linear time filtering of the received signal; track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations. We achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. Results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit is already at Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections. As the traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
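The orthogonality underlying the section separation can be sketched as follows (a schematic formulation; the actual signal set and observation window are design parameters not given in the abstract). With each track section k assigned a continuous signal s_k(t) at frequency f_k, the set is chosen so that, over the observation window T,

\langle s_i, s_j \rangle = \int_0^{T} s_i(t)\, s_j(t)\, dt = 0 \quad \text{for } i \neq j,

so the energy received from one section can be separated from all others by correlation, and the residual after correlation defines the noise term entering the SNR used to bound the failure probability of section separation.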

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 311
51 Building Carbon Footprint Comparison between Building Permit, as Built, as Built with Circular Material Usage

Authors: Kadri-Ann Kertsmik, Martin Talvik, Kimmo Lylykangas, Simo Ilomets, Targo Kalamees

Abstract:

This study compares the building carbon footprint (CF) values for a case study of a private house located in a cold climate, using the Level(s) methodology, which provides a framework for measuring the environmental performance of buildings throughout their life cycle, taking into account various factors. The study presents the results of three scenarios, comparing their carbon emissions and highlighting the benefits of circular material usage. The construction process was thoroughly documented, and all materials and components (including minuscule mechanical fasteners, each meter of cable, each kilogram of mortar, and the components of the HVAC systems, among other things) delivered to the construction site were recorded. Transportation distances of each delivery, the fuel consumption of construction machines, and electricity consumption for temporary heating and electrical tools were also monitored. Using the detailed data on material and energy resources, the CF was calculated for two scenarios: one where circular material usage was applied and another where virgin materials were used instead of reused ones. The results were compared with the CF calculated based on the building permit design model using the Level(s) methodology. To study the range of possible results in the early stage of CF assessment, the same building permit design was given to several experts. Results showed that embodied carbon values for the as-built scenario were significantly lower than the values predicted at the building permit stage as a result of more precise material quantities, since the calculation methodology is designed to overestimate the CF. Moreover, the designers made an effort to reduce the building's CF by reusing certain materials, such as ceramic tiles, lightweight concrete blocks, and timber, during the construction process. However, in a cold climate context where operational energy (B6) continues to dominate, the changes in total building CF value between the three scenarios were less significant. The calculation for the building permit project was performed by several experts, and the CF results were in the same range. This suggests that, for a first estimation of the preliminary building CF, using average values is an appropriate method for the Estonian national carbon footprint estimation phase during the building permit application. The study also identified several opportunities for reducing the carbon footprint of the building, such as reusing materials from other construction sites, preferring local material producers, and reducing wastage on site. The findings suggest that using circular materials can significantly reduce the carbon footprint of buildings. Overall, the study highlights the importance of using a comprehensive approach to measure the environmental performance of buildings, taking into account both the design project and the actually built house. It also emphasises the need for ongoing monitoring when designing the building and managing construction site waste. The study also gives some examples of how to enable future circularity of building components and materials, e.g., building in layers, using wood untreated, etc.

Keywords: carbon footprint, circular economy, sustainable construction, level(s) methodology

Procedia PDF Downloads 56
50 Window Opening Behavior in High-Density Housing Development in Subtropical Climate

Authors: Minjung Maing, Sibei Liu

Abstract:

This research discusses the results of a study of window opening behavior in large housing developments in the high-density megacity of Hong Kong. The methods used for the study involved field observations using photo documentation of the four cardinal elevations (north, south, east, and west) of two large housing developments in a very dense urban area of approximately 46,000 persons per square kilometer within the city of Hong Kong. The targeted housing developments (A and B) are large lower-income public housing estates, with a population of about 13,000 in each development. However, the mean income level in development A is about 40% higher than in development B, and home ownership is 60% in development A and 0% in development B. Mapping of the surrounding amenities and the layout of the developments were also studied to understand the activities available to the residents. The photo documentation of the elevations was carried out from November 2016 to February 2018 to gather a full spectrum of different seasons, both in the morning and in the afternoon. From the photographs, window opening behavior was measured by counting the number of windows opened as a percentage of all the windows on that façade. For each survey date, weather data were recorded from weather stations located in the same region to collect temperature, humidity and wind speed. To further understand the behavior, simulation studies of the microclimate conditions of the housing developments were conducted using the software ENVI-met, a simulation tool widely used by researchers studying urban climate. Four major conclusions can be drawn from the data analysis and simulation results. Firstly, there is little change in the amount of window opening during the different seasons within a temperature range of 10 to 35 degrees Celsius. This means that people who tend to open their windows have consistent window opening behavior throughout the year and a high tolerance of indoor thermal conditions. Secondly, for all four elevations, the lower-income development B opened more windows (almost two times more units) than the higher-income development A, meaning window opening behavior was strongly correlated with income level. Thirdly, there is a lack of correlation between outdoor horizontal wind speed and window opening behavior, as changes in wind speed do not seem to affect the action of opening windows in most conditions. Similar to the low correlation between horizontal wind speed and window opening percentage, it was found that vertical wind speed also cannot explain the window opening behavior of occupants. Fourthly, there is a slightly higher average of window opening on the south elevation than on the north elevation, which may be due to the south elevation being well shaded from high-angle sun during the summer while allowing heat into units from lower-angle sun during the winter season. These findings are important for providing insight into how to better design urban environments and indoor thermal environments for a liveable high-density city.

Keywords: high-density housing, subtropical climate, urban behavior, window opening

Procedia PDF Downloads 106
49 A Study of the Carbon Footprint from a Liquid Silicone Rubber Compounding Facility in Malaysia

Authors: Q. R. Cheah, Y. F. Tan

Abstract:

In modern times, the push for a low carbon footprint entails achieving carbon neutrality as a goal for future generations. One possible step towards carbon footprint reduction is the use of more durable materials with longer lifespans, for example, silicone data cables, which show at least double the lifespan of similar plastic products. By having greater durability and longer lifespans, silicone data cables can reduce the amount of waste produced compared to plastics. Furthermore, silicone products do not produce micro-contamination harmful to the ocean. Every year the electronics industry produces an estimated 5 billion USB Type C and Lightning data cables for tablets and mobile phone devices. Material usage for outer jacketing is 6 to 12 grams per meter. Tests show that the product lifespan of a silicone data cable can be double that of plastic due to greater durability. This can save at least 40,000 tonnes of material a year just on the outer jacketing of data cables. The facility in this study specialises in compounding of liquid silicone rubber (LSR) material for the extrusion process used in the jacketing of silicone data cables. This study analyses the carbon emissions from the facility, which is presently capable of producing more than 1,000 tonnes of LSR annually. The study uses guidelines from the World Business Council for Sustainable Development (WBCSD) and the World Resources Institute (WRI) to define the scope boundaries. The scopes of emissions are defined as: Scope 1, emissions from operations owned or controlled by the reporting company; Scope 2, emissions from the generation of purchased or acquired energy such as electricity, steam, heating, or cooling consumed by the reporting company; and Scope 3, all other indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions. As the study is limited to the compounding facility, the system boundary definition according to the GHG Protocol is cradle-to-gate rather than cradle-to-grave. Malaysia’s present electricity generation scenario was also used, where natural gas and coal constitute the bulk of emissions. Calculations show that the LSR produced for the silicone data cable with high fire-retardant capability has Scope 1 emissions of 0.82 kg CO2/kg, Scope 2 emissions of 0.87 kg CO2/kg, and Scope 3 emissions of 2.76 kg CO2/kg, giving a total product carbon footprint of 4.45 kg CO2/kg. This total product carbon footprint (cradle-to-gate) is comparable to the industry and to plastic materials per tonne of material. Although per-tonne emissions are comparable to plastic material, the greater durability and longer lifespan allow a significantly reduced use of LSR material. Suggestions to reduce the calculated product carbon footprint across the scopes of emissions involve: (1) incorporating the recycling of factory silicone waste into operations, (2) using green renewable energy for external electricity sources, and (3) sourcing eco-friendly raw materials with low GHG emissions.
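The total reported above is simply the sum of the three scopes:

0.82 + 0.87 + 2.76 = 4.45\ \text{kg CO}_2\ \text{per kg of LSR (cradle-to-gate)}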

Keywords: carbon footprint, liquid silicone rubber, silicone data cable, Malaysia facility

Procedia PDF Downloads 77
48 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂

Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine

Abstract:

Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. In spite of all the different ways of handling used tires, the most common is to deposit them in a landfill, creating a stock of tires. These stocks can cause fire hazards and provide habitat for rodents, mosquitoes and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique which can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process where poly-, di-, and mono-sulfidic bonds, formed during vulcanization, are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and its crosslink density determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. Results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that the devulcanization occurred successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to carry out further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer (EPDM) rubber and natural rubber (NR).
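For reference, crosslink density from swelling in toluene is commonly evaluated with the Flory-Rehner relation (stated here as background; the abstract does not give the exact expression used):

\nu_e = -\frac{\ln(1 - V_r) + V_r + \chi V_r^2}{V_s \left(V_r^{1/3} - V_r/2\right)}

where V_r is the volume fraction of rubber in the swollen gel, V_s the molar volume of toluene and \chi the rubber-toluene interaction parameter; a lower \nu_e after extrusion corresponds to a higher devulcanization percent.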

Keywords: devulcanization, recycling, rubber, waste

Procedia PDF Downloads 355
47 Species Profiling of Scarab Beetles with the Help of Light Trap in Western Himalayan Region of Uttarakhand

Authors: Ajay Kumar Pandey

Abstract:

White grub (Coleoptera: Scarabaeidae), locally known as Kurmula, Pagra or Chinchu, is a major destructive pest in the western Himalayan region of Uttarakhand state of India. Various crops like cereals (upland paddy, wheat, and barley), vegetables (capsicum, cabbage, tomato, cauliflower, carrot, etc.) and some pulses (like pigeon pea, green gram, black gram) are grown with limited availability of primary resources. Among the various limitations to successful cultivation of these crops, white grub has proved a major constraint for all crops grown in the hilly areas. The losses incurred due to white grubs are huge in the case of commercial crops like sugarcane, groundnut, potato, maize and upland rice. Moreover, it has proved a major constraint in potato production in the mid and higher hills of India. Adults emerge in May-June following the onset of the monsoon and thereafter defoliate apple, apricot, plum, and walnut during the night, while 2nd and 3rd instar grubs feed on live roots of cultivated as well as non-cultivated crops from August to January. A survey was conducted in hilly (Pauri and Tehri) as well as plain (Haridwar district) areas of Uttarakhand state. Beetles were collected from various locations from August to September over five consecutive years with the help of a light trap and directly from host plants. Grubs were also collected by excavating one-square-meter areas at different locations and reared in the laboratory to obtain the adults. During the collection, diseased or dead cadavers were also collected, brought to the laboratory, and the causal organisms identified. A total of 25 species of white grub were identified, of which Holotrichia longipennis, Anomala dimidiata, Holotrichia lineatopennis, Maladera insanabilis and Brahmina sp. form a complex problem in different areas of Uttarakhand, where they cause severe damage to various crops. During the survey, it was observed that white grub beetles vary in their preference of host plant, and even in their choice of fruits and leaves of the host plant. It was observed that a white grub species, identified as Lepidiota mansueta Burmeister, was causing severe havoc to the sugarcane crop grown in the major sugarcane-growing belt of Haridwar district. The study also revealed that Bacillus cereus, Beauveria bassiana, Metarhizium anisopliae, Steinernema and Heterorhabditis are major disease-causing agents of the immature stages of white grub under the rain-fed conditions of Uttarakhand, causing 15.55 to 21.63 percent natural mortality of grubs, with an average of 18.91 percent. However, among the microorganisms, B. cereus was found to be significantly more efficient (7.03 percent mortality) than the entomopathogenic fungi (3.80 percent mortality) and nematodes (3.20 percent mortality).

Keywords: Lepidiota, profiling, Uttarakhand, whitegrub

Procedia PDF Downloads 203
46 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics using confusion-matrix-based metrics, considering accuracy, precision, recall and error rate. The performance analysis of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
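A minimal sketch of the unsupervised nearest-template matching step with DTW (not the authors' implementation; the appliance names, signatures and window values below are illustrative, and 1/60 Hz power readings are assumed):

```python
# Illustrative sketch: classify a detected event window by comparing it to stored
# appliance signatures with Dynamic Time Warping (DTW). Values are hypothetical.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(n*m) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Hypothetical per-minute power signatures (W) for two appliance models.
signatures = {
    "fridge": np.array([0.0, 120.0, 120.0, 115.0, 0.0]),
    "kettle": np.array([0.0, 2000.0, 2000.0, 0.0]),
}

# A window of aggregate power extracted around a detected event (1 sample/min).
event_window = np.array([5.0, 1950.0, 2010.0, 10.0])

# Assign the event to the closest signature (unsupervised template matching).
best = min(signatures, key=lambda k: dtw_distance(event_window, signatures[k]))
print("Detected appliance:", best)  # -> kettle
```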

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 49
45 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health through criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods such as hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on per-square-meter property values in each city block. The upcoming conference anticipates the presentation and publication of these results, embodying multidisciplinary knowledge integration and culminating in a master's thesis.
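A hedonic price specification of the type referred to above typically takes a semi-logarithmic form (an illustrative sketch; the exact model estimated in this work is not given in the abstract):

\ln P_i = \beta_0 + \beta_1 E_i + \sum_k \gamma_k X_{ik} + \varepsilon_i

where P_i is the per-square-meter property value in block i, E_i a local emission or air-quality indicator attributed to mobile sources, and X_{ik} structural and neighbourhood controls; the marginal implicit price of emissions is then recovered from \beta_1.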

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 35
44 Biological Monitoring: Vegetation Cover, Bird Assemblages, Rodents, Terrestrial and Aquatic Invertebrates from a Closed Landfill

Authors: A. Cittadino, P. Gantes, C. Coviella, M. Casset, A. Sanchez Caro

Abstract:

Three currently active landfills receive the waste from Buenos Aires city and the Greater Buenos Aires suburbs. One of the first landfills to receive solid waste from this area was located in Villa Dominico, some 7 km south of Buenos Aires City. With an area of some 750 ha, including riparian habitats, divided into 14 cells, it received solid waste from June 1979 through February 2004. In December 2010, a biological monitoring program was set up by CEAMSE and Universidad Nacional de Lujan, still operational to date. The aim of the monitoring program is to assess the state of several biological groups within the landfill and to follow their dynamics over time in order to identify early signs, if any, of damage that the landfill activities might have on the biota present. Bird and rodent populations, aquatic and terrestrial invertebrate populations, cell vegetation coverage, and surrounding-area vegetation coverage and main composition are followed through quarterly samplings. Bird species richness and abundance were estimated by observation over walked transects in each environment. A total of 74 different species of birds were identified. Species richness and diversity were high both for the riparian surrounding areas and within the landfill. Several grassland bird species, typical of the 'Pampa', were found within the landfill, as well as some migratory and endangered bird species. Sherman and Tomahawk traps are set overnight for small mammal sampling. Rodent populations are just above detection limits, and the few specimens captured belong mainly to species common in rural areas rather than city-dwelling species. The two marsupial species present in the region were captured on occasion. Aquatic macroinvertebrates were sampled on a watercourse upstream and downstream of the outlet of the landfill’s wastewater treatment plant and are used to follow water quality using biological indices. Water quality ranged between weak and severe pollution; benthic invertebrates sampled before and after the landfill showed no significant differences in water quality using the IBMWP index. The insect biota from yellow sticky cards and pitfall traps comprised over 90 different morphospecies, with Shannon diversity index values running from 1.9 to 3.9, strongly affected by the season. An easy-to-perform method not requiring expert knowledge was used to assess vegetation coverage, at two scales of determination: field observation (1 m resolution) and Google Earth images (which allow a resolution better than 5 m). Over the eight-year period of the study, vegetation coverage of the landfill cells ran from a low of 83% to 100% on different cells, with an average between 95 and 99% for the entire landfill depending on seasonality. The surrounding-area vegetation showed almost 100% coverage during the entire period, with an average density of 2 to 6 species per square meter and no signs of leachate-damaged vegetation.

Keywords: biological indicators, biota monitoring, landfill species diversity, waste management

Procedia PDF Downloads 121
43 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics using confusion-matrix-based metrics, considering accuracy, precision, recall and error rate. The performance analysis of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).

Keywords: general appliance model, non intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 54
42 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures. Studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor. As tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study has sought to quantify the variation of yarn tension throughout weaving and to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested. A comparison was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage. Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) than the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and the largest shedding tension. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variation in mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimise tension-induced weaving damage.
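The percentage losses quoted above follow from the tensile tests as

\text{strength loss (\%)} = \frac{\sigma_{\text{pristine}} - \sigma_{\text{tensioned}}}{\sigma_{\text{pristine}}} \times 100,

where \sigma denotes the measured tensile strength of the respective yarn sections.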

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 209
41 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs

Authors: Ludmila Polechonska, Agnieszka Klink

Abstract:

The biogeochemical study of macrophytes sheds light on element bioavailability, transfer through food webs and possible effects on the biota, and provides a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, the submerged Ceratophyllum demersum (hornwort) and the free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the element stock accumulated in each plant and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas in Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from 1 x 1 meter squares where the species coexisted. European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing on one square meter were collected, dried and weighed. At the same time, water samples were collected from each reservoir and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the contents of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn were determined using the atomic absorption spectrometry (AAS) method. Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in respect of metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation Factors (BCF = content in plant / concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit. Based on the BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. Some proportion of the Ca and K was actively transported to the stems, but only Mg was transported to the leaves. Although the biomass of C. demersum was two times greater than the biomass of H. morsus-ranae, the element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, it can be stated that despite a relatively small biomass compared to other macrophytes, both species may have an influence on the removal of trace elements from aquatic ecosystems and, as they serve as food for some animals, also on the incorporation of toxic elements into food chains. There was a significant positive correlation between the contents of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). The high bioaccumulation rates and the correlations between element concentrations in plants and water point to their possible use as passive biomonitors of aquatic pollution.

Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals

Procedia PDF Downloads 159
40 Soybean Oil Based Phase Change Material for Thermal Energy Storage

Authors: Emre Basturk, Memet Vezir Kahraman

Abstract:

In many developing countries, with rapid economic improvement, energy shortages and environmental issues have become a serious problem. It has therefore become critical to improve the efficiency of energy usage and to protect the environment. Thermal energy storage is an essential approach to matching thermal energy demand and supply. Thermal energy can be stored by heating, cooling or melting a material and made available again when the process is reversed. Thermal energy storage techniques are generally classified into latent heat or sensible heat storage technologies. Among these methods, latent heat storage is the most effective method of collecting thermal energy. Latent heat thermal energy storage depends on the storage material absorbing or releasing heat as it undergoes a solid-to-liquid, solid-to-solid or liquid-to-gas phase change or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to store a high latent heat per unit volume by changing phase at an almost constant temperature. PCMs absorb, store and release thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have average latent heat storage per unit volume and have low density. Most organic PCMs are combustible in nature and cover a wide range of melting points. Organic PCMs can be categorized into two major groups: non-paraffinic and paraffin materials. Paraffin materials have been used extensively due to their high latent heat and favorable thermal characteristics, such as minimal supercooling, a wide range of available phase change temperatures, low vapor pressure during melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV)-curing technology has been widely used because it offers many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs and environmental friendliness. For many years, PCMs have been used in heating and cooling applications including textiles, refrigerators, construction, transportation packaging for temperature-sensitive products, some solar energy based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol containing soybean oil based phase change materials (PCMs) were obtained and characterized. The phase transition behavior and thermal stability of the prepared UV-cured biobased PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The phase change enthalpy of the heating process was measured between 30 and 68 J/g, and the phase change enthalpy of the freezing process was found to be between 18 and 70 J/g. The decomposition of the UV-cured PCMs started at 260 ºC and reached a maximum at 430 ºC.
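As a rough illustration of how a phase change enthalpy in J/g is obtained from a DSC measurement, the sketch below integrates an idealized heat-flow peak over time and normalizes by sample mass; the peak shape, scan segment and sample mass are assumptions, not the authors' data.

```python
# Illustrative sketch (not the authors' procedure): estimating a phase change
# enthalpy from a DSC heat-flow peak by numerical integration.
import numpy as np

time_s = np.linspace(0, 120, 241)                        # s, hypothetical scan segment
heat_flow_mW = 2.0 * np.exp(-((time_s - 60) / 15) ** 2)  # mW, idealized melting peak
sample_mass_mg = 5.0

# Integrate heat flow over time (mW * s = mJ), then normalize by mass (mJ/mg = J/g)
peak_area_mJ = np.trapz(heat_flow_mW, time_s)
enthalpy_J_per_g = peak_area_mJ / sample_mass_mg
print(f"Estimated phase change enthalpy: {enthalpy_J_per_g:.1f} J/g")
```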

Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing

Procedia PDF Downloads 349
39 Assessment of Psychological Needs and Characteristics of Elderly Population for Developing Information and Communication Technology Services

Authors: Seung Ah Lee, Sunghyun Cho, Kyong Mee Chung

Abstract:

Rapid population aging has become a worldwide demographic phenomenon due to rising life expectancy and declining fertility rates. Considering the current rate of population aging, Korean society is expected to become a 'super-aged' society within 10 years, in which people aged 65 years or older account for more than 20% of the entire population. In line with this trend, ICT services aimed at helping elderly people improve their quality of life have been suggested. However, existing ICT services mainly focus on supporting health or nursing care and are somewhat limited in meeting the variety of specialized needs and challenges of this population. It has been pointed out that the majority of services have been driven by technology-push policies. Given that the usage of ICT services varies greatly with individuals' socio-economic status (SES) and physical and psychosocial needs, this study systematically categorized the elderly population into sub-groups and identified their needs and characteristics related to ICT usage in detail. First, three assessment criteria (demographic variables including SES, cognitive functioning level, and emotional functioning level) were identified based on previous literature, experts' opinions, and a focus group interview. Second, survey questions for the needs assessment were developed based on these criteria and administered to 600 respondents from a national probability sample. The questionnaire consisted of 67 items concerning demographic information, experience with ICT services and information technology (IT) devices, quality of life, cognitive functioning, etc. As a result of the survey, age (60s, 70s, 80s), education level (college graduates or more, middle and high school, less than primary school) and cognitive functioning level (above the cut-off, below the cut-off) were considered the most relevant factors for categorization, and 18 sub-groups were identified. Finally, the 18 sub-groups were clustered into 3 groups according to the following similarities: computer usage rate, difficulties in using ICT, and familiarity with current or previous job. Group 1 ('active users') included those with high cognitive function and education level in their 60s and 70s. They showed favorable and familiar attitudes toward ICT services and used the services for 'joyful life', 'intelligent living' and 'relationship management'. Group 2 ('potential users'), ranging in age from the 60s to the 80s with a high level of cognitive function and mostly middle and high school graduates, reported some difficulties in using ICT, and their expectations were lower than in group 1 although their areas of need were similar. Group 3 ('limited users') consisted of people with the lowest education level or cognitive function, and 90% of this group reported difficulties in using ICT. However, group 3 did not differ from group 2 regarding the level of expectation for ICT services, and their main purpose of using ICT was 'safe living'. This study developed a systematic needs assessment tool and identified three sub-groups of elderly ICT users based on multiple criteria. It is implied that current cognitive function plays an important role in using ICT and in determining needs among the elderly population. Implications and limitations are further discussed.
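A minimal sketch, assuming invented per-sub-group feature values, of how the 18 age x education x cognition sub-groups could be clustered into three groups by similarity in computer usage rate and reported ICT difficulty; it is not the procedure used in the study.

```python
# Illustrative sketch (invented numbers): grouping the 18 age x education x
# cognition sub-groups into 3 clusters by similarity, roughly mirroring the
# criteria named in the abstract (computer usage rate, difficulty using ICT).
from itertools import product
from sklearn.cluster import KMeans
import numpy as np

ages = ["60s", "70s", "80s"]
education = ["college+", "middle/high school", "primary or less"]
cognition = ["above cut-off", "below cut-off"]
subgroups = list(product(ages, education, cognition))   # 18 sub-groups

rng = np.random.default_rng(0)
# Hypothetical features per sub-group: [computer usage rate, share reporting difficulty]
features = rng.uniform([0.0, 0.2], [0.9, 1.0], size=(len(subgroups), 2))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for (age, edu, cog), lab in zip(subgroups, labels):
    print(f"cluster {lab}: {age}, {edu}, {cog}")
```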

Keywords: elderly population, ICT, needs assessment, population aging

Procedia PDF Downloads 123
38 Rainwater Harvesting is an Effective Tool for City’s Storm Water Management and People’s Willingness to Install Rainwater Harvesting System in Buildings: A Case Study in Kazipara, Dhaka, Bangladesh

Authors: M. Abu Hanif, Anika Tabassum, Fuad Hasan Ovi, Ishrat Islam

Abstract:

Water is essential for life. Enormous quantities of water are cycled each year through the hydrologic cycle, but only a fraction of the circulated water is available each year for human use. Dhaka, the capital of Bangladesh, is the 19th mega city in the world with a population of over 14 million (World City Information, 2011). As the urban population grows rapidly, the city is unable to cope with changing conditions due to resource limitations and management capacity. Water crisis has become an acute problem faced by the inhabitants of Dhaka city. The total water demand in Dhaka city is 2,240 million liters per day (MLD), whereas the supply is 2,150 MLD. According to the Dhaka Water Supply and Sewerage Authority, about 87 percent of this supply comes from groundwater resources and the remaining 13 percent from surface water, and the current groundwater depletion rate is 3.52 meters per year. Such a fast depletion of the water table will result in intrusion of southern saline water into the groundwater reservoir, depriving this mega city of pure drinking water. This study mainly focuses on the potential of a Rainwater Harvesting System (RWHS) in the Kazipara area of Dhaka city, determines the perception level of local people regarding the installation of rainwater harvesting systems in their buildings, and identifies the factors affecting owners' willingness to install such a system. As most of the residential areas of Dhaka city are unplanned with small plots, the Kazipara area, which shows similar characteristics, was chosen as the study area. In this study, only the rooftop area is considered as the catchment area, and the potential of rainwater harvesting has been calculated. From the calculation, it is found that harvested rainwater can serve 66% of the demand for water for toilet flushing and cleaning purposes for the people of Kazipara. It is also observed that if rooftop rainwater harvesting alone were applied to all the structures of the study area, surface runoff would be reduced by two thirds compared to the present level. In determining the perception of local people, only the owners of the buildings were surveyed. From the questionnaire survey, it is found that around 75% of people have no idea about rainwater harvesting systems. About 83% of people are not willing to install a rainwater harvesting system in their dwelling; the reasons behind the unwillingness are the high cost of installation, inadequate space, ignorance about the system, etc. Among the 16% of respondents willing to install an RWHS, it was found that higher income and larger building size are important factors in the willingness to install a rainwater harvesting system. The majority of respondents asked for both technical and economic support to install the system in their buildings. The Government of Bangladesh has taken some initiatives to promote rainwater harvesting in urban areas. It is necessary to incorporate rainwater harvesting devices and artificial recharge systems in every building of Dhaka city to make the city self-sufficient in water supply management and to solve the water crisis of a megacity like Dhaka.
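A minimal sketch of the kind of rooftop harvesting arithmetic described above (harvested volume = catchment area x rainfall x runoff coefficient, compared against non-potable demand); the roof area, rainfall, population and per-capita demand figures are assumptions for demonstration, not the study's data.

```python
# Illustrative sketch (assumed inputs, not the study's data): rooftop rainwater
# harvesting potential and the share of non-potable demand it could cover.

def annual_harvest_liters(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.8):
    """Harvested volume (L) = area (m2) x rainfall (mm) x runoff coefficient."""
    return roof_area_m2 * annual_rainfall_mm * runoff_coeff

# Hypothetical neighbourhood-level figures
total_roof_area_m2 = 50_000          # summed rooftop catchment
annual_rainfall_mm = 2_000           # roughly the order of Dhaka's annual rainfall
population = 10_000
flushing_cleaning_lpcd = 40          # liters per capita per day for non-potable uses

supply_l = annual_harvest_liters(total_roof_area_m2, annual_rainfall_mm)
demand_l = population * flushing_cleaning_lpcd * 365
print(f"Harvest: {supply_l/1e6:.1f} ML/yr, demand: {demand_l/1e6:.1f} ML/yr, "
      f"coverage: {100 * supply_l / demand_l:.0f}%")
```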

Keywords: rainwater harvesting, water table, willingness, storm water

Procedia PDF Downloads 220
37 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply standard directives on electromagnetic compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, called Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface. This surface focuses the radiation on a camera composed of an array of high-speed photosensors that are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost and power consumption. Each pixel incorporates a photosensor able to discriminate single photons and the corresponding readout electronics. The first LST has already been commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests, which were designed for equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the maximum disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the specific requirements are elaborated. For this type of large installation, the tests to be carried out need to be adapted and developed. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 86
36 Antioxidant Activity of Some Important Indigenous Plant Foods of the North Eastern Region of India

Authors: L. Bidyalakshmi, R. Ananthan, T. Longvah

Abstract:

Antioxidants are substances that can prevent or delay the oxidative damage of lipids, proteins and nucleic acids by reactive oxygen species. They help lower the incidence of degenerative diseases such as cancer, arthritis, atherosclerosis, heart disease, inflammation, brain dysfunction and acceleration of the ageing process. The north eastern part of India falls within the global hotspots of biodiversity. Over the years, the local communities in the region have developed ingenious uses of many wild plants within their environment as food sources. Many of these less familiar foods form an integral part of the diet of these communities, and some are traditionally valued for their therapeutic effects. The study was therefore carried out to estimate the antioxidant activity of some of these indigenous foods. Twenty-eight indigenous plant foods were studied for their antioxidant activity. Antioxidant activities were determined using the DPPH (2,2-diphenyl-1-picrylhydrazyl) assay, the FRAP (Ferric Reducing Antioxidant Power) assay and SOSA (Super Oxide Scavenging Assay). Of the twenty-eight plant foods, thirteen were leafy vegetables, four fruits, five roots and tubers, four spices and two mushrooms. Water extracts and methanol extracts of the samples were used for the analysis. The leafy vegetable samples exhibited antioxidant capacity with IC50 ranging from 8-1414 mg/ml for the lipid extract and 34-37878 mg/ml for the aqueous extract in the DPPH assay. Total FRAP values ranged from 58-1005 mmol FeSO4 Eq/100g of sample, which is comparatively higher than the antioxidant capacity of some commonly consumed leafy vegetables. In SOSA, the water extracts of leafy vegetables showed a range of 0.05-193.68 µmol ascorbic acid equivalent/g, while the methanol extracts showed 0.20-21.94 µmol Trolox equivalent/g. Polygonum barbatum, Wendlandia glabrata and Polygonum posumbu had the highest antioxidant activity among the leafy vegetables analysed. Among the fruits, Rhus hookerii showed the highest antioxidant activities in both the FRAP and SOSA methods, while Spondias magnifera exhibited higher antioxidant activity in the DPPH method. Alocasia cucullata exhibited higher antioxidant activity in the DPPH and FRAP assays, while Alpinia galanga showed higher antioxidant activity in the SOSA assay when compared to the other samples of roots and tubers. Among the spices, Elsholtzia communis showed high antioxidant activity in all three assays. For the mushrooms, Pleurotus ostreatus exhibited higher antioxidant activity than Auricularia delicata in DPPH and SOSA. The samples analysed exhibited antioxidant activity at varying levels, and some exhibited higher antioxidant activity than commonly consumed foods. Consumption of these less familiar foods may therefore play a role in preventing human diseases in which free radicals are involved. Further studies on these food samples, their phytonutrients and their contribution to the antioxidant activities are required.
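For illustration, a short sketch of how an IC50 can be read off a DPPH inhibition curve by linear interpolation; the concentrations and inhibition percentages are invented and do not correspond to any sample in the study.

```python
# Illustrative sketch (invented data): estimating an IC50 from a DPPH assay by
# linear interpolation between the concentrations bracketing 50% inhibition.
import numpy as np

conc_mg_ml = np.array([1, 5, 10, 50, 100], dtype=float)      # extract concentration
inhibition_pct = np.array([12.0, 31.0, 46.0, 72.0, 88.0])    # % DPPH scavenged

# np.interp expects increasing x, so interpolate concentration as a function
# of inhibition to read off the concentration at 50% inhibition.
ic50 = np.interp(50.0, inhibition_pct, conc_mg_ml)
print(f"IC50 ≈ {ic50:.1f} mg/ml")
```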

Keywords: antioxidant activity, DPPH, FRAP, SOSA

Procedia PDF Downloads 257
35 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence – through a collaborative rather than a comparative framework.
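A minimal sketch of the comparison framework described above: Spearman's rank correlation between survey-based wealth ranks and two sets of estimates, one noisier ("human readers") and one less noisy ("model"). The data are synthetic; only the evaluation metric mirrors the abstract.

```python
# Illustrative sketch (synthetic ratings, not the study's data): comparing human
# and model welfare estimates against survey wealth rankings with Spearman's rho.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_clusters = 100
ground_truth = rng.permutation(n_clusters)            # survey-based wealth ranks

# Noisier "human" estimates and less noisy "model" estimates of the same ranks
human_estimates = ground_truth + rng.normal(0, 60, n_clusters)
model_estimates = ground_truth + rng.normal(0, 20, n_clusters)

rho_human, _ = spearmanr(ground_truth, human_estimates)
rho_model, _ = spearmanr(ground_truth, model_estimates)
print(f"human readers rho ≈ {rho_human:.2f}, model rho ≈ {rho_model:.2f}")
```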

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 73
34 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz

Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a drug used as first-line treatment for AIDS. However, it has poor aqueous solubility and wettability, presenting problems for absorption in the gastrointestinal tract and for bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). Therefore, this study aimed to characterize SDs of EFV with the polymers PVP-K30, PVPVA 64 and Soluplus® in order to find an optimal formulation to compose a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PM) and SDs with the polymers were obtained containing 10, 20, 50 and 80% of drug (w/w) by the solvent method. The best formulation among the SDs was selected by an in vitro dissolution test. Finally, the chosen drug-carrier system, in all ratios obtained, was analyzed by the following techniques: Differential Scanning Calorimetry (DSC), polarized light microscopy, Scanning Electron Microscopy (SEM) and absorption spectrophotometry in the infrared region (IR). From the dissolution profiles of EFV, PM and SD, the values of the Area Under the Curve (AUC) were calculated. The data showed that the AUC of all PMs is greater than that of EFV alone; this result derives from the hydrophilic properties of the polymers, which favor a decrease in surface tension between the drug and the dissolution medium and thereby increase the wettability of the drug. In parallel, it was found that the SDs with the highest AUC values were those with the greatest amount of polymer (only 10% drug). As the amount of drug increases, these results either decrease or are statistically similar. The AUC values of the SDs using the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the endothermic event characteristic of the drug melting process, suggesting that EFV was converted to its amorphous state. Polarized light microscopy showed significant birefringence of the PMs, but this was not observed in films of the SDs, suggesting the conversion of the drug from the crystalline to the amorphous state. In electron micrographs of all PMs, independent of the percentage of drug, the crystal structure of EFV was clearly detectable. Moreover, in electron micrographs of the SDs in the different ratios investigated, we observed particles with irregular size and morphology and an extensive change in the appearance of the polymer, making it impossible to differentiate the two components. The IR spectra of the PMs correspond to the overlapping of the polymer and EFV bands, indicating that there is no interaction between them, unlike the spectra of all SDs, which showed complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system for delaying crystallization of the drug, reaching the highest levels of supersaturation.
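As an illustration of the AUC comparison described above, the sketch below applies the trapezoidal rule to two hypothetical dissolution profiles; the time points and percentages dissolved are invented, not the study's results.

```python
# Illustrative sketch (hypothetical dissolution data): computing the area under
# a dissolution curve (AUC) by the trapezoidal rule, as used to compare EFV,
# physical mixtures and solid dispersions.
import numpy as np

time_min = np.array([0, 5, 10, 15, 30, 45, 60], dtype=float)
pct_dissolved_sd = np.array([0, 35, 58, 70, 82, 86, 88])   # e.g., a 10% drug SD
pct_dissolved_efv = np.array([0, 4, 7, 10, 15, 18, 20])    # e.g., EFV alone

auc_sd = np.trapz(pct_dissolved_sd, time_min)    # units: % * min
auc_efv = np.trapz(pct_dissolved_efv, time_min)
print(f"AUC(SD) = {auc_sd:.0f} %*min, AUC(EFV) = {auc_efv:.0f} %*min")
```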

Keywords: characterization, dissolution, Efavirenz, solid dispersions

Procedia PDF Downloads 611
33 An Eco-Systemic Typology of Fashion Resale Business Models in Denmark

Authors: Mette Dalgaard Nielsen

Abstract:

The paper serves the purpose of providing an eco-systemic typology of fashion resale business models in Denmark while pointing to possibilities to learn from its wisdom at a time when a fundamental break with the dominant linear fashion paradigm has become inevitable. As we transgress planetary boundaries and can no longer continue the unsustainable path of over-exploiting the Earth's resources, the global fashion industry faces a tremendous need for change. One of the preferred answers to the fashion industry's sustainability crises lies in the circular economy, which aims to maximize the utilization of resources by keeping garments in use for longer. Thus, in the context of fashion, resale business models that allow pre-owned garments to change hands with the purpose of being reused in continuous cycles are considered to be among the most efficient forms of circularity. Methodologies: The paper is based on empirical data from an ongoing project and a series of qualitative pilot studies conducted on the Danish resale market over a 2-year period from Fall 2021 to Fall 2023. The methodological framework comprises (n)ethnography and fieldwork in selected resale environments, as well as semi-structured interviews and a workshop with eight business partners from the Danish fashion and textiles industry. By focusing on the real-world circulation of pre-owned garments, which is enabled by the identified resale business models, the research lets go of simplistic hypotheses to the benefit of dynamic, vibrant and non-linear processes. As such, the paper contributes to the emerging research field of circular economy and fashion, which is in critical need of moving from non-verified concepts and theories to empirical evidence. Findings: Based on the empirical data and anchored in the business partners, the paper analyses and presents five distinct resale business models with different product, service and design characteristics. These are 1) branded resale, 2) trade-in resale, 3) peer-to-peer resale, 4) resale boutiques and consignment shops and 5) resale shelf/square meter stores and flea markets. Together, the five business models represent a plurality of resale-promoting business model design elements that have been found to contribute to the circulation of pre-owned garments in various ways for different garments, users and businesses in Denmark. Hence, the provided typology points to the necessity of prioritizing several rather than single resale business model designs, services and initiatives for the resale market to help reconfigure the linear fashion model and create a circular-ish future. Conclusions: The article represents a twofold research ambition: 1) to present an original, up-to-date eco-systemic typology of resale business models in Denmark and 2) to use the typology and its eco-systemic traits as a tool to understand different business model design elements and possibilities to help fashion grow out of its linear growth model. By basing the typology on eco-systemic mechanisms and actual exemplars of resale business models, it becomes possible to envision the contours of a genuine alternative to business as usual that ultimately helps bend the linear fashion model towards circularity.

Keywords: circular business models, circular economy, fashion, resale, strategic design, sustainability

Procedia PDF Downloads 35
32 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust

Authors: A. V. Harutyunyan

Abstract:

The results of investigating rocks at high pressures and temperatures have revealed the intervals over which seismic wave velocities and density change, as well as some of the processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic wave velocities and density have been recorded as a consequence of dehydration. Hydrogen-bearing components are released, which combine with carbon-bearing components; as a result, hydrocarbons are formed and the investigated samples are melted. Geofluids and hydrocarbons then migrate into the upper horizons of the Earth's crust along deep faults, where they differentiate and accumulate in the jointed rocks of the faults and in layers with collecting properties. Beneath the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The results for serpentinized rocks, together with numerous geological-geophysical factual data, suggest that hydrocarbons are formed both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and as a consequence of geodynamic processes they rise to the upper horizons of the Earth's crust through narrow channels. However, the genesis of metamorphogenic diamonds, and of the diamonds found in lava streams formed within the Earth's crust, remains unclear. During dehydration, super-high pressures and temperatures arise, and it is assumed that diamond crystals are formed from the carbon-containing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place. The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes at the surface. As for the diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms and are not necessarily connected with deep faults. Kimberlites are formed where dehydrated masses lie at shallow depths in the Earth's crust. Kimberlites are younger than the ancient rocks they contain, which include serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Sometimes diamonds containing water and hydrocarbons are found, indicating their simultaneous genesis. Thus, according to the new concept put forward here, geofluids, hydrocarbons and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth's crust. Based on the proposed concept, we suggest discussing the following: the genesis of gigantic hydrocarbon deposits located in the offshore areas of oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different continents (Canadian-Arctic, Caspian, East Siberian, etc.), and the genesis of metamorphogenic diamonds and of diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.).

Keywords: dehydration, diamonds, hydrocarbons, serpentinites

Procedia PDF Downloads 316
31 Multi-Criteria Geographic Information System Analysis of the Costs and Environmental Impacts of Improved Overland Tourist Access to Kaieteur National Park, Guyana

Authors: Mark R. Leipnik, Dahlia Durga, Linda Johnson-Bhola

Abstract:

Kaieteur is the most iconic National Park in the rainforest-clad nation of Guyana in South America. However, the magnificent 226-meter-high waterfall at its center is virtually inaccessible by surface transportation, and the occasional charter flights to the small airstrip in the park are too expensive for many tourists and residents. Thus, the largest waterfall in all of Amazonia, where the Potaro River plunges over a single free drop twice as high as Victoria Falls, remains preserved in splendid isolation inside a 57,000-hectare National Park established by the British in 1929, in the deepest recesses of a remote jungle canyon. Kaieteur Falls is largely unseen firsthand, but images of the falls are depicted on the Guyanese twenty-dollar note, in every Guyanese tourist promotion, and on many items in the national capital of Georgetown. Georgetown is only 223-241 kilometers away from the falls; the lack of a single mileage figure demonstrates that there is no single overland route. Any journey, except by air, involves changes of vehicles, a ferry ride, and a boat ride up a jungle river. It also entails hiking for many hours to view the falls. Surface access from Georgetown (or any city) is thus a 3-5 day-long adventure even in the dry season; during the two wet seasons, travel is a particularly sticky proposition. This journey was made overland by the paper's co-author, Dahlia Durga. This paper focuses on potential ways to improve overland tourist access to Kaieteur National Park from Georgetown. This is primarily a GIS-based analysis, using multiple criteria to determine the least-cost means of creating all-weather road access to the area near the base of the falls while minimizing distance and elevation changes. Critically, it also involves minimizing the number of new bridges required while utilizing the one existing ferry crossing of a major river. Cost estimates are based on data from road and bridge construction engineers currently operating in the interior of Guyana. The paper contains original maps generated with ArcGIS of the potential routes for such an overland connection, including the one deemed optimal. Other factors, such as the impact on endangered species habitats and Indigenous populations, are also considered. This proposed infrastructure development comes at a time when Guyana is undergoing the largest boom in its history due to revenues from offshore oil and gas development. Thus, better access to the most important tourist attraction in the country is likely to happen eventually in some manner, but the questions of the most environmentally sustainable and least costly alternatives for such access remain. This paper addresses those questions and others related to access to this magnificent natural treasure and the tradeoffs such access will have on the preservation of the currently pristine natural environment of Kaieteur Falls.
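A minimal sketch, on synthetic rasters, of the multi-criteria least-cost routing idea behind the analysis: criteria are combined into a weighted cost surface (with heavy penalties for new bridge crossings and sensitive habitat), and the cheapest route is found with Dijkstra's algorithm. The weights and grids are assumptions, not the ArcGIS workflow used in the paper.

```python
# Illustrative sketch (synthetic rasters): multi-criteria weighted overlay plus a
# least-cost path search, in the spirit of the GIS analysis described above.
import heapq
import numpy as np

rng = np.random.default_rng(1)
n = 60
slope = rng.random((n, n))                               # normalized slope cost
rivers = np.zeros((n, n)); rivers[:, 30] = 1             # hypothetical river needing a bridge
habitat = np.zeros((n, n)); habitat[20:35, 10:25] = 1    # sensitive habitat to avoid

# Weighted overlay: each new bridge cell and each habitat cell is heavily penalized
cost = 1.0 + 3.0 * slope + 25.0 * rivers + 10.0 * habitat

def least_cost_path(cost, start, goal):
    """Dijkstra over a 4-connected grid; returns the total cost of the cheapest route."""
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return np.inf

print("least-cost route cost:", round(least_cost_path(cost, (0, 0), (n - 1, n - 1)), 1))
```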

Keywords: nature tourism, GIS, Amazonia, national parks

Procedia PDF Downloads 126
30 A Study on the Chemical Composition of Kolkheti's Sphagnum Peat Peloids to Evaluate the Perspective of Use in Medical Practice

Authors: Al. Tsertsvadze, L. Ebralidze, I. Matchutadze, D. Berashvili, A. Bakuridze

Abstract:

Peatlands are landscape elements formed over very long periods by physical, chemical, biological, and geological processes. In the moderate zone of the Caucasus, the Kolkheti lowlands are distinguished by a diversity of relict plants, a high degree of endemism, and orographic, climatic, landscape, and other characteristics that support high levels of biodiversity. The unique properties of the Kolkheti region lead to the formation of special, so-called endemic, peat peloids. The composition and properties of peloids strongly depend on the peat-forming plants. Peat is considered a unique complex raw material that can be used in different fields of industry: agriculture, metallurgy, energy, biotechnology, the chemical industry, and health care. It forms in permanently waterlogged areas, where the remains of higher plants decay in the anaerobic zone with the participation of microorganisms, and the peat mass absorbs soil water and groundwater. Peloids are predominantly rich in humic substances, which are characterized by high biological activity. Humic acids stimulate enzymatic activity and regenerative processes and have anti-inflammatory activity. The objects of the research were Kolkheti peat peloids (Ispani, Anaklia, Churia, Chirukhi, Peranga) at different formation phases. Owing to the specific physical and chemical properties of the research objects, the aim of the research was to develop analytical methods for studying their chemical composition. The research was carried out using modern instrumental methods of analysis: ultraviolet-visible and infrared spectroscopy, scanning electron microscopy, a centrifuge, a drying oven, an Ultra-Turrax homogenizer, a pH meter, a fluorescence spectrometer, gas chromatography-mass spectrometry (GC-MS/MS), and gas chromatography. Based on the research, the ratio between organic and inorganic substances, the spectrum of micro- and macroelements, and the mineral content were determined. The content of organic nitrogen was determined using the Kjeldahl method. The total amino acid content was studied by a spectrophotometric method using standard solutions of glutamic and aspartic acids. Fatty acids were determined using gas chromatography (GC). Based on the results obtained, we can conclude that the method is valid for identifying fatty acids in the research objects. The content of organic substances in the research objects was determined using GC-MS. Using modern instrumental methods of analysis, the chemical composition of the research objects was studied. Each research object is predominantly rich in a broad spectrum of organic (fatty acids, amino acids, carbocyclic and heterocyclic compounds, organic acids and their esters, steroids) and inorganic (micro- and macroelements, minerals) substances. The modified methods used in the present research may be utilized for the evaluation of cosmetological, balneological, and pharmaceutical products prepared on the basis of Kolkheti's sphagnum peat peloids.
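For illustration, a short sketch of quantification against a standard calibration curve, as used for the spectrophotometric amino acid determination mentioned above; the absorbance readings and concentrations are hypothetical.

```python
# Illustrative sketch (invented absorbances): quantifying total amino acids against
# a calibration curve of a glutamic acid standard, fitted by least squares.
import numpy as np

std_conc_mg_ml = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
std_absorbance = np.array([0.071, 0.138, 0.262, 0.521, 1.040])  # hypothetical readings

slope, intercept = np.polyfit(std_conc_mg_ml, std_absorbance, 1)

sample_absorbance = 0.455
sample_conc = (sample_absorbance - intercept) / slope
print(f"Total amino acids ≈ {sample_conc:.2f} mg/ml (glutamic acid equivalents)")
```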

Keywords: modern analytical methods, natural resources, peat, chemistry

Procedia PDF Downloads 107
29 The Role of Uterine Artery Embolization in the Management of Postpartum Hemorrhage

Authors: Chee Wai Ku, Pui See Chin

Abstract:

As an emerging alternative to hysterectomy, uterine artery embolization (UAE) has been widely used in the management of fibroids and in controlling postpartum hemorrhage (PPH) unresponsive to other therapies. Research has shown UAE to be a safe, minimally invasive procedure with few complications and minimal effects on future fertility. We present two cases highlighting the use of UAE in preventing PPH in a patient with large fibroids at the time of cesarean section and in the treatment of secondary PPH refractory to other therapies in another patient. The first case is a 36-year-old primiparous woman who booked at 18+6 weeks gestation with a 13.7 cm subserosal fibroid at the lower anterior wall of the uterus near the cervix and a 10.8 cm subserosal fibroid in the left wall. Prophylactic internal iliac artery occlusion balloons were placed prior to the planned classical midline cesarean section. The balloons were inflated once the baby was delivered, and both uterine arteries were subsequently embolized. The estimated blood loss (EBL) was 400 mls, and hemoglobin (Hb) remained stable at 10 g/dL. An ultrasound scan 2 years postnatally showed stable uterine fibroids of 10.4 and 7.1 cm, significantly smaller than before. The second case is a 40-year-old G2P1 with a previous cesarean section for failure to progress. There were no antenatal problems, and there was no placenta previa. She presented in labour at term and underwent an emergency cesarean section for a failed vaginal birth after cesarean. Intraoperatively, extensive adhesions were noted with the bladder drawn high, and EBL was 300 mls. Postpartum recovery was uneventful. She presented with secondary PPH 3 weeks later, complicated by hypovolemic shock. She underwent an emergency examination under anesthesia and evacuation of the uterus, with an EBL of 2500 mls. Histology showed decidua with chronic inflammation. She was discharged well with no further PPH. She subsequently returned one week later with secondary PPH. Bedside ultrasound showed that the endometrium was thin, with no evidence of retained products of conception. Uterotonics were administered, and an examination under anesthesia was performed, followed by insertion of a uterine Bakri balloon and a vaginal pack. EBL was 1000 mls. There was no definite cause of the PPH, with no uterine atony or retained products of conception. To evaluate a potential cause, a pelvic angiogram and a superselective left uterine arteriogram were performed, which showed profuse contrast extravasation and acute bleeding from the left uterine artery. Superselective embolization of the left uterine artery was performed. No gross contrast extravasation from the right uterine artery was seen. These two cases demonstrate the efficacy of UAE: first, the prophylactic use of intra-arterial balloon catheters in pregnant patients with large fibroids, and second, the diagnosis and management of secondary PPH refractory to uterotonics and uterine tamponade. In both cases, the need for laparotomy and hysterectomy was avoided, resulting in the preservation of future fertility. UAE should be a consideration for hemodynamically stable patients in centres with access to interventional radiology.

Keywords: fertility preservation, secondary postpartum hemorrhage, uterine embolization, uterine fibroids

Procedia PDF Downloads 166
28 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric sensor models are imperative in Earth and planetary satellite and aerial image processing, particularly for the high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs. Such sensors have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images to the 3D coordinates of ground features. Rigorous sensor models use the actual interior orientation parameters and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired with linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and software packages with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time at a certain epoch measured from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations, and the unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much less cost- and effort-consuming.
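A generic form of the time-dependent push-broom collinearity condition consistent with the description above (not necessarily the authors' exact parameterization): for a ground point j imaged at epoch t on a scan line whose along-track image coordinate is zero, with focal length f, perspective-center coordinates X_c(t), Y_c(t), Z_c(t), and rotation matrix elements r_kl(t) built from the interpolated attitude angles,

```latex
\begin{align*}
x_{ij} &= -f\,
  \frac{r_{11}(t)\,(X_j - X_c(t)) + r_{12}(t)\,(Y_j - Y_c(t)) + r_{13}(t)\,(Z_j - Z_c(t))}
       {r_{31}(t)\,(X_j - X_c(t)) + r_{32}(t)\,(Y_j - Y_c(t)) + r_{33}(t)\,(Z_j - Z_c(t))},\\[4pt]
0 &= -f\,
  \frac{r_{21}(t)\,(X_j - X_c(t)) + r_{22}(t)\,(Y_j - Y_c(t)) + r_{23}(t)\,(Z_j - Z_c(t))}
       {r_{31}(t)\,(X_j - X_c(t)) + r_{32}(t)\,(Y_j - Y_c(t)) + r_{33}(t)\,(Z_j - Z_c(t))},\\[4pt]
X_c(t) &= a_0 + a_1 t + a_2 t^2 \quad
  \text{(and similarly for } Y_c,\ Z_c,\ \omega,\ \varphi,\ \kappa\text{).}
\end{align*}
```

The low-order polynomials in t stand in for the polynomial interpolation of the exterior orientation parameters described in the abstract; their coefficients are the unknowns recovered in the bundle adjustment.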

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 44