Search results for: real volume
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7443

93 E-Governance: A Key for Improved Public Service Delivery

Authors: Ayesha Akbar

Abstract:

Public service delivery has witnessed significant improvement with the integration of information and communication technology (ICT). ICT not only strengthens management structures with advanced technology for monitoring service delivery, but also provides evidence for informed decisions and policy. Pakistan's public sector organizations have generally not been able to produce good results in ensuring service delivery. Nevertheless, some public sector organizations in Pakistan have adopted modern technology and proved their worth by delivering better service standards. These positive indicators provide a sound basis for integrating technology in public sector organizations and for shifting towards evidence-based policy making. Rescue-1122 is a public sector organization that provides emergency services and has proved to be a successful service delivery model for saving human lives and supporting human development in Pakistan. Information about the organization was gathered using a qualitative research methodology. It is broadly based on primary and secondary sources, which include the Rescue-1122 website; official reports of organizations such as the UNDP (United Nations Development Programme) and WHO (World Health Organization); and 10 in-depth interviews with senior administrative staff working in the Lahore offices. The information received has been incorporated into the study for a better understanding of the organization and its management procedures. Rescue-1122 represents a successful model of delivering services efficiently in disaster management. The management of Rescue-1122 has designed its policies and procedures to form a comprehensive model built around technology. This model provides efficient service delivery while maintaining the standards of the organization. The service delivery model of Rescue-1122 works on two fronts: a front-office interface and a back-office interface. The back office defines operating procedures and ensures staff compliance, whereas the front office, equipped with the latest technology and good infrastructure, handles the emergency calls. Both ends are integrated through satellite-based vehicle tracking, a wireless system, a fleet monitoring system, and IP cameras that monitor every move of the staff in order to provide better services and to pinpoint distortions in the services. The standard time for reaching the emergency spot is 7 minutes, and while a case is being handled, the driver's behavior, the traffic volume, and the technical assistance provided to the emergency case are monitored by the front office. The whole record is then uploaded from the provincial offices to the main dashboard at the Lahore headquarters. Rescue-1122 uses the latest technology to deliver efficient services, to investigate flaws where they are found, and to build data for informed decision making. Other public sector organizations in Pakistan could develop similar models to integrate technology, improve service delivery, and generate evidence for informed decisions and policy making.

Keywords: data, e-governance, evidence, policy

Procedia PDF Downloads 221
92 Analyzing the Investment Decision and Financing Method of the French Small and Medium-Sized Enterprises

Authors: Eliane Abdo, Olivier Colot

Abstract:

SMEs are always considered a national priority due to their contribution to job creation, innovation, and growth. Once the start-up phase has been crossed with encouraging results, the company enters a growth phase. In order to improve its competitiveness and to maintain and increase its market share, the company needs, and is even obliged, to develop its tangible and intangible investments. SMEs are generally closely held companies with a particular and often fragile financial situation, limited resources, and difficulty accessing capital markets; their shareholders constantly face a conflict between preserving their independence and their need to raise capital, which leads to the entry of new shareholders. Capital structure has always been a core topic of research in corporate finance; moreover, the financial crisis and its repercussions on the availability of credit, especially for SMEs, have made SME financing a hot topic. On the other hand, financial theories do not fully answer capital structure questions for SMEs; they offer tools and modes of financing that are more accessible to larger companies. Yet an SME's capital structure cannot be independent of its governance structure. Classic financial theory assumes independence between the investment decision and the financing decision: investment determines the volume of funding, but not the split between internal and external funds. In this context, we find it interesting to test the hypothesis that SMEs respond positively to the financial theories applied to large firms and to check whether they are constrained by the conventional solutions used by large companies. This research therefore analyzes the resource structure of SMEs in parallel with their investment structure, in order to highlight a link between their asset and liability structures. Our conceptual model is based on two main theoretical frameworks, the Pecking Order theory and the Trade Off theory, taking SME characteristics into consideration. Our data were generated from the DIANE database. Five hypotheses were tested via a panel regression to understand the type of dependence between the financing methods of 3,244 French SMEs and the development of their investment over a period of 10 years (2007-2016). The results show a dependence between equity and internal financing in the case of intangible investment development. Moreover, this type of business is constrained in its access to financial debt, since the guarantees provided are not sufficient to meet the banks' requirements. For tangible investment development, however, SMEs rely sequentially on internal financing, bank borrowing, and new share issuance or hybrid financing, which is consistent with the Pecking Order theory. We therefore conclude that unlisted SMEs incur more financial debt to finance their tangible investments than their intangible ones, while always preferring internal financing as a first choice. This seems to be confirmed by the finding that the profitability of the company is negatively related to the increase in financial debt. Thus, the predictions of the Pecking Order theory appear to be the most plausible: SMEs rely primarily on self-financing and then turn to debt as a priority to cover their financing deficit.
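
As an illustration of the type of panel estimation described above, the sketch below shows a within (fixed-effects) regression of investment on financing sources. The column names, the toy data, and the specification are illustrative assumptions only; they are not the authors' actual DIANE variables or estimated model.

```python
# Minimal fixed-effects (within) panel regression sketch in Python.
# Column names, toy data, and specification are illustrative assumptions,
# not the variables or model actually estimated on the DIANE database.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def within_transform(df, group_col, cols):
    """Demean each variable by firm: the classic fixed-effects transformation."""
    return df[cols] - df.groupby(group_col)[cols].transform("mean")

# toy panel: 4 firms observed over 2007-2016 (random numbers, illustration only)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "firm_id": np.repeat(["A", "B", "C", "D"], 10),
    "year": np.tile(np.arange(2007, 2017), 4),
    "tangible_inv": rng.normal(size=40),   # dependent: tangible investment growth
    "internal_fin": rng.normal(size=40),   # regressors: financing sources
    "bank_debt": rng.normal(size=40),
    "equity_issue": rng.normal(size=40),
})

cols = ["tangible_inv", "internal_fin", "bank_debt", "equity_issue"]
dm = within_transform(df, "firm_id", cols)

X = sm.add_constant(dm[["internal_fin", "bank_debt", "equity_issue"]])
groups = pd.factorize(df["firm_id"])[0]   # integer firm labels for clustered errors
fit = sm.OLS(dm["tangible_inv"], X).fit(cov_type="cluster", cov_kwds={"groups": groups})
print(fit.summary())
```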

Keywords: capital structure, investments, life cycle, pecking order theory, trade off theory

Procedia PDF Downloads 84
91 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers advantages in cost efficiency and data acquisition time. In this work, UAV, GNSS, and LiDAR technologies are combined into a single system so that each compensates for the deficiencies of the others. This integrated system aims to increase the accuracy of volume calculations for the land stockpiles of PT Garam (a salt company). UAV imagery is used to obtain geometric data and to capture the textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The image acquisitions processed in the software are classified using photogrammetric and Structure-from-Motion point cloud principles. LiDAR acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. A drawback of LiDAR is that its coordinate data are given in a local reference frame. The researchers therefore use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles in open fields and warehouses, a task PT Garam carries out twice a year and which previously relied on terrestrial methods and manual counting with sacks. LiDAR surveying needs to be combined with a UAV to overcome acquisition limitations, because a ground-based scan only covers the right and left sides of an object, particularly when applied to a salt stockpile. The UAV is flown to provide wide coverage, with the 200-gram LiDAR system integrated on board so that an optimal viewing angle can be maintained during the flight. Using LiDAR for low-cost mapping surveys makes it easier for surveyors and academics to obtain reasonably accurate data at a more economical price. As a survey tool, LiDAR is available at a low price, around 999 USD, and can produce detailed data; to minimize operational costs further, surveyors can use the low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of the object's shape. This study combines low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates latitude and longitude coordinates, which provide the X, Y, and Z values used to georeference the detected objects. The LiDAR detects objects, including the height of the entire environment at the location. The acquired data are calibrated with pitch, roll, and yaw to obtain the vertical heights of the existing contours. An experimental test was conducted on the roof of a building with a radius of approximately 30 meters.
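
As an illustration of how a stockpile volume can be derived from a georeferenced point cloud, the sketch below rasterizes (x, y, z) points into square cells and sums the cell volumes above a base plane. The flat-base assumption, the cell size, and the toy cone-shaped cloud are illustrative only and do not reproduce the authors' actual processing chain.

```python
# Minimal grid-based stockpile volume estimate from an (x, y, z) point cloud.
# Assumes a flat base at z = base_z; toy data stands in for the real LiDAR/UAV cloud.
import numpy as np

def stockpile_volume(points, cell=0.5, base_z=0.0):
    """Estimate volume above base_z by rasterizing the cloud into square cells."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    heights = {}
    for i, j, h in zip(ix, iy, z):
        heights[(i, j)] = max(heights.get((i, j), h), h)   # keep highest return per cell
    return sum(max(h - base_z, 0.0) * cell * cell for h in heights.values())

# toy cone-shaped "stockpile" for illustration only
rng = np.random.default_rng(1)
xy = rng.uniform(-10, 10, size=(20000, 2))
r = np.hypot(xy[:, 0], xy[:, 1])
z = np.clip(5.0 * (1 - r / 10.0), 0, None)   # cone of height 5 m, radius 10 m
cloud = np.column_stack([xy, z])

print(f"estimated volume: {stockpile_volume(cloud):.1f} m^3")
# analytic cone volume pi*r^2*h/3 is about 523.6 m^3, for comparison
```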

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 58
90 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements that can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, their practical limitations, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance can be minimized in three different ways: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by replacing the terrain material over a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, a minimum separation must be kept between electrodes, typically around five times the electrode length; otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. Adding parallel electrodes reduces the resistance and facilitates its measurement, but the basic parallel-resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode is always proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimate of the ground resistance only through a logarithmic term. Substances used for efficient chemical treatment must be environmentally friendly and must offer stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available; many consist of carbon-based materials or clays such as bentonite. These materials can also be used as backfill to reduce the resistance of an electrode. Chemical treatment of soil raises environmental issues: some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they suffer from corrosion issues; typically, the carbon can corrode and destroy a copper electrode in around five years, and these compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which after installation acquires properties very close to those of concrete, preventing the material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected at regular intervals should be the optimum baseline solution for grounding a large structure installed on high-resistivity terrain. To show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity of the order of 1 kOhm·m.
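
As a rough illustration of why added electrodes give diminishing returns, the sketch below evaluates a commonly used single-rod approximation, R = ρ/(2πL)·(ln(4L/a) − 1) with a the rod radius, together with the naive parallel estimate. As noted above, the simple parallel-resistor value is only a lower bound, since it ignores the mutual coupling of the equipotential regions around neighbouring rods; all numerical values are illustrative.

```python
# Rough single-rod ground resistance (Dwight-type approximation) and a naive
# parallel estimate for n identical rods. Real arrays must be measured or
# simulated: mutual coupling makes the true resistance higher than R/n.
import math

def rod_resistance(rho, length, radius):
    """Approximate resistance of one vertical rod: R = rho/(2*pi*L) * (ln(4L/a) - 1)."""
    return rho / (2 * math.pi * length) * (math.log(4 * length / radius) - 1)

rho = 1000.0   # soil resistivity in ohm*m (the high-resistivity case discussed above)
L = 3.0        # rod length in m (illustrative)
a = 0.008      # rod radius in m (illustrative)

r1 = rod_resistance(rho, L, a)
for n in (1, 2, 4, 8):
    print(f"{n} rod(s): naive parallel estimate {r1 / n:.1f} ohm (lower bound)")
```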

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 110
89 Phycoremediation of Heavy Metals by Marine Macroalgae Collected from Olaikuda, Rameswaram, Southeast Coast of India

Authors: Suparna Roy, Anatharaman Perumal

Abstract:

Industrial effluents with high amounts of heavy metals are known to have adverse effects on the environment. For the removal of heavy metals from the aqueous environment, various conventional treatment technologies have been applied, but these are not economically beneficial and also produce large quantities of toxic chemical sludge. Biosorption of heavy metals by marine plants is therefore an eco-friendly, innovative, and alternative technology for removing these pollutants from the aqueous environment. The aim of this study is to evaluate the capacity of selected marine macroalgae (seaweeds) to accumulate and remove heavy metals from the marine environment. Methods: The seaweeds Acanthophora spicifera (Vahl.) Boergesen, Codium tomentosum Stackhouse, Halimeda gracilis Harvey ex. J. Agardh, Gracilaria opuntia Durairatnam. nom. inval., Valoniopsis pachynema (Martens) Boergesen, Caulerpa racemosa var. macrophysa (Sonder ex Kutzing) W. R. Taylor, and Hydroclathrus clathratus (C. Agardh) Howe were collected from Olaikuda (09°17.526'N, 079°19.662'E), Rameswaram, on the southeast coast of India, during the post-monsoon period (April 2016). The seaweeds were washed repeatedly with sterilized and filtered in-situ seawater to remove all epiphytes and debris, and the clean seaweeds were shade-dried for one week. The dried seaweeds were ground to a powder, and 1 g of powdered seaweed was placed in a 250 mL conical flask; 8 mL of 10% HNO3 (70% pure) was added to each sample, which was kept at room temperature (28 °C) for 24 hours. The samples were then heated on a hotplate at 120 °C and boiled to dryness, 20 mL of nitric acid and perchloric acid in a 4:1 ratio was added, and the mixture was again heated on the hotplate at 90 °C and evaporated to dryness. The samples were allowed to cool at room temperature for a few minutes, 10 mL of 10% HNO3 was added, and they were kept for 24 hours in a cool, dark place and filtered with Whatman (589/2) filter paper. The filtrates were collected in clean 250 mL conical flasks and diluted accurately to 25 mL with double-deionised water. Triplicates of each sample were analysed by inductively coupled plasma optical emission spectrometry (ICP-OES) for eleven heavy metals (Ag, Cd, B, Cu, Mn, Co, Ni, Cr, Pb, Zn, and Al), and the data were statistically evaluated for standard deviation. Results: Acanthophora spicifera contained the highest amount of Ag (0.1±0.2 mg/mg), followed by Cu (0.16±0.01 mg/mg), Mn (1.86±0.02 mg/mg), and B (3.59±0.2 mg/mg); Halimeda gracilis showed the highest accumulation of Al (384.75±0.12 mg/mg); Valoniopsis pachynema accumulated the maximum amounts of Co (0.12±0.01 mg/mg) and Zn (0.64±0.02 mg/mg); Caulerpa racemosa var. macrophysa contained Zn (0.63±0.01), Cr (0.26±0.01 mg/mg), Ni (0.21±0.05), Pb (0.16±0.03), and Cd (0.02±0.00). Hydroclathrus clathratus, Codium tomentosum, and Gracilaria opuntia also contained appreciable amounts of heavy metals. Conclusions: The seaweed species mentioned play an important role in decreasing heavy metal pollution in the marine environment through bioaccumulation, and they can therefore be used to remove excess heavy metals from polluted areas.

Keywords: heavy metals pollution, seaweeds, bioaccumulation, eco-friendly, phyco-remediation

Procedia PDF Downloads 208
88 The Effect of Photochemical Smog on Respiratory Health Patients in Abuja Nigeria

Authors: Christabel Ihedike, John Mooney, Monica Price

Abstract:

Summary: This study aims to critically evaluate the effect of photochemical smog on respiratory health in Nigeria. A cohort of chronic obstructive pulmonary disease (COPD) patients was recruited from two large hospitals in Abuja, Nigeria. Respiratory health questionnaires, daily diaries, a dyspnoea scale, and lung function measurements were used to obtain health data and to investigate the relationship with air quality data (principally ozone, NOx, and particulate pollution). Concentrations of air pollutants were higher than WHO and Nigerian air quality standards. The results suggest a correlation between measured air quality and exacerbation of respiratory illness. Introduction: Photochemical smog is a significant health challenge in most cities, and its effect on respiratory health is well recognised. This type of pollution is most harmful to the elderly, children, and those with underlying respiratory disease. This study investigates the impact of rising temperatures and photochemically generated secondary air pollutants on respiratory health in Abuja, Nigeria. Method and Results: Health data were collected using spirometry to measure lung function at routine clinic attendance, daily diaries kept by patients, and a respiratory questionnaire. Questionnaire responses (obtained using an adapted and internally validated version of St George's Hospital Respiratory Questionnaire) show that 'time of wheeze' was associated with participants' activities: 30% had worse wheeze in the morning, 10% could not shop, 15% took a long time to get washed, 25% walked more slowly, 15% had to stop when hurrying, and 5% could not take a bath. There was also a decrease in forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC), and the daily afternoon-to-morning change may be associated with pollutant concentration levels. Dyspnoea symptoms showed that 60% of patients were at grade 3, 25% at grade 2, and 15% at grade 1. The daily proportion of patients in the cohort who coughed or brought up sputum was 78%. Air pollution in the city exceeded Nigerian and WHO standards, with measured NOx and PM10 concentrations of 693.59 µg/m³ and 748 µg/m³, respectively. The results show that air pollution may increase the occurrence and exacerbation of respiratory disease. Conclusion: High temperatures and local climatic conditions in urban Nigeria encourage the formation of ozone, the major constituent of photochemical smog, as well as other secondary air pollutants associated with health problems. In this study we confirm the likely potency of this pattern of secondary air pollution in exacerbating COPD symptoms in a vulnerable patient group in urban Nigeria. Better regulation and measures to reduce ozone are needed, particularly when local climatic conditions favour the development of photochemical smog. Climate change and likely increasing temperatures add impetus and urgency to the case for better air quality standards and measures (traffic restrictions and emissions standards) in developing-world settings such as Nigeria.

Keywords: Abuja-Nigeria, effect, photochemical smog, respiratory health

Procedia PDF Downloads 194
87 Endometrial Biopsy Curettage vs Endometrial Aspiration: Better Modality in Female Genital Tuberculosis

Authors: Rupali Bhatia, Deepthi Nair, Geetika Khanna, Seema Singhal

Abstract:

Introduction: Genital tract tuberculosis is a chronic disease (caused by reactivation of organisms from the systemic distribution of Mycobacterium tuberculosis) that often presents with low-grade symptoms and non-specific complaints. Patients with genital tuberculosis are usually young women seeking workup and treatment for infertility. Infertility is the commonest presentation, owing to involvement of the fallopian tubes and endometrium and to ovarian damage with poor ovarian volume and reserve. The diagnosis of genital tuberculosis is difficult because the disease is a silent invader of the genital tract. Since tissue cannot be obtained from the fallopian tubes, the diagnosis is made by isolating bacilli from endometrial tissue obtained by endometrial biopsy curettage and/or aspiration. Problems are associated with both the sampling technique and the diagnostic modality because of the limited sample volume and the need to split the sample among various diagnostic tests, resulting in a non-uniform distribution of microorganisms. Moreover, the lack of a single efficient sampling technique universally applicable to all specific diagnostic tests adds to the diagnostic challenge. Endometrial sampling plays a key role in the accurate diagnosis of female genital tuberculosis. It may be done by two methods: endometrial curettage and endometrial aspiration. Both have their own limitations: curettage picks up a strip of endometrium from one wall of the uterine cavity, including the tubal ostial areas, whereas aspiration obtains tissue together with exfoliated cells present in the secretory fluid of the endometrial cavity. Further, the sparse and uneven distribution of bacilli remains a major factor limiting both techniques. The sample obtained by either technique is subjected to histopathological examination, AFB staining, culture, and PCR. Aim: To compare the two sampling techniques, endometrial biopsy curettage and endometrial aspiration, using the laboratory methods of histopathology, cytology, microbiology, and molecular biology. Method: In a hospital-based observational study, 75 Indian women suspected of genital tuberculosis were selected on the basis of inclusion criteria. The women underwent endometrial tissue sampling using a Novak biopsy curette and a Karman cannula. One part of each specimen was sent in formalin solution for histopathological testing, and another part was sent in normal saline for acid-fast bacilli smear, culture, and polymerase chain reaction. The results were correlated using the coefficient of correlation and the chi-square test. Result: Concordance of results showed moderate agreement between the two sampling techniques. Among HPE, AFB, and PCR, the maximum sensitivity was observed for PCR, although its specificity was not as high as that of the other techniques. Conclusion: Statistically, no significant difference was observed between the results obtained by the two sampling techniques. Therefore, either EA or EB may be used to obtain endometrial samples, avoiding multiple sampling, since both techniques are equally efficient in diagnosing genital tuberculosis by HPE, AFB staining, culture, or PCR.
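
A minimal sketch of how agreement between the two paired sampling techniques could be quantified is shown below. Cohen's kappa is used here as an illustrative chance-corrected agreement measure alongside the chi-square test mentioned above; the binary result vectors are invented placeholders, not the study's 75-patient data.

```python
# Illustrative agreement analysis for two paired diagnostic techniques (EB vs EA).
# The result vectors are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# 1 = positive for genital TB, 0 = negative, one entry per patient
eb = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1])
ea = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1])

kappa = cohen_kappa_score(eb, ea)   # chance-corrected agreement
table = np.array([[np.sum((eb == 1) & (ea == 1)), np.sum((eb == 1) & (ea == 0))],
                  [np.sum((eb == 0) & (ea == 1)), np.sum((eb == 0) & (ea == 0))]])
chi2, p, dof, _ = chi2_contingency(table)

print(f"Cohen's kappa = {kappa:.2f}, chi-square p = {p:.3f}")
```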

Keywords: acid fast bacilli (AFB), histopathology examination (HPE), polymerase chain reaction (PCR), endometrial biopsy curettage

Procedia PDF Downloads 306
86 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces

Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi

Abstract:

For any construction project, slope calculations are necessary in order to evaluate constructability on the site: the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects in order to assess cut-and-fill volumes and to determine pipe inverts. Several established instruments are commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, and tacheometer, as well as the Henry method, and surveyors are very familiar with their use. However, these have drawbacks that cannot be neglected in major surveying work. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain are difficult with these instruments and survey teams. In addition, determining gentle slopes for road and sewer drainage construction in congested urban areas with these instruments is not easy. This paper aims to develop a method that requires minimal field work and minimal instrumentation, no high-end technology or software, and low cost. Using basic, handy surveying accessories - a plane table with a weighing machine fixed to it, standard weights, an alidade, a tripod, and ranging rods - the method should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Being simple and easy to understand and perform, it also allows people from the local rural area to be trained easily. The idea behind the proposed method is based on the principle of resolution of weight components: when an object of standard weight W is placed on an inclined surface with a weighing machine below it, the machine measures only the cosine component of the weight, so the slope can be determined from the relation between the true (actual) weight and the apparent weight. A proper procedure is followed, which includes locating the site, centering and sighting, fixing the whole setup at the identified station, and finally taking the readings. A set of slope determination experiments, for mild and moderate slopes, was carried out with the proposed method and with a theodolite, both in a controlled environment on the college campus and in an uncontrolled environment on an actual site. The slopes determined by the proposed method were compared with those determined by the established instrument. For example, it was observed for mild slopes that, over the same distances, the difference between the slope obtained by the proposed method and that obtained by the established method ranged from 4' for a distance of 8 m to 2°15'20" for a distance of 16 m in the uncontrolled environment. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The correlation between the proposed method and the established method is good, ranging from 0.91 to 0.99 for the various combinations of mild and moderate slopes with the controlled and uncontrolled environments.
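
The core relation behind the proposed method is θ = arccos(W_apparent / W_true), where W_true is the known standard weight and W_apparent is the reading of the weighing machine on the incline. The short sketch below applies this relation to invented readings; the numbers are illustrative, not the paper's field data.

```python
# Slope angle from the resolution-of-weight principle described above:
# the machine on the incline reads only the cosine component of a standard weight.
import math

def slope_angle_deg(true_weight, apparent_weight):
    """theta = arccos(W_apparent / W_true); weights in any consistent unit."""
    ratio = min(apparent_weight / true_weight, 1.0)   # guard against reading noise
    return math.degrees(math.acos(ratio))

# illustrative readings, not the paper's field data
W_true = 5.000       # kg, standard weight
W_apparent = 4.975   # kg, reading on the inclined plane table
theta = slope_angle_deg(W_true, W_apparent)
print(f"slope = {theta:.2f} deg (gradient about 1 in {1 / math.tan(math.radians(theta)):.0f})")
```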

Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction

Procedia PDF Downloads 58
85 Interdisciplinary Method Development - A Way to Realize the Full Potential of Textile Resources

Authors: Nynne Nørup, Julie Helles Eriksen, Rikke M. Moalem, Else Skjold

Abstract:

Despite a growing focus on the high environmental impact of textiles, textile waste has only recently been considered part of the waste field. Consequently, there is a general lack of knowledge and data in this field. In particular, the lack of a common perception of textiles creates several problems, e.g., in recognizing the full material potential the fraction contains, which is crucial if textiles are to enter the circular economy. This study aims to qualify a method for making the resources in textile waste visible in a way that allows them to be moved as high up the waste hierarchy as possible. Textiles are complex and cover many different types of products, fibers, combinations of fibers, and production methods. In garments alone there is great variety, even when narrowed down to undergarments only. Yet textile waste is often reduced to a single fraction, assessed solely by quantity and compared with the quantities of other waste fractions. Disregarding this complexity and reducing textiles to a single fraction covering everything made of textiles increases the risk of neglecting the value of the materials, both in terms of their properties and economically. Instead of trying to fit textile waste into the current, primarily linear waste system, in which volume is a key part of the business models, this study focused on integrating textile waste as a resource in the design and production phase. The study combined interdisciplinary methods - the determination of replacement rates used in Life Cycle Assessment and Mass Flow Analysis - with the designer's toolbox, thereby activating the properties of textile waste in a way that can unleash its potential optimally. It was hypothesized that by drawing on Denmark's design tradition and high level of craftsmanship, it is possible to find solutions that can be used today and to create circular resource models that reduce the use of virgin fibers. Through waste samples, case studies, and testing of various design approaches, the study explored how to operationalize the method so that a product is kept as a material after end-use and only then processed at fiber level, in order to obtain the best environmental utilization. The study showed that the designers' ability to decode the properties of the materials, and their understanding of craftsmanship, were decisive for how well the materials could be utilized today. The later in the life cycle the textiles appeared as waste, the more demanding the description of the materials had to be, especially if the best possible use of the resources, and thus a higher replacement rate, was to be achieved. In addition, adaptation of current production was required, because the materials varied more. The study found good indications that part of the solution is to use geodata, i.e., information on where in the life cycle the materials were discarded. An important conclusion is that a fully developed method can help support better utilization of textile resources; however, it still requires a better understanding of materials by designers, as well as structural changes in business and society.

Keywords: circular economy, development of sustainable processes, environmental impacts, environmental management of textiles, environmental sustainability through textile recycling, interdisciplinary method development, resource optimization, recycled textile materials and the evaluation of recycling, sustainability and recycling opportunities in the textile and apparel sector

Procedia PDF Downloads 56
84 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is nascent but rapidly growing. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be estimated accurately and rapidly at different spatial scales and resolutions. Conventional tools such as household surveys and interviews do not suffice in this regard: while they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation swift enough to give timely insight into people and places. It is this void that satellite imagery fills. Previously, this was nearly impossible to implement because of the sheer volume of data that needed processing. Recent advances in machine learning, especially deep learning methods such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have therefore seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 data at 10 m per pixel for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model, 0.69-0.79. This superhuman performance of the model is all the more significant given that it was trained on the relatively coarse 10-meter resolution satellite data, whereas the human readers estimated welfare levels from the higher 0.6 m spatial resolution data in which key markers of poverty and slums, roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
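
The rank correlation used above to compare human readers and the model can be computed in a few lines, as sketched below; the quintile arrays are synthetic placeholders, not the Tanzanian DHS clusters or the study's actual ratings.

```python
# Spearman rank correlation between ground-truth wealth quintiles and predicted
# welfare ratings, the metric used above to compare human readers and the model.
# The arrays are synthetic placeholders, not the 608 DHS clusters.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
truth = rng.integers(1, 6, size=608)                           # wealth quintile 1-5 per cluster
human = np.clip(truth + rng.integers(-2, 3, size=608), 1, 5)   # noisier ratings
model = np.clip(truth + rng.integers(-1, 2, size=608), 1, 5)   # closer to the truth

rho_h, p_h = spearmanr(truth, human)
rho_m, p_m = spearmanr(truth, model)
print(f"human readers: rho = {rho_h:.2f}, model: rho = {rho_m:.2f}")
```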

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 71
83 Characterization of Platelet Mitochondrial Metabolism in COVID-19 caused Acute Respiratory Distress Syndrome (ARDS)

Authors: Anna Höfer, Johannes Herrmann, Patrick Meybohm, Christopher Lotz

Abstract:

Mitochondria are pivotal for energy supply and the regulation of cellular functions. Deficiencies of mitochondrial metabolism have been implicated in diverse stress conditions, including infections. Platelets are key mediators of thrombo-inflammation during the development and resolution of acute respiratory distress syndrome (ARDS). Previous data point to an exhausted platelet phenotype in critically ill patients with coronavirus disease 2019 (COVID-19), impacting the course of disease. The objective of this work was to characterize platelet mitochondrial metabolism in patients suffering from COVID-19 ARDS. A longitudinal analysis of platelet mitochondrial metabolism in 24 patients with COVID-19-induced ARDS compared to 35 healthy controls (ctrl) was performed. Blood samples were analyzed at two time points (t1 = day 1; t2 = day 5-7 after study inclusion). The activity of mitochondrial citrate synthase was measured photometrically. The impact of oxidative stress on mitochondrial permeability was assessed by a photometric calcium-induced swelling assay, and the activity of superoxide dismutase (SOD) by an SOD assay kit. The amount of protein carbonylation and the activity of mitochondrial complexes I-IV were determined photometrically. Levels of interleukin (IL)-1α, IL-1β, and tumor necrosis factor α (TNF-α) were measured by a multiplex assay kit. Median age was 54 years, 63% of patients were male, and BMI was 29.8 kg/m². SOFA (12; IQR: 10-15) and APACHE II (27; IQR: 24-30) scores indicated critical illness. Median Murray score was 3.4 (IQR: 2.8-3.4), 21/24 (88%) patients required mechanical ventilation, and 14/24 (58%) required V-V ECMO support. Platelet counts in ARDS did not change during the ICU stay (t1: 212 vs. t2: 209 x10⁹/L). However, mean platelet volume (MPV) significantly increased (t1: 10.6 vs. t2: 11.9 fL; p<0.0001). Citrate synthase activity showed no significant differences between ctrl and ARDS patients. Calcium-induced swelling was more pronounced in patients at t1 compared to t2 and to ctrl (50 µM; t1: 0.006 vs. ctrl: 0.016 ΔOD; p=0.001). The amount of protein carbonylation, a marker of irreversible proteomic modification, constantly increased during the ICU stay and compared to ctrl, without reaching significance. In parallel, superoxide dismutase activity gradually declined during ICU treatment vs. ctrl (t2: -29 vs. ctrl: -17%; p=0.0464). Complex I analysis revealed significantly higher activity in ARDS vs. ctrl (t1: 0.633 vs. ctrl: 0.415 ΔOD; p=0.0086). There were no significant differences in complex II, III, or IV activity in platelets from ARDS patients compared to ctrl. IL-18 constantly increased during the observation period without reaching significance. IL-1α and TNF-α did not differ from ctrl. However, IL-1β levels were significantly elevated in ARDS (t1: 16.8; t2: 16.6 vs. ctrl: 12.4 pg/mL; p1=0.0335, p2=0.0032). This study reveals new insights into platelet mitochondrial metabolism during COVID-19-caused ARDS. The data point towards enhanced platelet activity with a pronounced turnover rate. We found increased activity of mitochondrial complex I and evidence of enhanced oxidative stress. In parallel, protective mechanisms against oxidative stress were diminished, and elevated levels of IL-1β likely created a pro-apoptotic environment. These mechanisms may contribute to platelet exhaustion in ARDS.

Keywords: acute respiratory distress syndrome (ARDS), coronavirus 19 disease (COVID-19), oxidative stress, platelet mitochondrial metabolism

Procedia PDF Downloads 19
82 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars

Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic

Abstract:

Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate sewage sludge disposal has not been solved, nor is it governed by uniform rules, instructions, or guidelines. Disposal of sewage sludge matters not only for satisfying regulations, but also for choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its byproducts is the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the wastewater treatment cycle, generating only negligible amounts of waste that require landfilling. In many EU countries, significant amounts of sewage sludge are incinerated, producing a new byproduct in the form of ash. Sewage sludge ash has three to five times smaller volume than stabilized and dehydrated sludge, but it still requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. The main oxides present in sewage sludge ash (SiO2, Al2O3, and CaO) are similar to those in cement, so the ash can be considered a latent hydraulic and pozzolanic material. The physical and chemical characteristics of ashes generated from sludge collected at different wastewater treatment plants and incinerated under laboratory conditions at different temperatures were investigated, since this is a prerequisite for subsequent recycling and eventual use in other industries. Research was then carried out by replacing up to 20% of the cement by mass in cement mortar mixes with the different ashes obtained and examining the properties of the mixes in the fresh and hardened state. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, attributed to the increased water demand when ash is used. Although some mixes containing ash showed compressive and flexural strengths equivalent to those of the reference mixes, a slight decrease in strength was generally observed. It is important to point out, however, that compressive strengths always remained above 85% of the reference mix, while flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing the leaching concentrations of heavy metals. The results demonstrate that sewage sludge ash can satisfy the technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, which depends on their origin and the applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by the Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry - from infrastructure to innovative construction products'.

Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal

Procedia PDF Downloads 226
81 Nanoparticle Supported, Magnetically Separable Metalloporphyrin as an Efficient Retrievable Heterogeneous Nanocatalyst in Oxidation Reactions

Authors: Anahita Mortazavi Manesh, Mojtaba Bagherzadeh

Abstract:

Metalloporphyrins are well known to mimic the activity of monooxygenase enzymes. In this regard, metalloporphyrin complexes have been widely employed as valuable biomimetic catalysts, owing to the critical roles they play in oxygen transfer processes during catalytic oxidation reactions. Research in this area pursues different strategies for designing selective, stable, high-turnover catalytic systems. Immobilization of expensive metalloporphyrin catalysts onto supports appears to be a good way to improve their stability, selectivity, and catalytic performance, thanks to the support environment and other advantages with respect to recovery and reuse. In other words, supporting metalloporphyrins provides a physical separation of active sites, thus minimizing catalyst self-destruction and the dimerization of unhindered metalloporphyrins. Furthermore, heterogeneous catalytic oxidation has become an important target, since such processes are used in industry and help to minimize the problems of industrial waste treatment. Hence, the immobilization of these biomimetic catalysts is much desired. An attractive approach to preparing such heterogeneous catalysts is the immobilization of complexes on silica-coated magnetic nanoparticles. Fe3O4@SiO2 magnetic nanoparticles have been studied extensively owing to their superparamagnetism, large surface-area-to-volume ratio, and easy functionalization. Using heterogenized homogeneous catalysts is an attractive option for facile catalyst separation, simplified product work-up, and continuity of the catalytic system. Homogeneous catalysts immobilized on the surface of magnetic nanoparticles (MNPs) occupy a unique position because they combine the advantages of both homogeneous and heterogeneous catalysts. In addition, the superparamagnetic nature of MNPs enables very simple separation of the immobilized catalysts from the reaction mixture using an external magnet. In the present work, an efficient heterogeneous catalyst was prepared by immobilizing a manganese porphyrin on functionalized magnetic nanoparticles through an aminopropyl linkage. The prepared catalyst was characterized by elemental analysis, FT-IR spectroscopy, X-ray powder diffraction, atomic absorption spectroscopy, UV-Vis spectroscopy, and scanning electron microscopy. The application of the immobilized metalloporphyrin in the oxidation of various organic substrates was explored using gas chromatographic (GC) analyses. The results showed that the supported Mn-porphyrin catalyst (Fe3O4@SiO2-NH2@MnPor) is an efficient and reusable catalyst for oxidation reactions. Our catalytic system exhibits high catalytic activity in terms of turnover number (TON) under the reaction conditions used. Leaching and recycling experiments revealed that the nanocatalyst can be recovered several times without loss of activity or magnetic properties. The most important advantage of this heterogenized catalytic system is the simplicity of catalyst separation: the catalyst can be removed from the reaction mixture simply by applying a magnet. Furthermore, the separation and reuse of the magnetic Fe3O4 nanoparticles were very effective and economical.

Keywords: Fe3O4 nanoparticle, immobilized metalloporphyrin, magnetically separable nanocatalyst, oxidation reactions

Procedia PDF Downloads 276
80 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium

Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active compound with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and induction of hypotension; GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small-airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria (LAB) have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis has been found to produce the highest amount of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids appears to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced by the separator; this sludge contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. It is therefore a good substrate for GABA production. GABA is formed primarily by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the glutamate decarboxylase (GAD) enzyme. In the present study, this aim was achieved by growing Lb. brevis rapidly and producing GABA using dairy industry sludge as a suitable growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare the dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C. After fermentation, the supernatant was centrifuged, and the GABA produced was analyzed qualitatively by thin-layer chromatography (TLC) and quantitatively by high-performance liquid chromatography (HPLC). As the percentage of dairy sludge in the culture medium increased, the amount of GABA increased. Evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of GABA-producing strains from LAB, especially high-yielding strains, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly, and the use of dairy industry waste enhances environmental safety while offering the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and by providing the carbon and nitrogen sources it can reduce the final production cost.

Keywords: GABA, Lactobacillus, HPLC, dairy sludge

Procedia PDF Downloads 99
79 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System

Authors: A. Chávez, A. Rodríguez, F. Pinzón

Abstract:

Society is concerned about the environmental, economic, and social impacts generated by solid waste disposal. Landfills, the sites where this waste is confined, are intended to reduce pollution problems and damage to human health. They are technically designed and operated using engineering principles: the residue is stored in a small area, compacted to reduce its volume, and covered with soil layers, preventing problems from the liquid (leachate) and gases produced by the decomposition of organic matter. Despite careful planning and site selection, and monitoring and control of the selected processes, the dilemma of the leachate remains: its extreme concentration of pollutants devastates soil, flora, and fauna, an aggressive process requiring priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O, and sludge; converts suspended and non-settleable solids; transforms nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria forming heterogeneous populations; unicellular fungi, algae, protozoa, and rotifers may also be found, processing the organic carbon source and oxygen as well as the nitrogen and phosphorus that are vital for cell synthesis. The substrate mixture, in this case sludge leachate, molasses, and wastewater, is kept aerated by mechanical aeration diffusers. The biological processes remove dissolved material (< 45 microns), generating biomass that is easily recovered by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen bound in nitrate, releasing nitrogen (N) in the gas phase. Overall, the activated sludge system operates with a hydraulic retention time of about 8 hours, which does not preclude nitrification, which occurs on average at an MLSS value of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, an organic load to biomass inventory ratio under 0.1, and an average sludge age of more than 8 days. This project developed a pilot system using sludge leachate from the Doña Juana landfill (RSDJ), located in Bogota, Colombia, in which the leachate is subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR) before discharge to water bodies, thereby avoiding ecological collapse. The system operated with a retention time of 8 days and a capacity of 30 L, removing more than 90% of BOD and COD from initial values of 1,720 mg/L and 6,500 mg/L, respectively. By promoting deliberate nitrification, it is expected that diffused aeration systems for sludge leachate from landfills may find commercial use.
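
Removal efficiency follows directly from influent and effluent concentrations, (C_in − C_out)/C_in × 100. The sketch below uses the influent values quoted above together with assumed effluent concentrations chosen only to illustrate the reported removal of more than 90%.

```python
# Percent removal of BOD and COD: (C_in - C_out) / C_in * 100.
# Influent values are those quoted above; effluent values are assumed for illustration.
def removal(c_in, c_out):
    return (c_in - c_out) / c_in * 100.0

bod_in, cod_in = 1720.0, 6500.0    # mg/L, initial leachate sludge values
bod_out, cod_out = 150.0, 600.0    # mg/L, assumed effluent after the SBR

print(f"BOD removal: {removal(bod_in, bod_out):.1f} %")
print(f"COD removal: {removal(cod_in, cod_out):.1f} %")
```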

Keywords: sludge, landfill, leachate, SBR

Procedia PDF Downloads 245
78 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Because of its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to six-fold increase in their risk of developing breast cancer. Therefore, studies have sought to quantify mammographic breast density accurately. In clinical routine, radiologists evaluate images through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method suffers from inter- and intra-individual variability. An automatic, objective method for measuring breast density could relieve the radiologist's workload by providing a first opinion. The pectoral muscle, however, is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to quantify mammographic breast density automatically. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and remove the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from the São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for image pre-processing. The algorithm uses image processing tools to automatically segment and remove the pectoral muscle from mammograms. First, a thresholding technique is applied to remove non-biological information from the image. The Hough transform is then applied to find the boundary of the pectoral muscle, followed by an active contour method whose seed is placed on the pectoral muscle boundary found by the Hough transform. An experienced radiologist also performed the pectoral muscle segmentation manually. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method gave a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared the two methods with respect to the area (mm²) of the segmented pectoral muscle and showed the data to lie within the 95% confidence interval, supporting the agreement of the automatic segmentation with the manual method. The method thus proved to be accurate and robust, segmenting rapidly and free of intra- and inter-observer variability. It is concluded that the proposed method can be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for subsequent quantification of the fibroglandular tissue volume present in the breast.
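
The original algorithm was implemented in Matlab; the sketch below reproduces the same sequence of steps (thresholding, Hough line detection, active contour refinement, Jaccard comparison) in Python with scikit-image on a synthetic image. The synthetic triangular "muscle", all parameter values, and the omission of the Bland-Altman analysis are simplifications for illustration only.

```python
# Python/scikit-image sketch of the pipeline described above (the original work
# used Matlab): threshold -> Hough line for the pectoral boundary -> active
# contour seeded on that line -> Jaccard index against a reference mask.
# The synthetic triangular "pectoral muscle" and all parameters are illustrative.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.transform import hough_line, hough_line_peaks
from skimage.feature import canny
from skimage.segmentation import active_contour
from skimage.draw import polygon2mask

# synthetic 256x256 MLO-like image: bright triangle in the upper-left corner
img = np.zeros((256, 256), dtype=float)
truth = polygon2mask(img.shape, np.array([[0, 0], [0, 120], [140, 0]]))
img[truth] = 0.8
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

# 1) threshold to drop background / non-biological information
mask = img > threshold_otsu(img)

# 2) Hough transform on the mask edges to find the straight muscle boundary
h, angles, dists = hough_line(canny(mask.astype(float)))
_, angle, dist = (v[0] for v in hough_line_peaks(h, angles, dists, num_peaks=1))

# 3) seed an active contour along the detected line and refine it
rows = np.linspace(0, 150, 200)
cols = (dist - rows * np.sin(angle)) / np.cos(angle)
snake_init = np.column_stack([rows, np.clip(cols, 0, 255)])
snake = active_contour(img, snake_init, alpha=0.01, beta=1.0, gamma=0.01,
                       boundary_condition="fixed")

# 4) Jaccard index between the refined region and the reference mask
seg = polygon2mask(img.shape, np.vstack([[0, 0], snake]))
jaccard = np.logical_and(seg, truth).sum() / np.logical_or(seg, truth).sum()
print(f"Jaccard index = {jaccard:.2f}")
```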

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 323
77 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that cannot be treated with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and the conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account when the treatment plan is defined. Although the largest fraction of the dose is released in the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). Since SMNs can develop up to decades after treatment, their incidence directly impacts the quality of life of cancer survivors, in particular paediatric patients. Dedicated treatment planning systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins. The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in orthogonal x-y layers. The final size of the device is 10 x 10 x 20 cm³ (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector, as well as the SBAM sensor, is under development and is expected to be fully constructed by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia), and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much greater precision, and drastically reduce the current safety margins. Preliminary measurements with charged-particle beams and Monte Carlo FLUKA simulations will be presented.

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 203
76 Ordered Mesoporous Carbons of Different Morphology for Loading and Controlled Release of Active Pharmaceutical Ingredients

Authors: Aleksander Ejsmont, Aleksandra Galarda, Joanna Goscianska

Abstract:

Smart porous carriers with a defined structure and physicochemical properties are required to release a therapeutic drug with precise control of delivery time and location in the body. Due to their non-toxicity, ordered structure, and chemical and thermal stability, mesoporous carbons can be considered modern carriers for active pharmaceutical ingredients (APIs) whose effectiveness requires frequent dosing regimens. Such an API-carrier system, if designed precisely, may stabilize the pharmaceutical and increase its dissolution, leading to enhanced bioavailability. In the present study, ordered mesoporous carbons of different morphologies and structures, prepared by the hard-template method, were applied as carriers in the adsorption and controlled release of active pharmaceutical ingredients. In the first stage, the carbon materials were synthesized and functionalized with carboxylic groups, by chemical oxidation using an ammonium persulfate solution, and then with amine groups. The materials obtained were thoroughly characterized with respect to morphology (scanning electron microscopy), structure (X-ray diffraction, transmission electron microscopy), characteristic functional groups (FT-IR spectroscopy), acid-base nature of surface groups (Boehm titration), parameters of the porous structure (low-temperature nitrogen adsorption) and thermal stability (TG analysis). This was followed by a series of adsorption and release tests with paracetamol, benzocaine, and losartan potassium. Drug release experiments were performed in simulated gastric fluid of pH 1.2 and phosphate buffer of pH 7.2 or 6.8 at 37.0 °C. The XRD patterns in the small-angle range and the TEM images revealed that functionalization of mesoporous carbons with carboxylic or amine groups leads to decreased ordering of their structure. Moreover, the modification caused a considerable reduction of the carbons' specific surface area and pore volume, but it simultaneously changed their acid-base properties. The mesoporous carbon materials exhibit different morphologies, which affect the host-guest interactions during the adsorption of active pharmaceutical ingredients. All mesoporous carbons show high adsorption capacity towards the drugs. The sorption capacity of the materials is mainly affected by the BET surface area and the structure/size matching between adsorbent and adsorbate. The selected APIs are bound to the surface of the carbon materials mainly by hydrogen bonds, van der Waals forces, and electrostatic interactions. The release behavior of an API is highly dependent on the physicochemical properties of the mesoporous carbons. The release rate of the APIs could be regulated by the introduction of functional groups and by changing the pH of the receptor medium. Acknowledgments: This research was supported by the National Science Centre, Poland (project SONATA-12 no. 2016/23/D/NZ7/01347).
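As an illustration of how such release profiles are commonly analyzed (an assumed analysis step, not necessarily the one used in this study), a cumulative release curve measured at a given pH can be fitted with the empirical Korsmeyer-Peppas model, Mt/M∞ = k·t^n; a minimal Python sketch with made-up data:

```python
# Minimal sketch: fitting a cumulative drug-release curve with the Korsmeyer-Peppas model.
# The time points and release fractions below are illustrative, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * t**n                                            # fraction of API released at time t

t = np.array([5, 15, 30, 60, 120, 240], dtype=float)          # min
released = np.array([0.08, 0.20, 0.33, 0.48, 0.66, 0.81])     # Mt / Minf

(k, n), _ = curve_fit(korsmeyer_peppas, t, released, p0=(0.05, 0.5))
print(f"k = {k:.3f}, release exponent n = {n:.2f}")            # n hints at the transport mechanism
```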

Keywords: ordered mesoporous carbons, sorption capacity, drug delivery, carbon nanocarriers

Procedia PDF Downloads 151
75 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
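The deliberate-overfitting recipe outlined above (fine-tuning a pre-trained Mask R-CNN on a small set of locally relevant building footprints derived from open vector data) can be sketched as follows; the class count, learning rate, epoch count and data loader are assumptions rather than the authors' configuration:

```python
# Minimal sketch: fine-tune a pre-trained Mask R-CNN until it overfits a small footprint dataset.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def building_maskrcnn(num_classes=2):                 # background + building
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

def overfit(model, train_loader, epochs=200, lr=5e-3):
    # Deliberately run many epochs over few, locally sourced image chips so the model
    # overfits the target region; generality is traded for local accuracy obtained quickly.
    model.train()
    optim = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                            lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in train_loader:          # loader yields (list[Tensor], list[dict])
            loss = sum(model(images, targets).values())
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model
```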

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 148
74 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior is elastic. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface; thus, unlike classical yield surface elastoplasticity, plastic states are not restricted to those lying on a surface. Elastoplastic bounding surface models have been improved over the years; however, there is still a need to improve their capabilities in simulating the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work an improved constitutive model that can more accurately predict diverse stress-strain phenomena exhibited by cohesive soils was developed, in particular an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The numerical implementation of the model in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. The model was tested against cohesive soil behavior through extensive comparisons of model simulations to experimental data and has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
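As a hedged illustration of the bounding surface concept (a generic Dafalias-type form, not necessarily the exact expressions of the model presented here), the current stress is mapped radially onto an image point on the bounding surface and the plastic modulus is interpolated with the distance to that image:

\[
\bar{\boldsymbol{\sigma}} = \beta \, \boldsymbol{\sigma}, \qquad
K_p = \bar{K}_p + \hat{H} \, \frac{\delta}{\left\langle \delta_0 - \delta \right\rangle},
\]

where \(\beta \ge 1\) is the radial mapping factor, \(\bar{K}_p\) the plastic modulus evaluated at the image stress \(\bar{\boldsymbol{\sigma}}\), \(\delta\) the distance between the current stress and its image, \(\delta_0\) a reference distance, \(\hat{H}\) a positive hardening shape function, and \(\langle \cdot \rangle\) Macaulay brackets. On the bounding surface \(\delta = 0\) and \(K_p = \bar{K}_p\); inside the surface \(K_p\) grows large, so plastic strains can occur for interior stress states but remain small, which is the defining feature described above.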

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 292
73 Sustainable Recycling Practices to Reduce Health Hazards of Municipal Solid Waste in Patna, India

Authors: Anupama Singh, Papia Raj

Abstract:

Though Municipal Solid Waste (MSW) is a worldwide problem, its implications are enormous in developing countries, as they are unable to provide proper Municipal Solid Waste Management (MSWM) for the large volume of MSW. As a result, the collected waste is dumped openly at landfill sites while the uncollected waste remains strewn on the roadside, often clogging drainage. Such unsafe and inadequate management of MSW causes various public health hazards. For example, MSW, on direct contact or through leachate, contaminates the soil, surface water, and groundwater; open burning causes air pollution; and anaerobic digestion within the piles of MSW releases greenhouse gases, i.e., carbon dioxide and methane (CO2 and CH4), into the atmosphere. Moreover, open dumping can cause the spread of water- and vector-borne diseases like cholera, typhoid, dysentery, and so on. Patna, the capital city of Bihar, one of the most underdeveloped states in India, is a unique representation of this situation. Patna has been identified as a 'garbage city'. Over the last decade there has been an exponential increase in the quantity of MSW generated in Patna. Though a large proportion of this MSW is recyclable in nature, only a negligible portion is recycled. Plastic constitutes the major chunk of the recyclable waste. The chemical composition of plastic is versatile, consisting of toxic compounds such as plasticizers, like adipates and phthalates. Pigmented plastic is highly toxic, and it contains harmful metals such as copper, lead, chromium, cobalt, selenium, and cadmium. The human population becomes vulnerable to an array of health problems as it is exposed to these toxic chemicals multiple times a day through air, water, dust, and food. Analysis of health data shows that in Patna there has been an increase in the incidence of specific diseases, such as diarrhoea, dysentery, acute respiratory infection (ARI), asthma, and other chronic respiratory diseases (CRD). This trend can be attributed to improper MSWM. The results were reiterated through a survey (N=127) conducted during 2014-15 in selected areas of Patna. A random sampling method of data collection was used to better understand the relationship between the different variables affecting public health due to exposure to MSW and the lack of MSWM. The results derived through bivariate and logistic regression analysis of the survey data indicate that segregation of waste at source, segregation behavior, collection bins in the area, distance of collection bins from the residential area, and transportation of MSW are the major determinants of public health issues. Sustainable recycling is a robust method of MSWM whose primary concerns are the environment, society, and the economy. It thus ensures minimal threat to the environment and ecology, consequently improving public health conditions. Hence, this paper concludes that sustainable recycling would be the most viable approach to manage MSW in Patna and would eventually reduce public health hazards.
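A minimal sketch of the kind of logistic regression referred to above, regressing a binary health-outcome indicator on waste-management covariates; the data are simulated and the variable names are illustrative, not those of the survey:

```python
# Minimal sketch: logistic regression of a health outcome on MSW-exposure covariates.
# All data below are simulated; only the structure of the analysis is illustrated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 127                                               # survey size reported above
df = pd.DataFrame({
    "segregates":     rng.integers(0, 2, n),          # waste segregated at source (0/1)
    "bin_in_area":    rng.integers(0, 2, n),          # collection bin present in the area (0/1)
    "bin_distance_m": rng.uniform(20, 600, n),        # distance to the nearest collection bin
})
# Simulated outcome: risk rises with bin distance, falls with segregation and nearby bins
logit_p = -1.0 - 0.8 * df.segregates - 0.6 * df.bin_in_area + 0.004 * df.bin_distance_m
df["ari_case"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

fit = smf.logit("ari_case ~ segregates + bin_in_area + bin_distance_m", data=df).fit(disp=False)
print(fit.summary())
print("Odds ratios:", np.exp(fit.params).round(2).to_dict())
```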

Keywords: municipal solid waste, Patna, public health, sustainable recycling

Procedia PDF Downloads 293
72 Investigating the Application of Composting for Phosphorous Recovery from Alum Precipitated and Ferric Precipitated Sludge

Authors: Saba Vahedi, Qiuyan Yuan

Abstract:

A vast majority of small municipalities and First Nations communities in Manitoba operate facultative or aerated lagoons for wastewater treatment, and most of them use ferric chloride (FeCl3) or alum (usually in the form of Al2(SO4)3·18H2O) as a coagulant for phosphorus removal. The insoluble particles that form during the coagulation process result in a massive volume of sludge, which is typically left in the lagoons. Therefore phosphorus, which is a valuable nutrient, is lost in the process. In this project, the recovery of phosphorus from the sludge produced during phosphorus removal in wastewater lagoons is investigated using a controlled composting process. Objective: The main objective of this project is to compost the alum-precipitated sludge produced in the process of phosphorus removal in wastewater treatment lagoons in Manitoba. The ultimate goal is to obtain a product that meets the characteristics of Class A biosolids in Canada. A number of parameters will be evaluated, including the bioavailability of nutrients in the composted sludge and the toxicity of the sludge, and in particular the bioavailability of phosphorus in the final compost product, which will be used as a source of P and compared to a commercial fertilizer (monoammonium phosphate, MAP). Experimental setup: Three batches of compost piles have been run using the alum sludge and the ferric sludge. The alum phosphate sludge was collected from an innovative phosphorus removal system at the RM of Taché. The collected sludge was sent to the ALS laboratory to analyze the C/N ratio, TP, TN, TC, total Al, moisture content, pH, and metal concentrations. Wood chips, used as the bulking agent, were collected at the RM of Taché landfill. The sludge in the three piles was mixed with three parts of dry wood chips. The mixture was turned manually every week, and the temperature, moisture content, and pH were monitored twice a week. The temperature of the mixtures remained above 55 °C for two weeks, and each pile was kept for ten weeks to mature. The final products have been applied to two different plants to investigate the bioavailability of P in the compost product as well as the toxicity of the product. The two plant species were selected based on their sensitivity, growth time, and compatibility with the Manitoba climate: canola and switchgrass. The pots are weighed and watered every day to replenish moisture lost by evapotranspiration. A control experiment is also conducted using topsoil and chemical fertilizer (MAP). The experiment is carried out in a growth room maintained at a day/night temperature regime of 25/15 °C, a relative humidity of 60%, and a photoperiod of 16 h. A total of three cropping (seeding to harvest) cycles need to be completed, each 50 d in duration. Harvested biomass is weighed and oven-dried for 72 h at 60 °C. In the first growth cycle, the canola and switchgrass grown in the alum sludge compost were harvested at day 50, oven-dried, chopped, finely ground in a mill grinder (< 0.2 mm), and digested using the wet oxidation method, in which plant tissue samples are digested with H2SO4 (99.7%) and H2O2 (30%) in an acid block digester. The digested plant samples will then be analyzed to measure the amount of total phosphorus.

Keywords: wastewater treatment, phosphorus removal, composting alum sludge, bioavailability of phosphorus

Procedia PDF Downloads 50
71 Buoyant Gas Dispersion in a Small Fuel Cell Enclosure: A Comparison Study Using Plain and Pressed Louvre Vent Passive Ventilation Schemes

Authors: T. Ghatauray, J. Ingram, P. Holborn

Abstract:

The transition from a 'carbon rich', fossil fuel dependent society to a 'sustainable' and 'renewable' hydrogen based one will see the deployment of hydrogen fuel cells (HFC) in transport applications and in the generation of heat and power for buildings, as part of a decentralised power network. Many deployments will be low power HFCs for domestic combined heat and power (CHP) and commercial 'transportable' HFCs for environmental situations, such as lighting and telephone towers. For broad commercialisation of small fuel cells to be achieved, there needs to be significant confidence in their safety in both domestic and environmental applications. Low power HFCs are housed in protective steel enclosures. Standard enclosures have plain rectangular ventilation openings intended for thermal management of electronics and not the dispersion of a buoyant gas. Degradation of the HFC or supply pipework in use could lead to a low-level leak and a build-up of hydrogen gas in the enclosure. Hydrogen's wide flammable range (4-75%) is a significant safety concern, with ineffective enclosure ventilation having the potential to allow flammable mixtures to develop, with the risk of explosion. Mechanical ventilation is effective at managing enclosure hydrogen concentrations, but drains HFC power and is vulnerable to failure. This is undesirable in low power and remote installations, and reliable passive ventilation systems are preferred. Passive ventilation depends upon buoyancy driven flow, with the size, shape and position of ventilation openings critical for producing predictable flows and maintaining low buoyant gas concentrations. With environmentally sited enclosures, ventilation openings with pressed horizontal and angled louvres are preferred to protect the HFC and electronics inside. There is an economic cost to adding louvres, but also a safety concern. A question arises over whether the use of pressed louvre vents impairs enclosure passive ventilation performance when compared to plain vents of the same opening area. Comparative tests of pressed louvre and plain vents of the same opening area were undertaken in a small enclosure (0.144 m³). A displacement ventilation arrangement was incorporated into the enclosure with opposing upper and lower ventilation openings. A range of vent areas was tested. Helium (used as a safe analogue for hydrogen) was released from a 4 mm nozzle at the base of the enclosure to simulate a hydrogen leak, at leak rates from 1 to 10 lpm. Helium sensors were used to record concentrations at eight heights in the enclosure. The enclosure was otherwise empty. These tests determined that the use of pressed and angled louvre ventilation openings on the enclosure impaired the passive ventilation flow and increased helium concentrations in the enclosure. High-level stratified buoyant gas layers were also found to be deeper than with plain vent openings and were within the flammable range. The presence of gas within the flammable range is of concern, particularly as the addition of the fuel cell and electronics in the enclosure would further reduce the available volume and increase concentrations. The opening area of louvre vents would need to be greater than that of equivalent plain vents to achieve comparable ventilation flows, or alternative schemes would need to be considered.
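A rough first-order estimate shows why a lower effective vent area (for instance, a lower discharge coefficient at pressed louvres) reduces the buoyancy-driven flow and therefore raises the steady buoyant-gas concentration for a given leak rate; the orifice model and all numbers below are assumptions for illustration, not the authors' analysis:

```python
# Rough estimate of buoyancy-driven displacement ventilation through two openings in series.
# Discharge coefficients, vent areas and gas fractions are illustrative assumptions.
import math

def buoyant_flow(a_top_m2, a_bot_m2, dz_m, g_prime, cd=0.6):
    # Effective area of the two openings combined in series (orifice model)
    a_eff = (cd * a_top_m2) * (cd * a_bot_m2) / math.sqrt((cd * a_top_m2) ** 2 + (cd * a_bot_m2) ** 2)
    return a_eff * math.sqrt(2.0 * g_prime * dz_m)            # volume flow in m^3/s

# Reduced gravity for a layer of roughly 5% helium in air (rho_He/rho_air ~ 0.138)
g_prime = 9.81 * 0.05 * (1.0 - 0.138)

q_plain = buoyant_flow(2.0e-3, 2.0e-3, 0.3, g_prime, cd=0.6)   # plain rectangular vents
q_louvre = buoyant_flow(2.0e-3, 2.0e-3, 0.3, g_prime, cd=0.4)  # louvred vents: assumed extra losses
print(f"plain: {q_plain * 1000:.2f} L/s   louvred: {q_louvre * 1000:.2f} L/s")
```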

Keywords: enclosure, fuel cell, helium, hydrogen safety, louvre vent, passive ventilation

Procedia PDF Downloads 249
70 Rheolaser: Light Scattering Characterization of Viscoelastic Properties of Hair Cosmetics That Are Related to Performance and Stability of the Respective Colloidal Soft Materials

Authors: Heitor Oliveira, Gabriele De-Waal, Juergen Schmenger, Lynsey Godfrey, Tibor Kovacs

Abstract:

Rheolaser MASTER™ makes use of the multiple scattering of light, caused by scattering objects in a continuous medium (such as droplets and particles in colloids), to characterize the viscoelasticity of soft materials. It offers an alternative to conventional rheometers for characterizing the viscoelasticity of products such as hair cosmetics. Up to six measurements at controlled temperature can be carried out simultaneously (10-15 min), and the method requires only minor sample preparation work. In contrast to conventional rheometer-based methods, no mechanical stress is applied to the material during the measurements. Therefore, the properties of the exact same sample can be monitored over time, as in aging and stability studies. We determined the elastic index (EI) of water/emulsion mixtures (1 ≤ fat alcohols (FA) ≤ 5 wt%) and emulsion/gel-network mixtures (8 ≤ FA ≤ 17 wt%) and compared it with the elastic/storage modulus (G') of the respective samples measured with a conventional TA rheometer with flat-plate geometry. As expected, it was found that log(EI) vs log(G') presents a linear behavior. Moreover, log(EI) increased linearly with solids level over the entire range of compositions (1 ≤ FA ≤ 17 wt%), while rheometer measurements were limited to samples down to 4 wt% solids level; a concentric cylinder geometry would be required for more diluted samples (FA < 4 wt%), and rheometer results from different sample holder geometries are not comparable. The plot of the Rheolaser output parameter solid-liquid balance (SLB) vs EI was suitable for monitoring product aging processes. These data could quantitatively describe observations such as the formation of lumps over aging time. Moreover, this method allowed us to identify that the different specifications of a key raw material (RM < 0.4 wt%) in the respective gel-network (GN) product have only a minor impact on the product's viscoelastic properties and are not perceivable by consumers after a short aging time. Broadening of an RM spec range typically has a positive impact on cost savings. Last but not least, the photon path length (λ*), proportional to droplet size and inversely proportional to the volume fraction of scattering objects according to Mie theory, and the EI were suitable for characterizing product destabilization processes (e.g., coalescence and creaming) and for predicting product stability about eight times faster than our standard methods. Using these parameters we could successfully identify formulation and process parameters that resulted in unstable products. In conclusion, Rheolaser allows quick and reliable characterization of the viscoelastic properties of hair cosmetics that are related to their performance and stability. It operates over a broad range of product compositions and has applications spanning from the formulation of our hair cosmetics to fast release criteria in our production sites. Last but not least, this powerful tool has a positive impact on R&D development time, enabling faster delivery of new products to the market, and consequently on cost savings.
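The reported log(EI) versus log(G') linearity can be checked with a straight-line fit on log-transformed values; the numbers in this sketch are made up for illustration and are not the study's measurements:

```python
# Minimal sketch: linear fit of log(EI) against log(G'); all values are illustrative.
import numpy as np

g_storage = np.array([12.0, 40.0, 130.0, 420.0, 1500.0])        # G' in Pa (assumed)
elastic_index = np.array([2e-4, 9e-4, 3.5e-3, 1.2e-2, 5e-2])    # EI (assumed, arbitrary units)

slope, intercept = np.polyfit(np.log10(g_storage), np.log10(elastic_index), 1)
r = np.corrcoef(np.log10(g_storage), np.log10(elastic_index))[0, 1]
print(f"log(EI) = {slope:.2f} * log(G') + {intercept:.2f},  r = {r:.3f}")
```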

Keywords: colloids, hair cosmetics, light scattering, performance and stability, soft materials, viscoelastic properties

Procedia PDF Downloads 143
69 Hydrodynamics in Wetlands of Brazilian Savanna: Electrical Tomography and Geoprocessing

Authors: Lucas M. Furlan, Cesar A. Moreira, Jepherson F. Sales, Guilherme T. Bueno, Manuel E. Ferreira, Carla V. S. Coelho, Vania Rosolen

Abstract:

Located in the western part of the State of Minas Gerais, Brazil, the study area consists of a savanna environment, represented by a sedimentary plateau and a soil cover composed of lateritic and hydromorphic soils; in the latter, deferruginization and the concentration of high-alumina clays occur, and these clays are exploited as refractory material. In the hydromorphic topographic depressions (wetlands) the hydropedological relationships are little known, but it is observed that in periods of rainfall the depressed region behaves like a natural seasonal reservoir, which suggests that the wetlands on the surface of the plateau are places of recharge of the aquifer. Aquifer recharge areas are extremely important for the sustainable social, economic and environmental development of societies. The understanding of hydrodynamics in relation to the functioning of the ferruginous and hydromorphic lateritic soil system in the savanna environment is a subject rarely explored in the literature, especially through the joint application of geoprocessing by UAV (Unmanned Aerial Vehicle) and electrical tomography. The objective of this work is to understand the hydrogeological dynamics of a wetland (with an area of 426,064 m²) in the Brazilian savanna, as well as the subsurface architecture of hydromorphic depressions in relation to the recharge of aquifers. The wetland was compartmentalized into three different regions according to the geoprocessing, and hydraulic conductivity studies were performed in each of these three portions. Electrical tomography was performed on nine lines 80 meters long and spaced 10 meters apart (direction N45), plus one 80-meter line perpendicular to all the others. With these data, it was possible to generate a 3D cube. The integrated analysis showed that the area behaves like a natural seasonal reservoir in the months of greatest precipitation (December, 289 mm; January, 277.9 mm; February, 213.2 mm), because the hydraulic conductivity is very low in all areas. For the aerial images, geotag correction was performed, that is, the coordinates of the images were corrected using the coordinates obtained with the Precise Point Positioning service of the Brazilian Institute of Geography and Statistics (IBGE-PPP). The orthomosaic and the digital surface model (DSM) were then generated, from which specific geoprocessing yielded the volume of water that the wetland can contain: 780,922 m³ in total, 265,205 m³ in the region with intermediate flooding, and 49,140 m³ in the central region, where the greatest accumulation of water was observed. Through the electrical tomography it was possible to identify that, down to a depth of 6 meters, the water infiltrates vertically in the central region. From a depth of 8 meters, the water encounters a more resistive layer and the infiltration begins to occur horizontally, tending to concentrate the recharge of the aquifer to the northeast and southwest of the wetland. The hydrodynamics of the area is complex and presents many challenges to its understanding. The next step is to relate the hydrodynamics to the evolution of the landscape, with the enrichment of high-alumina clays, and to propose a management model for the seasonal reservoir.
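The storage volumes quoted above amount to a cell-by-cell integration of water depth over the digital surface model; a minimal sketch of such a calculation (the file name and water level are hypothetical, and this is not necessarily the authors' exact workflow) is:

```python
# Minimal sketch: seasonal storage volume from a UAV digital surface model (DSM).
# For every cell below the chosen water level, accumulate depth * cell area.
import numpy as np
import rasterio

def storage_volume(dsm_path, water_level_m):
    with rasterio.open(dsm_path) as src:
        z = src.read(1, masked=True).astype(float)             # ground elevation per cell
        cell_area = abs(src.transform.a * src.transform.e)     # pixel width * height in m^2
    depth = np.clip(water_level_m - z, 0.0, None)               # water depth, zero where dry
    return float(depth.filled(0.0).sum() * cell_area)           # volume in m^3

# volume = storage_volume("wetland_dsm.tif", water_level_m=763.2)   # hypothetical inputs
```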

Keywords: electrical tomography, hydropedology, unmanned aerial vehicle, water resources management

Procedia PDF Downloads 113
68 How Obesity Sparks the Immune System and Lessons from the COVID-19 Pandemic

Authors: Husham Bayazed

Abstract:

Purpose of Presentation: Obesity and overweight are among the biggest health challenges of the 21st century, according to the WHO. Obese individuals suffer different courses of disease, from infections and allergies to cancer, and even respond differently to some treatment options. Of note, obesity often seems to predispose to and trigger several secondary diseases such as diabetes, arteriosclerosis, or heart attacks. For decades it has appeared that immunological signals drive inflammatory processes in obese individuals with the aforementioned conditions. This review aims to shed light on how obesity sparks or rewires the immune system and predisposes to such unpleasant health outcomes. Moreover, lessons from the COVID-19 pandemic confirm that people living with pre-existing conditions such as obesity can develop severe acute respiratory syndrome (SARS), and it remains to be elucidated how obesity and the associated distortion of inflammatory processes contribute to severe COVID-19 outcomes. Recent Findings: In recent clinical studies, obesity has been linked to alterations of the immune system in different ways. Adipose tissue (AT) is a reservoir of different tissue-resident immune cells and their released mediators, making it a secondary immune organ. Adipocytes per se secrete several pro-inflammatory cytokines (IL-6, IL-4, MCP-1, and TNF-α) involved in the activation of macrophages, resulting in chronic low-grade inflammation. The correlation between obesity and T cell dysregulation is pivotal in rewiring the immune system. Of note, autophagy in adipose tissue further rewires the immune system through the outburst of leptin and adiponectin, adipokines that influence pro-inflammatory immune functions. These immune alterations in obese individuals are collectively incriminated in triggering several metabolic disorders and in increasing cancer incidence and susceptibility to different infections. During the COVID-19 pandemic, it was verified that patients with pre-existing obesity are at greater risk of suffering severe and fatal clinical outcomes. Besides obese people suffering from increased airway resistance and reduced lung volume, ACE2 expression in adipose tissue seems to be high, even higher than that in the lungs, which increases the incidence of infection. In essence, obesity with pre-existing pro-inflammatory cytokines such as IL-6 is a risk factor for cytokine storm and coagulopathy among COVID-19 patients. Summary: It is well documented that obesity is associated with chronic systemic low-grade inflammation, which sparks and alters different pillars of the immune system, triggers different metabolic disorders, and increases cancer incidence and the susceptibility to infections. The pre-existing chronic inflammation in obese patients, together with the augmented inflammatory response against the viral infection, seems to increase the susceptibility of these patients to developing severe COVID-19. Although the new weight loss drugs and bariatric surgery are considered breakthroughs for obesity treatment, preventing obesity is easier than treating it once it has taken hold. However, new insights into the link between obesity and the immune system open a debate on the role of immunotherapy and the regulation of immune cells in treating diet-induced obesity.

Keywords: immunity, metabolic disorders, cancer, COVID-19

Procedia PDF Downloads 48
67 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds the potential in addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest fidelity source of both canopy and below-canopy structural features, but usage is limited in both coverage and cost, requiring manual deployment to map out large, forested areas. While aerial laser scanning (ALS) remains a reliable avenue of LIDAR active remote sensing, ALS is also cost-restrictive in deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability in research for the accurate construction of vegetation 3-D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called Deep Learning (DL) that show promise in recent research on 3-D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3-D tree point clouds (the truth values are denoted by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds as well as the error of extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimum locally sourced bio-inventory metric information as an input in ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3-D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
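Reconstruction error against the TLS reference clouds is commonly scored with the symmetric Chamfer distance; the abstract does not name its exact error measure, so the following is only an assumed, minimal example of such a metric:

```python
# Minimal sketch: symmetric Chamfer distance between a reconstructed cloud and the TLS truth.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(reconstructed, reference):
    # reconstructed: (N, 3) xyz points from the DL model; reference: (M, 3) TLS points
    d_rec, _ = cKDTree(reference).query(reconstructed)     # nearest TLS point per predicted point
    d_ref, _ = cKDTree(reconstructed).query(reference)     # nearest predicted point per TLS point
    return d_rec.mean() + d_ref.mean()

# Illustrative call with random stand-ins for the two clouds
rng = np.random.default_rng(1)
print(chamfer_distance(rng.normal(size=(500, 3)), rng.normal(size=(800, 3))))
```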

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 67
66 Tailoring Structural, Thermal and Luminescent Properties of Solid-State MIL-53(Al) MOF via Fe³⁺ Cation Exchange

Authors: T. Ul Rehman, S. Agnello, F. M. Gelardi, M. M. Calvino, G. Lazzara, G. Buscarino, M. Cannas

Abstract:

Metal-Organic Frameworks (MOFs) have emerged as promising candidates for detecting metal ions owing to their large surface area, customizable porosity, and diverse functionalities. In recent years, there has been a surge in research focused on MOFs with luminescent properties. These frameworks are constructed through coordinated bonding between metal ions and multi-dentate ligands, resulting in inherent fluorescent structures. Their luminescent behavior is influenced by factors like structural composition, surface morphology, pore volume, and interactions with target analytes, particularly metal ions. MOFs exhibit various sensing mechanisms, including photo-induced electron transfer (PET) and charge transfer processes such as ligand-to-metal (LMCT) and metal-to-ligand (MLCT) transitions. Among these, MIL-53(Al) stands out due to its flexibility, stability, and specific affinity towards certain metal ions, making it a promising platform for selective metal ion sensing. This study investigates the structural, thermal, and luminescent properties of MIL-53(Al) metal-organic framework (MOF) upon Fe3+ cation exchange. Two separate sets of samples were prepared to activate the MOF powder at different temperatures. The first set of samples, referred to as MIL-53(Al), activated (120°C), was prepared by activating the raw powder in a glass tube at 120°C for 12 hours and then sealing it. The second set of samples, referred to as MIL-53(Al), activated (300°C), was prepared by activating the MIL-53(Al) powder in a glass tube at 300°C for 70 hours. Additionally, 25 mg of MIL-53(Al) powder was dispersed in 5 mL of Fe3+ solution at various concentrations (0.1-100 mM) for the cation exchange experiment. The suspension was centrifuged for five minutes at 10,000 rpm to extract MIL-53(Al) powder. After three rounds of washing with ultrapure water, MIL-53(Al) powder was heated at 120°C for 12 hours. For PXRD and TGA analyses, a sample of the obtained MIL-53(Al) was used. We also activated the cation-exchanged samples for time-resolved photoluminescence (TRPL) measurements at two distinct temperatures (120 and 300°C) for comparative analysis. Powder X-ray diffraction patterns reveal amorphization in samples with higher Fe3+ concentrations, attributed to alterations in coordination environments and ion exchange dynamics. Thermal decomposition analysis shows reduced weight loss in Fe3+-exchanged MOFs, indicating enhanced stability due to stronger metal-ligand bonds and altered decomposition pathways. Raman spectroscopy demonstrates intensity decrease, shape disruption, and frequency shifts, indicative of structural perturbations induced by cation exchange. Photoluminescence spectra exhibit ligand-based emission (π-π* or n-π*) and ligand-to-metal charge transfer (LMCT), influenced by activation temperature and Fe3+ incorporation. Quenching of luminescence intensity and shorter lifetimes upon Fe3+ exchange result from structural distortions and Fe3+ binding to organic linkers. In a nutshell, this research underscores the complex interplay between composition, structure, and properties in MOFs, offering insights into their potential for diverse applications in catalysis, gas storage, and luminescent devices.
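The reported shortening of photoluminescence lifetimes upon Fe³⁺ exchange would typically be quantified by fitting the time-resolved decays with a multi-exponential model; a minimal sketch with synthetic data (the model choice, time window and parameters are assumptions, not the study's fitting routine):

```python
# Minimal sketch: bi-exponential fit of a TRPL decay and an intensity-weighted mean lifetime.
# The decay trace below is synthetic; parameters and time window are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 50.0, 200)                                     # ns
decay = biexp(t, 0.7, 2.0, 0.3, 12.0) + np.random.default_rng(0).normal(0, 0.01, t.size)

(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, decay, p0=(0.5, 1.0, 0.5, 10.0))
tau_mean = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)  # intensity-weighted mean
print(f"tau1 = {tau1:.2f} ns, tau2 = {tau2:.2f} ns, <tau> = {tau_mean:.2f} ns")
```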

Keywords: Fe³⁺ cation exchange, luminescent metal-organic frameworks (LMOFs), MIL-53(Al), solid-state analysis

Procedia PDF Downloads 25
65 Estimated Heat Production, Blood Parameters and Mitochondrial DNA Copy Number of Nellore Bulls with High and Low Residual Feed Intake

Authors: Welder A. Baldassini, Jon J. Ramsey, Marcos R. Chiaratti, Amália S. Chaves, Renata H. Branco, Sarah F. M. Bonilha, Dante P. D. Lanna

Abstract:

With increased production costs there is a need for animals that are more efficient in terms of meat production. In this context, the role of mitochondrial DNA (mtDNA) in physiological processes in liver, muscle and adipose tissues may account for inter-animal variation in energy expenditure and heat production. The purpose of this study was to investigate whether the amounts of mtDNA in liver, muscle and adipose tissue (subcutaneous and visceral depots) of Nellore bulls are associated with residual feed intake (RFI) and estimated heat production (EHP). Eighteen animals were individually fed in a feedlot for 90 days. RFI values were obtained by regression of dry matter intake (DMI) on average daily gain (ADG) and mid-test metabolic body weight (BW). The animals were classified into low (more efficient) and high (less efficient) RFI groups. The bulls were then randomly distributed in individual pens, where they were given excess feed twice daily to result in 5 to 10% orts for 90 d, with a diet containing 15% crude protein and 2.7 Mcal ME/kg DM. The heart rate (HR) of the bulls was monitored for 4 consecutive days and used for the calculation of EHP. Electrodes were fitted to the bulls with stretch belts (POLAR RS400; Kempele, Finland). To calculate the oxygen pulse (O2P), oxygen consumption was obtained using a facemask connected to a gas analyzer (EXHALYZER, ECOMedics, Zurich, Switzerland) while HR was simultaneously measured over a 15-minute period. Daily oxygen (O2) consumption was calculated by multiplying the volume of O2 per beat by the total daily beats. EHP was calculated by multiplying O2P by the average HR obtained during the 4 days, assuming 4.89 kcal/L of O2, and daily EHP was expressed in kilocalories per day per kilogram of metabolic BW (kcal/day/kg BW0.75). Blood samples were collected between days 45 and 90 after the beginning of the trial period in order to measure the hemoglobin concentration and the hematocrit. The bulls were slaughtered in an experimental slaughterhouse in accordance with current guidelines. Immediately after slaughter, a section of liver, a portion of longissimus thoracis (LT) muscle, a portion of subcutaneous fat (surrounding the LT muscle) and portions of visceral fat (kidney, pelvic and inguinal fat) were collected. Samples of liver, muscle and adipose tissue were used to quantify the mtDNA copy number per cell. The number of mtDNA copies was determined by normalization of the mtDNA amount against a single-copy nuclear gene (B2M). Means of EHP, hemoglobin and hematocrit of high and low RFI bulls were compared using two-sample t-tests. Additionally, one-way ANOVA was used to compare the mtDNA quantification considering the main effect of RFI group. We found lower EHP (83.047 vs. 97.590 kcal/day/kg BW0.75; P < 0.10), hemoglobin concentration (13.533 vs. 15.108 g/dL; P < 0.10) and hematocrit percentage (39.3 vs. 43.6%; P < 0.05) in low compared to high RFI bulls, respectively, which may be useful traits to identify efficient animals. However, no differences were observed between the mtDNA contents in liver, muscle and adipose tissue of Nellore bulls with high and low RFI.
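The EHP arithmetic described above (oxygen pulse multiplied by total daily beats, converted with 4.89 kcal per litre of O2 and scaled by metabolic body weight) can be written out in a few lines; the input values below are hypothetical and only illustrate the unit handling:

```python
# Minimal sketch of the EHP calculation; the example inputs are hypothetical.
def estimated_heat_production(o2_pulse_ml_per_beat, mean_hr_bpm, body_weight_kg):
    daily_beats = mean_hr_bpm * 60 * 24                       # beats per day
    daily_o2_l = o2_pulse_ml_per_beat * daily_beats / 1000.0  # litres of O2 per day
    ehp_kcal_per_day = daily_o2_l * 4.89                      # 4.89 kcal per litre of O2
    return ehp_kcal_per_day / body_weight_kg ** 0.75          # kcal/day/kg BW^0.75

# Hypothetical bull: 17.5 mL O2 per beat, 72 bpm average HR, 450 kg live weight
print(round(estimated_heat_production(17.5, 72, 450), 1), "kcal/day/kg BW^0.75")
```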

Keywords: bioenergetics, Bos indicus, feed efficiency, mitochondria

Procedia PDF Downloads 219
64 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments

Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño

Abstract:

Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, problems with postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of the temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in 'Carabao' mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in the ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values. The solver utilized the finite volume method of discretization. To validate the simulation, actual data were obtained through experiments. The goodness of fit was evaluated using the mean temperature difference (MTD), and a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTDs of 0.50 and 0.69 °C for the HWT and EHWT, respectively, indicating good agreement between the simulated and actual temperature values. The data included in the analysis were taken at different probe puncture locations within the fruit. Moreover, the t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J.s-1 for HWT and EHWT, respectively. These values decreased abruptly within the first 10 seconds, and a gradual decrease was observed thereafter. Data on heat flux are necessary in the design of heaters: if underestimated, the heating component of a machine will not be able to provide the heat required by certain operations; if overestimated, energy and resources will be wasted. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide the appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of the ANSYS software in modeling and simulation can be explored for modeling various systems and processes.
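For reference, the transient conduction problem solved by the model can be stated in its standard form (this is the generic formulation with symbols as conventionally defined, not an equation quoted from the study):

\[
\rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) \quad \text{inside the fruit},
\qquad
-k \frac{\partial T}{\partial n} = h \left( T_s - T_\infty \right) \quad \text{on the fruit surface},
\]

where \(\rho\) is the density, \(c_p\) the specific heat, \(k\) the thermal conductivity, \(h\) the convective heat transfer coefficient, \(T_s\) the local surface temperature and \(T_\infty\) the constant free-stream (hot water) temperature; the surface heat flow reported above corresponds to the integral of \(h\,(T_\infty - T_s)\) over the fruit surface.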

Keywords: heat transfer, heat treatment, mango, modeling and simulation

Procedia PDF Downloads 226