Search results for: green architecture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3666

276 A Microwave and Millimeter-Wave Transmit/Receive Switch Subsystem for Communication Systems

Authors: Donghyun Lee, Cam Nguyen

Abstract:

Multi-band systems offer a great deal of benefit in modern communication and radar systems. In particular, multi-band antenna-array radar systems with their extended frequency diversity provide numerous advantages in detecting, identifying, locating and tracking a wide range of targets, including enhanced detection coverage, accurate target location, reduced survey time and cost, increased resolution, and improved reliability and target information. Accurate calibration is a critical issue in antenna array systems. The amplitude and phase errors in multi-band and multi-polarization antenna array transceivers result in inaccurate target detection, deteriorated resolution and reduced reliability. Furthermore, a digital beamformer without RF-domain phase shifting is less immune to unfiltered interference signals, which can lead to receiver saturation in array systems. Therefore, implementing an integrated front-end architecture that can support a calibration function with low insertion loss and a filtering function at the farthest end of an array transceiver is of great interest. We report a dual K/Ka-band T/R/Calibration switch module with a quasi-elliptic dual-bandpass filtering function implementing a Q-enhanced metamaterial transmission line. A unique dual-band frequency response is incorporated in the reception and calibration paths of the proposed switch module, utilizing a composite right/left-handed metamaterial transmission line coupled with a Colpitts-style negative-resistance generation circuit. The fabricated, fully integrated T/R/Calibration switch module in 0.18-μm BiCMOS technology exhibits an insertion loss of 4.9-12.3 dB and an isolation of more than 45 dB in the reception, transmission and calibration modes of operation. In the reception and calibration modes, the dual-band frequency response centered at 24.5 and 35 GHz exhibits an out-of-band rejection of more than 30 dB, relative to the pass bands, below 10.5 GHz and above 59.5 GHz. The rejection between the pass bands reaches more than 50 dB. In all modes of operation, the input 1-dB compression point (IP1dB) is between 4 and 11 dBm. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Keywords: microwaves, millimeter waves, T/R switch, wireless communications

Procedia PDF Downloads 139
275 Scalable UI Test Automation for Large-scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient flow when introducing scalability into a traditional test automation environment, so a scripting language was adopted in order to introduce scalability efficiently. The scalability is implemented mainly with AWS serverless technology, the Elastic Container Service (ECS). Scalability here means the ability to automatically set up computers for test automation and to increase or decrease the number of computers running those tests. The scalable mechanism lets test cases run in parallel, so test execution time is dramatically decreased. Introducing scalable test automation also does more than reduce execution time: challenging bugs, such as race conditions, may be detected because test cases can be executed at the same time. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging issue detection.
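A minimal sketch of the chunking arithmetic behind this scalable design, assuming an even split of test cases across parallel ECS containers. Only the 346 test cases and the 17-hour serial run come from the study; the worker count, test-case IDs and helper names are illustrative.

```python
import math

SERIAL_HOURS = 17.0   # measured serial execution time from the study
TOTAL_CASES = 346     # automated UI test cases from the study

def partition(cases, workers):
    """Split test-case IDs into roughly equal chunks, one per container."""
    size = math.ceil(len(cases) / workers)
    return [cases[i:i + size] for i in range(0, len(cases), size)]

def estimated_wall_hours(workers):
    """Ideal wall-clock time if the chunks run fully in parallel."""
    return SERIAL_HOURS * math.ceil(TOTAL_CASES / workers) / TOTAL_CASES

# Hypothetical test-case IDs, chunked for 20 parallel containers
case_ids = [f"TC-{n:03d}" for n in range(1, TOTAL_CASES + 1)]
chunks = partition(case_ids, workers=20)
```

Under this idealized model, 20 containers bring the 17-hour suite under one hour; in practice, container start-up and uneven test durations would add overhead.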

Keywords: AWS, Elastic Container Service, scalability, serverless, UI automation test

Procedia PDF Downloads 68
274 The Influence of the Salt Body of J. Ech Cheid on the Maturity History of the Cenomanian-Turonian Source Rock

Authors: Mohamed Malek Khenissi, Mohamed Montassar Ben Slama, Anis Belhaj Mohamed, Moncef Saidi

Abstract:

Northern Tunisia is well known for its varied and complex structural and geological zones, the result of a geodynamic history that extends from the early Mesozoic era to the present. One of these zones is the salt province, where the halokinesis process is manifested by a number of NE-SW salt structures such as Jebel Ech Cheid, which represents masses of material characterized by high plasticity and low density. The salt mass extrusions developed due to an extension that lasted from the Late Triassic to the Late Cretaceous. The evolution of salt bodies within sedimentary basins has not only modified the architecture of the basins but also has certain geochemical effects, mainly affecting the source rocks that surround them. It has been demonstrated that the presence of salt structures within a sedimentary basin can influence its temperature distribution and thermal history. Moreover, they create heat flux anomalies that may affect the maturity of organic matter and the timing of hydrocarbon generation. Field samples of the Bahloul source rock (Cenomanian-Turonian) were collected from different sites all around the Ech Cheid salt structure and evaluated using Rock-Eval pyrolysis and GC/MS techniques in order to assess the degree of maturity evolution and the heat flux anomalies in the different zones analyzed. The total organic carbon (TOC) values range between 1 and 9%, and Tmax ranges between 424 and 445°C. The distribution of both saturated and aromatic source rock biomarkers changes in a regular fashion with increasing maturity, as shown in the chromatography results: the Ts/(Ts+Tm) ratios, the 22S/(22S+22R) values for the C31 homohopanes, and the ββ/(ββ+αα)20R and 20S/(20S+20R) ratios for the C29 steranes give consistent maturity indications for the field samples. These analyses were carried out to interpret the maturity evolution and the heat flux around the Ech Cheid salt structure through geological history. They also aim to demonstrate that the salt structure can have a direct effect on the geothermal gradient of the basin and on the maturity of the Bahloul Formation source rock. The organic matter has reached different stages of thermal maturity but delineates a general increasing maturity trend. Our study confirms that the J. Ech Cheid salt body has, on the one hand, a strong influence on the local distribution of anoxic depocentres, at least within Cenomanian-Turonian time; on the other hand, the thermal anomaly near the salt mass has affected the maturity of the Bahloul Formation.
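The biomarker indicators cited above are simple peak-area ratios taken from the GC/MS traces. A minimal sketch of how two of them are computed, using hypothetical peak areas rather than the study's data:

```python
def maturity_ratio(a, b):
    """Generic biomarker maturity ratio a / (a + b) from GC/MS peak areas."""
    return a / (a + b)

# Hypothetical peak areas for illustration only (not the study's measurements)
ts, tm = 42.0, 58.0      # C27 trisnorhopane peaks (Ts and Tm)
s22, r22 = 55.0, 45.0    # C31 homohopane 22S and 22R epimer peaks

ts_ratio = maturity_ratio(ts, tm)             # Ts/(Ts+Tm)
homohopane_ratio = maturity_ratio(s22, r22)   # 22S/(22S+22R)
```

As a rule of thumb from the biomarker literature, the 22S/(22S+22R) homohopane ratio rises with increasing maturity toward an equilibrium value of roughly 0.6, which is why a suite of such ratios can bracket the thermal history of each sample.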

Keywords: Bahloul formation, depocentre, GC/MS, rock-eval

Procedia PDF Downloads 218
273 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model

Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called "hard-to-abate" sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, in the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the required asset alternatives. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050 without any other specific target; the second one, inspired by the national objectives for the development of the sector, promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
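The core mechanics of an energy system optimization model of this kind can be illustrated with a toy brute-force sketch: pick the cheapest supply mix that meets demand under an emission cap. TEMOA itself solves a much larger linear program; every technology name, cost and CO2 intensity below is invented for illustration and is not TEMOA-Italy data.

```python
from itertools import product

# Hypothetical per-unit costs and CO2 intensities (illustrative only)
COST = {"gas": 1.0, "green_h2": 2.5, "grid_electricity": 1.8}
CO2  = {"gas": 0.40, "green_h2": 0.0, "grid_electricity": 0.10}
DEMAND = 100.0    # energy units of final demand
CO2_CAP = 15.0    # emission cap standing in for a decarbonization target

def best_mix(step=10.0):
    """Brute-force the cheapest supply mix meeting demand under the cap."""
    grid = [i * step for i in range(int(DEMAND / step) + 1)]
    best = None
    for gas, h2 in product(grid, grid):
        elec = DEMAND - gas - h2        # remainder met by grid electricity
        if elec < 0:
            continue
        mix = {"gas": gas, "green_h2": h2, "grid_electricity": elec}
        cost = sum(COST[k] * v for k, v in mix.items())
        co2 = sum(CO2[k] * v for k, v in mix.items())
        if co2 <= CO2_CAP and (best is None or cost < best[0]):
            best = (cost, mix)
    return best
```

Notably, even though green hydrogen is the most expensive option in this toy instance, the cost-optimal mix under the cap still includes it, echoing the abstract's observation that the scenario without an imposed hydrogen target deploys hydrogen anyway.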

Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA

Procedia PDF Downloads 44
272 The Possible Interaction between Bisphenol A, Caffeine and Epigallocatechin-3-Gallate on Neurotoxicity Induced by Manganese in Rats

Authors: Azza A. Ali, Hebatalla I. Ahmed, Asmaa Abdelaty

Abstract:

Background: Manganese (Mn) is a naturally occurring element. Exposure to high levels of Mn causes neurotoxic effects and represents an environmental risk factor. Mn neurotoxicity is poorly understood, but changes in AChE activity, monoamines and oxidative stress have been established. Bisphenol A (BPA) is a synthetic compound widely used in the production of polycarbonate plastics, and there is considerable debate about whether its exposure represents an environmental risk. Caffeine is one of the major contributors to the dietary antioxidants which prevent oxidative damage and may reduce the risk of chronic neurodegenerative diseases. Epigallocatechin-3-gallate (EGCG) is another major component of green tea and has known interactions with caffeine; it also has health-promoting effects in the CNS. Objective: To evaluate the potential protective effects of caffeine and/or EGCG against Mn-induced neurotoxicity, either alone or in the presence of BPA, in rats. Methods: Seven groups of rats received MnCl2.4H2O (10 mg/kg, IP) daily for 5 weeks, except the control group, which received saline, corn oil and distilled H2O. Mn was injected either alone or in combination with each of the following: BPA (50 mg/kg, PO), caffeine (10 mg/kg, PO), EGCG (5 mg/kg, IP), caffeine + EGCG, and BPA + caffeine + EGCG. All rats were examined in five behavioral tests (grid, bar, swimming, open field and Y-maze tests). Biochemical changes in monoamines, caspase-3, PGE2, GSK-3B, glutamate, acetylcholinesterase and oxidative parameters, as well as histopathological changes in the brain, were also evaluated for all groups. Results: Mn significantly increased MDA and nitrite content as well as caspase-3, GSK-3B, PGE2 and glutamate levels, while significantly decreasing TAC and SOD as well as cholinesterase in the striatum. It also decreased DA, NE and 5-HT levels in the striatum and frontal cortex. BPA together with Mn enhanced the oxidative stress generation induced by Mn while increasing the monoamine content that Mn had decreased in the rat striatum. BPA abolished the neuronal degeneration induced by Mn in the hippocampus but not in the substantia nigra, striatum and cerebral cortex. Behavioral examinations showed that caffeine and EGCG co-administration had a more pronounced protective effect against Mn-induced neurotoxicity than each one alone. EGCG alone or in combination with caffeine prevented the neuronal degeneration induced by Mn in the substantia nigra, striatum, hippocampus and cerebral cortex, while caffeine alone prevented neuronal degeneration in the substantia nigra and striatum but still showed some nuclear pyknosis in the cerebral cortex and hippocampus. The marked protection afforded by caffeine and EGCG co-administration was also confirmed by the significant increase in TAC, SOD, AChE, DA, NE and 5-HT, as well as the decrease in MDA, nitrite, caspase-3, PGE2, GSK-3B and glutamic acid in the striatum. Conclusion: Neuronal degeneration induced by Mn showed some inhibition with BPA exposure despite the enhancement of oxidative stress generation. Co-administration of EGCG and caffeine can protect against the neuronal degeneration induced by Mn and improve the behavioral deficits associated with its neurotoxicity. The protective effect of EGCG was more pronounced than that of caffeine, even with BPA co-exposure.

Keywords: manganese, bisphenol A, caffeine, epigallocatechin-3-gallate, neurotoxicity, behavioral tests, rats

Procedia PDF Downloads 198
271 Optical and Structural Characterization of Rare Earth Doped Phosphate Glasses

Authors: Zélia Maria Da Costa Ludwig, Maria José Valenzuela Bell, Geraldo Henriques Da Silva, Thales Alves Faraco, Victor Rocha Da Silva, Daniel Rotmeister Teixeira, Vírgilio De Carvalho Dos Anjos, Valdemir Ludwig

Abstract:

Advances in telecommunications grow with the development of optical amplifiers based on rare earth ions. The focus has been concentrated on silicate glasses, although their amplified spontaneous emission is limited to a few tens of nanometers (~40 nm). Recently, phosphate glasses have received great attention due to their potential application in optical data transmission, detection, sensors, laser detectors, waveguides and optical fibers, besides excellent physical properties such as high thermal expansion coefficients and low melting temperature. Compared with silica glasses, phosphate glasses provide different optical properties, such as a large infrared transmission window, and good density. Research on the improvement of the physical and chemical durability of phosphate glass by the addition of heavy metal oxides to P2O5 has been performed. The addition of Na2O further improves the solubility of rare earths, while increasing the Al2O3 links in the P2O5 tetrahedra results in increased aqueous durability and transition temperature and a decreased coefficient of thermal expansion. This work describes the structural and spectroscopic characterization of a phosphate glass matrix doped with different Er (erbium) concentrations. The phosphate glasses containing Er3+ ions were prepared by the melt technique. A study of the optical absorption, luminescence and lifetime was conducted in order to characterize the infrared emission of Er3+ ions at 1540 nm, due to the radiative transition 4I13/2 → 4I15/2. Our results indicate that the present glass is quite a good matrix for Er3+ ions, and the quantum efficiency of the 1540 nm emission was high. A quenching mechanism for the mentioned luminescence was not observed up to an Er concentration of 2.0 mol%. The Judd-Ofelt parameters, radiative lifetime and quantum efficiency have been determined in order to evaluate the potential of Er3+ ions in the new phosphate glass. The parameters follow the trend Ω2 > Ω4 > Ω6. It is well known that the parameter Ω2 is an indication of the dominant covalent nature and/or structural changes in the vicinity of the ion (short-range effects), while the Ω4 and Ω6 intensity parameters are long-range parameters that can be related to bulk properties such as the viscosity and rigidity of the glass. From the PL measurements, no red or green upconversion was observed when pumping the samples with laser excitation at 980 nm. As a future prospect, this glass system will be synthesized with silver in order to determine the influence of silver nanoparticles on the Er3+ ions.

Keywords: phosphate glass, erbium, luminescence, glass system

Procedia PDF Downloads 489
270 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites

Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana

Abstract:

With the growth of environmental awareness, extensive research is under way to develop the next generation of materials based on sustainability, eco-competence, and green chemistry, to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) is of great interest for ecological and medical applications. Cellulose, likewise, is one of the most abundant biodegradable, renewable polymers found in nature, with several advantages such as low cost, high mechanical strength and biodegradability. Recently, a great deal of attention has been paid to the scientific and technological development of α-cellulose-based composite materials. PLLA could be used for the grafting of cellulose to improve compatibility prior to composite preparation, but it is quite difficult to form a bond between weakly hydrophilic molecules like PLLA and α-cellulose. Dimers and oligomers, due to their low molecular weight, can easily be grafted onto the surface of cellulose by ring-opening or polycondensation methods. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. The ODLA is synthesized by ring-opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acid at 140°C for 10 hours. Composites of PLLA with ODLA grafted α-cellulose are prepared by the solution mixing and film casting method. Confirmation of grafting was carried out through FTIR spectroscopy and SEM analysis. A strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA grafted α-cellulose, absent in pure α-cellulose, confirms the grafting of ODLA onto α-cellulose. It is also observed from the SEM photographs that there are some white areas (spots) on ODLA grafted α-cellulose compared to α-cellulose, which may indicate the grafting of ODLA, consistent with the FTIR results. Analysis of the composites was carried out by FTIR, SEM, WAXD and a thermal gravimetric analyzer. Most of the characteristic FTIR absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and grafted cellulose have better compatibility in the composites via intermolecular hydrogen bonding; this supports previously published results. The grafted α-cellulose distribution in the composites is uniform, as observed by SEM analysis. WAXD studies show that only the homo-crystalline structure of PLLA is present in the composites. The thermal stability of the composites is enhanced with an increasing percentage of ODLA grafted α-cellulose, so the resultant composites have a resistance toward thermal degradation. The effects of the length of the grafted chain and the biodegradability of the composites will be studied in further research.

Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)

Procedia PDF Downloads 97
269 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

Artificial vision, the process of generating computer vision, is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognizing objects measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them for moving pieces from one location to another in their production. These devices must be programmed beforehand for good performance and must have a programmed logic routine. Nowadays, production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the production chain of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each one with a different color, and to link it with a group of conveyor systems to organize the mentioned figures into cubicles, which also differ from one another by having different colors. This project is based on artificial vision; therefore, the methodology needed to develop it must be strict. It is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator, linked with the OpenCV libraries. Together, these tools are used to build the program that identifies colors and forms directly from the camera on the computer. 1.2 Image acquisition: To start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or a different specialized camera. 1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors: red, green, and blue. 1.4 To detect forms, it is necessary to segment the images: the first step is converting the image from RGB to grayscale, to work with the dark tones of the image; then the image is binarized, which means having the figure of the image in a white tone on a black background. Finally, we find the contours of the figure in the image and count the edges to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera obtains external characteristics for any process. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
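The classification logic of steps 1.3-1.5 can be sketched without OpenCV. In the actual pipeline, the pixel values would come from the captured image matrix, and the vertex count from contour approximation on the binarized image (e.g., cv2.findContours followed by cv2.approxPolyDP); the function names and thresholds here are illustrative.

```python
def dominant_color(pixel):
    """Step 1.3: label a pixel by its strongest primary component (R, G, B)."""
    r, g, b = pixel
    return max((("red", r), ("green", g), ("blue", b)), key=lambda t: t[1])[0]

def shape_from_vertices(n_vertices):
    """Step 1.4: after contour approximation, the vertex count identifies
    the figure; a smooth contour yields many vertices and is treated as a
    circle in this sketch."""
    return {3: "triangle", 4: "square"}.get(n_vertices, "circle")

def classify(pixel, n_vertices):
    """Step 1.5: combine color and shape to choose a destination cubicle."""
    return f"{dominant_color(pixel)} {shape_from_vertices(n_vertices)}"
```

For example, a mostly-red contour approximated to three vertices would be routed to the "red triangle" cubicle.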

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 358
268 Gender Construction in Contemporary Dystopian Fiction in Young Adult Literature: A South African Example

Authors: Johan Anker

Abstract:

The purpose of this paper is to discuss the nature of gender construction in modern dystopian fiction, the development of this genre in young adult literature, and the reasons for its enormous appeal to adolescent readers. A recent award-winning South African text in this genre, The Mark by Edith Bullring (2014), will be used as an example, while also being compared to international bestsellers like Divergent (Roth, 2011), The Hunger Games (Collins, 2008) and others. Theoretical insights from critics and academics in the field of children's literature, like Ames, Coats, Bradford, Booker, Basu, Green-Barteet, Hintz, McAlear, McCallum, Moylan, Ostry, Ryan, Stephens and Westerfield, will be referred to and used as part of the analysis of The Mark. The role of relevant and recurring themes in this genre, like global concerns, environmental destruction, liberty, self-determination, social and political critique, and surveillance and repression by the state or other institutions, will also be considered. The paper briefly refers to the history and emergence of dystopian literature as a genre in adult and young adult literature, part of the long tradition since the publication of Orwell's 1984 and Huxley's Brave New World. Different factors appeal to adolescent readers in the modern versions of this hybrid genre for young adults: teenage protagonists who question the underlying values of a flawed society, such as an inhuman or tyrannical government; a growing understanding of the society around them; feelings of isolation; and the dynamics of relationships. This unease leads to a growing sense of the potential to act against society (rebellion) and of their role as agents in a larger community with independent decision-making abilities. This awareness also leads to a growing sense of self (identity and agency) and the development of romantic relationships. The specifically modern tendency towards a female protagonist who leads the rebellion against the state and its apparatus, and who gains agency and independence in that rebellion (an important part of the identification with and construction of gender within the traditional coming-of-age young adult novel), will be emphasized. A comparison between the traditional themes, structures and plots of young adult literature (YAL) and those of adult dystopian literature and recent dystopian YAL will be made, while the hybrid nature of this genre and the 'sense of unease', but also of hope, in the closure of these novels, as an essential part of youth literature, will be discussed. Important questions about the didactic nature of these texts, their political issues, and the importance of the formation of agency and identity for the young adult reader, as well as identification with the protagonists of this genre, are also part of this discussion of The Mark and other YAL novels.

Keywords: agency, dystopian literature, gender construction, young adult literature

Procedia PDF Downloads 157
267 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to improve commute times for short-distance trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute through vehicles with the ability to take off and land vertically, providing passenger transport equivalent to a car, with mobility within large cities and between cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost and flight-time requirements in a sustainable way. Thus, the use of green power supplies, especially batteries, and fully electric power plants is the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to handle the use of batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still far below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of optimization by a genetic algorithm, while the final program can be adapted to take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases. For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are assumed to give representativeness to the solution, and results are highly dependent on these constraints. For the tested cases, the performance improvement ranged from 5 to 10% when changing the initial airspeed, altitude, flight path angle, and attitude.
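The abstract does not publish its algorithm, so the following is only a minimal sketch of how a genetic algorithm could search approach-control variables for minimum energy. The energy surrogate, the bounds on airspeed and flight path angle, and all GA parameters are invented for illustration; the real problem optimizes a time history of controls against fitted dynamic equations.

```python
import random

random.seed(7)  # deterministic run for illustration

# Hypothetical surrogate for energy used on approach: penalizes deviation
# from an (assumed) efficient descent speed and path angle. Stands in for
# the time-integrated electric power along the landing path.
def energy(v, gamma):
    return (v - 28.0) ** 2 + 4.0 * (gamma + 6.0) ** 2 + 0.5 * abs(gamma)

BOUNDS = {"v": (15.0, 45.0), "gamma": (-12.0, -2.0)}  # m/s, degrees

def random_ind():
    return tuple(random.uniform(*BOUNDS[k]) for k in ("v", "gamma"))

def evolve(pop_size=40, generations=60, mut=0.3):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: energy(*ind))
        elite = pop[: pop_size // 4]                 # selection (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = tuple((x + y) / 2 for x, y in zip(a, b))  # crossover
            child = tuple(                                    # mutation,
                min(max(x + random.gauss(0, mut), lo), hi)    # clamped to
                for x, (lo, hi) in zip(child, BOUNDS.values())  # bounds
            )
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: energy(*ind))

best_v, best_gamma = evolve()
```

Because the elite individuals are carried over unchanged, the best candidate never worsens between generations, which is what makes this simple loop converge reliably on a smooth cost surface.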

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 90
266 Sustainable Solid Waste Management Solutions for Asian Countries Using the Potential in Municipal Solid Waste of Indian Cities

Authors: S. H. Babu Gurucharan, Priyanka Kaushal

Abstract:

Majority of the world's population is expected to live in the Asia and Pacific region by 2050 and thus their cities will generate the maximum waste. India, being the second populous country in the world, is an ideal case study to identify a solution for Asian countries. Waste minimisation and utilisation have always been part of the Indian culture. During rapid urbanisation, our society lost the art of waste minimisation and utilisation habits. Presently, Waste is not considered as a resource, thus wasting an opportunity to tap resources. The technologies in vogue are not suited for effective treatment of large quantities of generated solid waste, without impacting the environment and the population. If not treated efficiently, Waste can become a silent killer. The article is trying to highlight the Indian municipal solid waste scenario as a key indicator of Asian waste management and recommend sustainable waste management and suggest effective solutions to treat the Solid Waste. The methods followed during the research were to analyse the solid waste data on characteristics of solid waste generated in Indian cities, then evaluate the current technologies to identify the most suitable technology in Indian conditions with minimal environmental impact, interact with the technology technical teams, then generate a technical process specific to Indian conditions and further examining the environmental impact and advantages/ disadvantages of the suggested process. The most important finding from the study was the recognition that most of the current municipal waste treatment technologies being employed, operate sub-optimally in Indian conditions. 
Therefore, using the available data, the study generated heat and mass balances of candidate processes to arrive at a final technical process, broadly divided into waste processing, waste treatment and power generation, exploring various permutations and combinations at each stage to ensure that the process is techno-commercially viable in Indian conditions. The environmental impact was then estimated from secondary sources, and a comparison of the environmental impact of different technologies was tabulated. The major advantages of the suggested process are the effective use of waste for resource generation, both as maximised power output and as conversion to eco-friendly products such as biofuels or chemicals using advanced technologies, minimal environmental impact and the least landfill requirement. The major drawbacks are the capital, operations and maintenance costs. The technologies currently used by Indian municipalities have their own limitations, and the shortlisted technology is far superior to the others in vogue. Treatment of municipal solid waste with efficient green power generation is possible through a combination of suitable environment-friendly technologies: a combination of bio-reactors and plasma-based gasification is most suitable for Indian waste and, in turn, for Asian waste conditions.
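The heat-balance step reduces, at its simplest, to converting feed throughput and calorific value into electrical output. A minimal sketch in Python; the throughput, heating value and plant efficiency below are purely illustrative assumptions, not figures from the study:

```python
def net_power_mw(waste_tpd, lhv_mj_per_kg, conversion_eff):
    """Rough electrical output of a waste-to-energy plant.

    waste_tpd      -- throughput in tonnes per day (assumed figure)
    lhv_mj_per_kg  -- lower heating value of the feed (assumed figure)
    conversion_eff -- overall thermal-to-electric efficiency, 0..1 (assumed)
    """
    energy_mj_per_day = waste_tpd * 1000 * lhv_mj_per_kg  # tonnes -> kg
    thermal_mw = energy_mj_per_day / 86400                # 1 MJ/s == 1 MW
    return thermal_mw * conversion_eff

# e.g. a hypothetical 1000 t/day plant, 8 MJ/kg feed, 25% overall efficiency
print(round(net_power_mw(1000, 8.0, 0.25), 1))  # prints 23.1
```

Numbers like these are the starting point for the techno-commercial screening described above; real designs must also balance mass flows of syngas, slag and flue gas.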

Keywords: calorific value, gas fermentation, landfill, municipal solid waste, plasma gasification, syngas

Procedia PDF Downloads 161
265 Investigating Seasonal Changes of Urban Land Cover with High Spatio-Temporal Resolution Satellite Data via Image Fusion

Authors: Hantian Wu, Bo Huang, Yuan Zeng

Abstract:

Divisions between wealthy and poor, private and public landscapes are propagated by the increasing economic inequality of cities. While these are the spatial reflections of larger social issues and problems, urban design can at least employ spatial techniques that promote inclusive rather than exclusive, overlapping rather than segregated, interlinked rather than disconnected landscapes. Indeed, the type of edge or border between urban landscapes plays a critical role in how the environment is perceived. China is experiencing rapid urbanization, which poses unpredictable environmental challenges. Urban green cover and water bodies are changing, and these changes are highly relevant to residents' wealth and happiness; however, very limited knowledge and data on their rapid changes are available. In this regard, enhancing the monitoring of the urban landscape with high-frequency methods, evaluating and estimating the impacts of urban landscape changes, and understanding their driving forces can be a significant contribution to urban planning and research. High-resolution remote sensing data have been widely applied to urban management in China, and a 10-meter-resolution urban land-use map of the entire country was published in 2018. However, such work focuses on large-scale, high-resolution land use rather than on the seasonal change of urban covers. High-resolution satellites also have long revisit cycles (e.g., Landsat 8 revisits the same location every 16 days), which cannot satisfy the requirement of monitoring urban-landscape changes. On the other hand, aerial or unmanned aerial vehicle (UAV) sensing is constrained by aviation regulations and cost, and has hardly been applied at scale in mega-cities.
Moreover, satellite data are limited by climate and weather conditions (e.g., cloud, fog), which makes capturing spatial and temporal dynamics a standing challenge for the remote sensing community; during the rainy season in particular, no usable data are available even from Sentinel satellites with their 5-day revisit interval. Many natural events and/or human activities drive changes in urban covers. Enhancing the monitoring of the urban landscape with high-frequency methods, evaluating and estimating the impacts of urban landscape changes, and understanding their mechanisms can therefore be a significant contribution to urban planning and research. This project aims to use high spatiotemporal fusion of remote sensing data to create short-cycle, high-resolution data sets for exploring high-frequency urban cover changes. The research will enhance the long-term monitoring applicability of spatiotemporal fusion for the urban landscape, helping to optimize the management of landscape borders and promote an urban landscape inclusive of all communities.
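The spatiotemporal fusion idea, blending the temporal frequency of a coarse sensor with the spatial detail of a fine one, can be sketched in its simplest additive form. This is an illustrative toy, not the fusion model used in the project; published methods such as STARFM add spectral and neighbourhood weighting on top of this:

```python
import numpy as np

def additive_fusion(fine_t1, coarse_t1, coarse_t2, scale):
    """Predict a fine-resolution image at t2 by adding the coarse-resolution
    temporal change (t2 - t1), upsampled by nearest neighbour, to the
    fine-resolution image observed at t1."""
    delta = coarse_t2 - coarse_t1                       # coarse temporal change
    delta_up = np.kron(delta, np.ones((scale, scale)))  # nearest-neighbour upsample
    return fine_t1 + delta_up

# toy example: a uniform +1 reflectance change observed at coarse scale
fine_t1 = np.zeros((4, 4))
pred_t2 = additive_fusion(fine_t1, np.zeros((2, 2)), np.ones((2, 2)), scale=2)
```

The coarse sensor supplies the "when" (frequent revisits) and the fine sensor the "where" (spatial detail); the fused product aims at both.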

Keywords: urban land cover changes, remote sensing, high spatiotemporal fusion, urban management

Procedia PDF Downloads 89
264 MOF [(4,4-Bipyridine)₂(O₂CCH₃)₂Zn]ₙ as Heterogeneous Acid Catalysts for the Transesterification of Canola Oil

Authors: H. Arceo, S. Rincon, C. Ben-Youssef, J. Rivera, A. Zepeda

Abstract:

Biodiesel has emerged as a material with great potential as a renewable replacement for current petroleum-based diesel. Recent biodiesel research focuses on developing more efficient, sustainable processes with lower production costs. In this sense, a 'green' approach to biodiesel production has stimulated the use of sustainable heterogeneous acid catalysts, which are better alternatives to conventional processes because of their simplicity and their simultaneous promotion of esterification and transesterification from low-grade, highly acidic, water-containing oils without soap formation. The focus of this methodology is the development of new heterogeneous catalysts that, under ordinary reaction conditions, can reach yields similar to homogeneous catalysis. In recent years, metal organic frameworks (MOFs) have attracted much interest for their potential as heterogeneous acid catalysts. They are crystalline porous solids formed by the association of transition metal ions or metal-oxo clusters with polydentate organic ligands. This hybridization confers on MOFs unique features such as high thermal stability, large pore size, high specific area, high selectivity and recycling potential; MOF application could thus improve biodiesel production processes. In this work, we evaluated the catalytic activity of MOF [(4,4-bipyridine)₂(O₂CCH₃)₂Zn]ₙ (MOF Zn-I) for the synthesis of biodiesel from canola oil. The reaction conditions were optimized using response surface methodology with a 2⁴ central composite design. The variables studied were reaction temperature, amount of catalyst, oil:MeOH molar ratio and reaction time. MOF Zn-I was prepared by mixing 5 mmol of 4,4′-bipyridine dissolved in 25 mL of methanol with 10 mmol of Zn(O₂CCH₃)₂·2H₂O in 25 mL of water. The crystals were obtained by slow evaporation of the solvents at 60 °C for 18 h.
The prepared catalyst was characterized by X-ray diffraction (XRD) and Fourier transform infrared (FT-IR) spectroscopy. Experiments were performed with commercially available canola oil in an Ace pressure tube under continuous stirring. The reaction mixture was filtered and vacuum distilled to remove the catalyst and excess alcohol, then centrifuged to separate the biodiesel and glycerol. ¹H NMR was used to calculate the process yield, and GC-MS was used to quantify the fatty acid methyl esters (FAME). The results of this study show that the acid catalyst MOF Zn-I can be used for biodiesel production through heterogeneous transesterification of canola oil, with a FAME yield of 82%. The optimum operating conditions for the catalytic reaction were 142 °C, a 0.5% catalyst/oil weight ratio, a 1:30 oil:MeOH molar ratio and a 5 h reaction time.
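The ¹H NMR yield calculation typically follows the standard relation between the methoxy and α-methylene integrals (Knothe's equation). A short sketch; the integral values used below are illustrative, chosen to reproduce the reported 82% yield, since the exact integrals are not given in the abstract:

```python
def fame_yield_percent(a_ome, a_ch2):
    """Conversion of triglycerides to methyl esters from 1H NMR integrals.

    a_ome -- integral of the methyl ester methoxy singlet (~3.7 ppm, 3 H)
    a_ch2 -- integral of the alpha-methylene protons (~2.3 ppm, 2 H)
    """
    return 100.0 * (2.0 * a_ome) / (3.0 * a_ch2)

# illustrative integrals consistent with the 82% yield reported above
print(round(fame_yield_percent(2.46, 2.0), 1))  # prints 82.0
```

The factor 2/3 normalises the 3-proton methoxy signal against the 2-proton α-methylene signal shared by esters and triglycerides.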

Keywords: fatty acid methyl ester, heterogeneous acid catalyst, metal organic framework, transesterification

Procedia PDF Downloads 258
263 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete

Authors: Devendra Kumar Pandey, Debabrata Chakraborty

Abstract:

Advances in concrete ingredients such as plasticizers, additives and fibers have enabled concrete technologists to develop many viable varieties of special concrete in recent decades. These varieties significantly enhance the green (fresh) as well as hardened properties of concrete, and a prudent selection of the appropriate type can resolve many design and application issues in construction projects. This paper focuses on the use of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability etc. need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged, and proposes selecting such products based on the end use of the structure in order to utilize the modified characteristics of these concrete varieties efficiently. The study involves mapping the characteristics to benefits and savings for the structure from a design perspective. Self-compacting concrete is characterized by high formwork pressures, better finish and the feasibility of closer reinforcement spacing; structural design procedures can accordingly specify higher formwork strength, taller vertical members, reduced cover and increased ductility, with transverse reinforcement spaced at closer intervals than in regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation; member dimensions and steel requirements can be reduced in proportion to the roughly 25 to 35 percent reduction in dead load from the concrete's self-weight.
Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength; the design procedures incorporate reductions in thickness and joint spacing. High performance concrete extends the life of structures by improving paste characteristics and durability through supplementary cementitious materials. Often, these mixes are also designed for slower heat generation in the initial phase of hydration; the structural designer can incorporate the slower strength development into the design and specify a 56- or 90-day strength requirement. For high-rise building structures, the creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures must perform under load well before the final maturity of the concrete; high early strength concrete has been designed to serve a variety of uses at ages as early as 8 to 12 hours. An understanding of performance specifications for special concrete is therefore a definite door towards a superior structural design approach.

Keywords: high performance concrete, special concrete, structural design, structural lightweight concrete

Procedia PDF Downloads 284
262 Analog Railway Signal Object Controller Development

Authors: Ercan Kızılay, Mustafa Demirel, Selçuk Coşkun

Abstract:

Railway signaling systems consist of vital products that regulate railway traffic and provide safe route arrangements and maneuvers of trains. SIL 4 signal lamps are produced by many manufacturers today, and there is a need for systems that enable these lamps to be controlled by commands from the interlocking. For the safe operation of railway systems from the RAMS perspective, such systems should behave in a fail-safe manner and report error indications to the interlocking when an unexpected situation occurs. In relay-based systems, driving and proving the lamp was typically done via signaling relays; today, lamp proving is done by comparing the current read over the return circuit against lower and upper threshold values. The goal is an analog electronic object controller that integrates easily with vital systems and with the signal lamp itself. This study followed the EN 50126 standard approach: concept, definition, risk analysis, requirements, architecture, design and prototyping. FMEA (Failure Modes and Effects Analysis) and FTA (Fault Tree Analysis) were used for safety analysis in accordance with EN 50129. Based on these analyses, a 1oo2D reactive fail-safe hardware design for the controller was investigated. The effects of electromagnetic compatibility (EMC) on functional safety, insulation coordination and over-voltage protection were addressed during hardware design according to the EN 50124 and EN 50122 standards. As vital signaling equipment, railway signal object controllers should be developed according to EN 50126 and EN 50129, which identify the development steps and requirements for the SIL 4 (Safety Integrity Level 4) target. As a result of this study, an analog railway signal object controller was developed that takes commands from the interlocking system and processes them in driver cards.
The driver cards set the voltage level for the desired visibility by means of semiconductors, while prover cards evaluate the lamp current against upper and lower thresholds. The evaluated values are processed through logic gates arranged in a 1oo2D configuration using analog electronic technologies. This logic evaluates the voltage level of the lamp and mitigates the risk of undue dimming.
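The lamp proving and 1oo2D voting described above can be summarised in a few lines of logic. This is a behavioural sketch only; the threshold values are illustrative assumptions, and the actual module implements the comparison in analog hardware rather than software:

```python
def channel_in_range(current_a, low=0.05, high=0.50):
    """One proving channel: the return-circuit current must lie between
    the lower and upper thresholds (illustrative values, in amperes)."""
    return low <= current_a <= high

def prove_lamp_1oo2d(i_ch1, i_ch2):
    """1oo2D voting: two independent channels must both find the lamp
    current in range; any disagreement or out-of-range reading drives
    the reactive fail-safe state reported to the interlocking."""
    if channel_in_range(i_ch1) and channel_in_range(i_ch2):
        return "LAMP_PROVED"
    return "FAIL_SAFE"
```

The "D" in 1oo2D denotes the diagnostic comparison between the two channels: a healthy lamp requires agreement, and any single-channel failure degrades to the safe state.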

Keywords: object controller, railway electronic, analog electronic, safety, railway signal

Procedia PDF Downloads 66
261 Municipal Action Against Urbanisation-Induced Warming: Case Studies from Jordan, Zambia, and Germany

Authors: Muna Shalan

Abstract:

Climate change is a systemic challenge for cities, with its impacts not happening in isolation but rather intertwined, thus increasing hazards and the vulnerability of the exposed population. The increase in the frequency and intensity of heat waves, for example, is associated with multiple repercussions on the quality of life of city inhabitants, including health discomfort, a rise in mortality and morbidity, increasing energy demand for cooling, and shrinking of green areas due to drought. To address the multi-faceted impact of urbanisation-induced warming, municipalities and local governments are challenged with devising strategies and implementing effective response measures. Municipalities are recognising the importance of guiding urban concepts to drive climate action in the urban environment. An example is climate proofing, which refers to a process of mainstreaming climate change into development strategies and programs, i.e., urban planning is viewed through a climate change lens. There is a multitude of interconnected aspects that are critical to paving the path toward climate-proofing of urban areas and avoiding poor planning of layouts and spatial arrangements. Navigating these aspects through an analysis of the overarching practices governing municipal planning processes, which is the focus of this research, will highlight entry points to improve procedures, methods, and data availability for optimising planning processes and municipal actions. By employing a case study approach, the research investigates how municipalities in different contexts, namely in the city of Sahab in Jordan, Chililabombwe in Zambia, and the city of Dortmund in Germany, are integrating guiding urban concepts to shrink the deficit in adaptation and mitigation and achieve climate proofing goals in their respective local contexts. 
The analysis revealed municipal strategies and measures undertaken to optimize existing building and urban design regulations by introducing key performance indicators and improving in-house capacity. It also showed that establishing or optimising interdepartmental communication frameworks or platforms is key to strengthening the steering structures governing local climate action. The most common challenge faced by municipalities relates to their dual role as regulator and implementer, particularly in budget analysis and in instruments for cost recovery of climate action measures. By leading organisational changes that improve procedures and methods, municipalities can mitigate the various challenges that may emanate from uncoordinated planning and thus promote action against urbanisation-induced warming.

Keywords: urbanisation-induced warming, response measures, municipal planning processes, key performance indicators, interdepartmental communication frameworks, cost recovery

Procedia PDF Downloads 48
260 Sustainable Technology and the Production of Housing

Authors: S. Arias

Abstract:

New housing developments, and the technological changes they imply, adapt the lifestyles of their residents, as well as new family structures and forms of work, to the particular needs of a specific group of people; this involves different techniques for occupying, organizing, equipping and using a particular territory. Owning one's own space is increasingly important, and cities face the challenge of meeting such demands, as well as providing the energy, water and waste removal required in the construction and occupation of new human settlements. To date, these demands and needs have not been fully met, resulting in cities that grow without control, poorly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality therefore links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves incorporating new technologies throughout the construction process of a dwelling and changing the customs of developers and users, with greater effort put into planning for energy savings and thus reducing greenhouse gas (GHG) emissions depending on the geographical location of the development. Since the techniques of territorial occupation are not the same everywhere, it must be borne in mind that they depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, modified according to the degree of development reached. In the analysis undertaken to check the degree of sustainability of a place, it is necessary to estimate the energy used for artificial air conditioning and lighting.
Likewise, it is necessary to diagnose the availability and distribution of the water resources used for hygiene and for cooling artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, an energy audit can be performed in the process of proposing sustainability recommendations for architectural spaces, in search of energy savings, rational use of water and optimization of natural resources. This can be carried out through the development of a sustainable building code that adapts technical recommendations to the regional characteristics of each study site. Such codes would lay the groundwork for building regulations applicable to new human settlements, generating quality, protection and safety in them at the same time. These building regulations must be consistent with other national, state and municipal regulations, such as laws on human settlements, urban development and zoning.

Keywords: building regulations, housing, sustainability, technology

Procedia PDF Downloads 328
259 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case, the magnetometer data and the EKF state estimations, and the targets, namely the true position, and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. 
Once the spacecraft is launched, the model can use the GPS signal, when available, to fine-tune its parameters on the actual orbit in real time, and work autonomously during GPS outages. The module is thus versatile: it can be applied to any mission operating in SSO, while the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show an improvement of one order of magnitude in the precision of the state estimate with respect to the EKF alone. Tests on simulated and real data will be shown.
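The EKF half of the pipeline hinges on the standard measurement update, with the IGRF-modelled field as the predicted observation. A generic sketch with small illustrative matrices; the real filter carries a full orbital state and the Jacobian of the IGRF field model:

```python
import numpy as np

def ekf_update(x, P, z, h_x, H, R):
    """One EKF measurement update.

    x, P -- prior state estimate and covariance
    z    -- measured magnetic field (magnetometer reading)
    h_x  -- field predicted at x from the geomagnetic model (e.g. IGRF)
    H    -- Jacobian of the measurement model at x
    R    -- magnetometer noise covariance
    """
    y = z - h_x                      # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_post = x + K @ y
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post
```

The recurrent network then maps sequences of these EKF estimates and magnetometer residuals to refined position and velocity; that learned regression step is omitted here.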

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 72
258 Depositional Environment and Diagenetic Alterations: Influences of Facies and Fine Kaolinite Formation and Migration on Sandstones' Reservoir Quality, Sarir Formation, Sirt

Authors: Faraj M. Elkhatri, Hana Ellafi

Abstract:

In recent years, there has been growing recognition of the potential of marine-based functional foods and combination therapies in promoting a healthy lifestyle, and growing interest in their effectiveness in preventing or treating diseases. Combining marine bioactive compounds or extracts offers synergistic or enhancing effects through various mechanisms, including multi-target actions, improved bioavailability, enhanced bioactivity and mitigation of potential adverse effects. Both the green-lipped mussel (GLM) and fucoidan derived from brown seaweed are rich in bioactivities, yet the two have not previously been formulated together. This study combines GLM oil from Perna canaliculus with low molecular weight fucoidan (LMWF) extracted from Undaria pinnatifida to investigate the mixture's anti-inflammatory and antioxidant properties. The cytotoxicity of the individual compounds and their combinations was assessed using the MTT assay in THP-1 and RAW264.7 cell lines. The anti-inflammatory activity of mussel-fucoidan was evaluated by treating LPS-stimulated human monocyte and macrophage (THP-1) cells, and the inflammatory cytokines released into the supernatant were quantified via ELISA. Antioxidant activity was determined using the free radical scavenging (DPPH) assay. The DPPH assay demonstrated that the radical scavenging activity of the combinations, particularly at concentrations exceeding 1 mg/ml, showed a significantly higher percentage of inhibition than the individual components, suggesting an enhancement effect when the two compounds are combined, leading to increased antioxidant activity. In terms of immunomodulatory activity, the individual compounds behaved distinctly: GLM oil showed a greater ability to suppress the cytokine TNF-α than LMWF, while, interestingly, the LMWF fraction alone did not suppress TNF-α.
However, when combined with GLM, the TNF-α suppression (anti-inflammatory) activity of the combination was better than that of GLM or LMWF alone. This observation underscores the potential for enhancing interactions between the two components in terms of anti-inflammatory properties. The study revealed that each individual compound, LMWF and GLM, possesses unique and notable bioactivity, and that combining them produces an enhancement effect in which the bioactivity of each is amplified. This suggests that the LMWF-GLM combination has the potential to offer a more potent and multifaceted therapeutic effect, particularly in the context of antioxidant and anti-inflammatory activities. These findings hold promise for the development of novel therapeutic interventions or supplements that harness these enhancement effects.
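The DPPH result is conventionally expressed as percent inhibition relative to the blank control's absorbance at 517 nm. A minimal sketch of that calculation; the absorbance values below are illustrative, not data from the study:

```python
def dpph_inhibition_percent(abs_control, abs_sample):
    """Radical scavenging activity in the DPPH assay: the fractional drop
    in absorbance (517 nm) of the sample relative to the blank control."""
    return 100.0 * (abs_control - abs_sample) / abs_control

# illustrative absorbances for a control and a treated sample
print(round(dpph_inhibition_percent(0.80, 0.20), 1))  # prints 75.0
```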

Keywords: formation damage, porosity losses, pore throat, quartz cement

Procedia PDF Downloads 38
257 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in fields such as engineering, atmospheric science and fluid dynamics, and predicting and understanding its behavior over long time scales remain challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have developed rapidly, leading to significant improvements in computational speed; however, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term prediction of the nonlinear dynamics of three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO) and the U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, the probability density functions (PDFs) of vorticity and velocity increments, and the instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
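The implicit recurrent Fourier layer idea, re-applying one shared spectral layer to deepen the network without adding parameters, can be sketched in 1D with NumPy. This is a toy illustrating the mechanism only; the actual IU-FNO operates on 3D velocity fields with learned complex weights and U-Net skip connections:

```python
import numpy as np

def fourier_layer(u, w_hat, modes):
    """One spectral convolution: FFT, keep only the lowest `modes` modes,
    multiply them by (learned) complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = w_hat[:modes] * u_hat[:modes]
    return np.fft.irfft(out_hat, n=len(u))

def implicit_recurrent_fno(u, w_hat, modes, n_iters):
    """Implicit/recurrent stacking: the SAME weights w_hat are applied
    n_iters times as a residual update, deepening the model without
    increasing the parameter count."""
    for _ in range(n_iters):
        u = u + fourier_layer(u, w_hat, modes)
    return u
```

Weight sharing across iterations is what keeps the deepened network cheap in memory, a key difference from stacking independent Fourier layers.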

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 48
256 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂

Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine

Abstract:

Rubber waste disposal is an environmental problem; in particular, much research centers on the management of discarded tires. Despite the different ways of handling used tires, the most common is still to deposit them in a landfill, creating stocks of tires. These stocks are a fire hazard and provide habitat for rodents, mosquitoes and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a standing technological challenge. The technique that breaks down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which the poly-, di- and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable and inexpensive; its critical point is easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be removed easily and rapidly by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions, with supercritical CO₂ added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percentage and its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters.
The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases; the values obtained correlated well (R = 0.96) with the soluble fraction. To analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve corresponding to sulfur bond scission, which indicates that devulcanization occurred without degradation of the rubber. In the FTIR spectra, none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected: given the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, with the power consumed in that process also close to the minimum. These results encourage further analyses to better understand the effect of the different conditions on the devulcanization process; the analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
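The crosslink density from swelling in toluene is conventionally obtained with the Flory-Rehner equation. A sketch of that calculation; the interaction parameter and solvent molar volume below are typical literature values for a rubber-toluene system, assumed rather than taken from the paper:

```python
import math

def crosslink_density(v_r, chi=0.391, v_s=106.3):
    """Flory-Rehner crosslink density (mol per cm^3 of rubber).

    v_r -- volume fraction of rubber in the toluene-swollen gel
    chi -- rubber-toluene interaction parameter (assumed literature value)
    v_s -- molar volume of toluene, cm^3/mol (assumed literature value)
    """
    numerator = -(math.log(1.0 - v_r) + v_r + chi * v_r ** 2)
    denominator = v_s * (v_r ** (1.0 / 3.0) - v_r / 2.0)
    return numerator / denominator
```

A lower measured v_r (more swelling) yields a lower crosslink density, which is how the decrease with extruder temperature and speed reported above is quantified.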

Keywords: devulcanization, recycling, rubber, waste

Procedia PDF Downloads 352
255 The Significance of Urban Space in Death Trilogy of Alejandro González Iñárritu

Authors: Marta Kaprzyk

Abstract:

The cinema of Alejandro González Iñárritu has not yet been subjected to much detailed analysis, which makes it exceptionally interesting research material. The purpose of this presentation is to discuss the significance of urban space in the three films of this Mexican director that form the Death Trilogy: ‘Amores Perros’ (2000), ‘21 Grams’ (2003) and ‘Babel’ (2006). The fact that in these films the urban space itself becomes an additional protagonist, with its own identity, psychology and the ability to transform and affect other characters, in itself warrants independent research and analysis. At the same time, this mode of presenting urban space has another function: it enables the director to complement the rest of the characters. The methodological basis for this description of cinematographic space is to treat its visual layer as a point of departure for detailed analysis, supported by recognised academic theories concerning spatial issues, which are transformed here into tools for describing the world (mise-en-scène) created by González Iñárritu. In ‘Amores Perros’, Mexico City serves as the scenery, a place full of contradictions, depicted as a modern conglomerate and an urban jungle as well as a labyrinth of poverty and violence. In this work, stylistic tropes can be found in an intertextual dialogue between the director and the photographs of Nan Goldin and Mary Ellen Mark. The story recounted in ‘21 Grams’, the most tragic piece in the trilogy, is characterised by an almost hyperrealistic sadism. It takes place in Memphis, which on screen turns into an impersonal formation full of the heterotopias described by Michel Foucault and the non-places defined by Marc Augé in his essay.
By contrast, the main urban space in ‘Babel’ is Tokyo, which seems to correspond perfectly with the image of places discussed by Juhani Pallasmaa in his works on the reception of architecture through ‘pathological senses’ in the modern (or, more adequately, postmodern) world. It is portrayed as a city full of buildings that look so surreal that they seem completely unsuitable for humans to move between. Ultimately, the aim of this paper is to demonstrate the coherence of the manner in which González Iñárritu designs urban spaces in his Death Trilogy. In particular, the author examines the imperative role of the cities that form the three specific microcosms in which the protagonists of the Mexican director live their overwhelming tragedies.

Keywords: cinematographic space, Death Trilogy, film studies, González Iñárritu Alejandro, urban space

Procedia PDF Downloads 303
254 Petrology, Geochemistry and Formation Conditions of Metaophiolites of the Loki Crystalline Massif (the Caucasus)

Authors: Irakli Gamkrelidze, David Shengelia, Tamara Tsutsunava, Giorgi Chichinadze, Giorgi Beridze, Ketevan Tedliashvili, Tamara Tsamalashvili

Abstract:

The Loki crystalline massif crops out in the Caucasian region and in geological retrospective represents the northern marginal part of the Baiburt-Sevanian terrane (island arc), bordering the Paleotethys oceanic basin to the north. The pre-Alpine basement of the massif is built up of a Lower-Middle Paleozoic metamorphic complex (metasedimentary and metabasite rocks), Upper Devonian quartz-diorites and Late Variscan granites. Earlier, the metamorphic complex was considered an indivisible set including suites with different degrees of metamorphism. Systematic geologic, petrologic and geochemical investigations of the massif’s rocks suggest a different conception of the composition, structure and formation conditions of the massif. In particular, there are two main rock types in the Loki massif: the oldest autochthonous series of gneissic quartz-diorites and the granites cutting them. The massif is flanked on its western side by a volcano-sedimentary sequence metamorphosed to a low-temperature facies. Petrologic, metamorphic and structural differences within this sequence attest to the existence of a number of discrete units (overthrust sheets). One of them, the metabasic sheet, represents a fragment of an ophiolite complex. It comprises transitional types of the second and third layers of the paleo-oceanic crust: the upper, non-cumulate part of the gabbro component of the third layer and the lowest part of the sheeted diabase dykes of the second layer. The ophiolites are represented by metagabbros, metagabbro-diabases, metadiabases and amphibolite schists. Based on the content of petrogenic components and trace elements in the metabasites, it is established that the protolith of the metabasites belongs to the petrochemical type of the tholeiitic basalt series. The parental magma of the metaophiolites is of E-MORB composition and, by petrochemical parameters, is very close to the composition of intraplate basalts.
The dykes of hypabyssal leucocratic silicic and intermediate magmatic rocks associated with the metaophiolite sheet form a separate complex. They are granitoids with an extremely low CaO content and quartz-diorite porphyries. According to various petrochemical parameters, these rocks have mixed characteristics. Their formation took place under spreading conditions or in areas of plume activity, most likely of island-arc type. The metamorphic grade of the metaophiolites corresponds to a very low stage of the greenschist facies. The rocks of the metaophiolite complex were obducted from the Paleotethys Ocean. Geological and paleomagnetic data suggest that the primary location of this ocean was to the north of the Loki crystalline massif.

Keywords: the Caucasus, crystalline massif, ophiolites, tectonic sheet

Procedia PDF Downloads 255
253 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in overcoming the communication barrier for deaf people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce the corresponding alphanumerics. One problem with the technology currently in development is that training images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes the fingerspelling of a percentage of the population harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents an approach that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input from a set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z representing the system’s current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As the xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
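As a rough illustration of the corrector pipeline described above (centering, Kaiser-rule dimensionality reduction, whitening, and a separating hyperplane), here is a hedged single-cluster Python sketch; the function names and synthetic-data setup are our own assumptions, not the authors' implementation:

```python
import numpy as np

def fit_corrector(S, Y):
    """Fit a single-cluster linear corrector.

    S : all measurements generated by the legacy AI (rows = vectors x)
    Y : the subset of S corresponding to incorrect predictions
    Returns (mean, whitener, direction, threshold) defining a hyperplane
    beyond which new measurements are reported as likely errors.
    """
    mu = S.mean(axis=0)
    S_c, Y_c = S - mu, Y - mu                    # centre both sets
    vals, vecs = np.linalg.eigh(np.cov(S_c, rowvar=False))
    keep = vals > vals.mean()                    # Kaiser rule
    W = vecs[:, keep] / np.sqrt(vals[keep])      # project and whiten
    Z = Y_c @ W                                  # whitened error cloud
    w = Z.mean(axis=0)
    w /= np.linalg.norm(w)                       # direction of error cluster
    theta = (Z @ w).min()                        # errors lie at/beyond theta
    return mu, W, w, theta

def is_error(x, mu, W, w, theta):
    """True if x crosses the separating hyperplane (likely an error)."""
    return float((x - mu) @ W @ w) >= theta
```

In the method described, each positively correlated cluster of errors receives its own hyperplane; the sketch collapses this to one cluster for brevity.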

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 81
252 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions

Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes

Abstract:

The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers, as it stimulates resource circularity in production and consumption systems. A large body of work has explored the principles of CE, but scant attention has focused on analysing how CE is evaluated, agreed upon, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis for effective resource management. Given this limited attention, the present work focuses on regional flows in a pilot region of Portugal. By addressing this gap, this study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally grouped under the heading of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; ML is introduced to identify its place in data science; and the differences in topics such as big data analytics, and the production problems for which AI and ML are used, are identified.
A lifecycle-based approach is then taken to analyse the use of different methods in each phase, to identify the most useful technologies and the unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are directed mainly at the context of large metropolises, neglecting rural territories. Within this project, a dynamic decision support model coupled with artificial intelligence tools and information platforms will therefore be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will surpass the scientific developments carried out to date and will overcome limitations related to the availability and reliability of data.

Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning

Procedia PDF Downloads 45
251 Exploring Drivers and Barriers to Environmental Supply Chain Management in the Pharmaceutical Industry of Ghana

Authors: Gifty Kumadey, Albert Tchey Agbenyegah

Abstract:

(i) Overview and research goal(s): This study aims to address research gaps in the Ghanaian pharmaceutical industry by examining the impact of environmental supply chain management (ESCM) practices on environmental and operational performance. Previous studies have provided inconclusive evidence on the relationship between ESCM practices and environmental and operational performance, and the research aims to provide a clearer understanding of this relationship in the context of the Ghanaian pharmaceutical industry. Limited research has been conducted on ESCM practices in developing countries, particularly in Africa; the study aims to bridge this gap by examining the drivers and barriers specific to the pharmaceutical industry in Ghana. It analyzes the impact of ESCM practices on the achievement of the Sustainable Development Goals (SDGs) in the Ghanaian pharmaceutical industry, focusing on SDGs 3, 12, 13, and 17, and explores the potential for partnerships and collaborations to advance ESCM practices in the industry. The research hypotheses suggest that pressure from stakeholders positively influences the adoption of ESCM practices in the Ghanaian pharmaceutical industry. By addressing these goals, the study aims to contribute to sustainable development initiatives and offer practical recommendations to enhance ESCM practices in the industry. (ii) Research methods and data: This study uses a quantitative research design to examine the drivers and barriers to environmental supply chain management in the pharmaceutical industry in Accra. The sample size is approximately 150 employees, comprising senior and middle-level managers from the pharmaceutical industry of Ghana. A purposive sampling technique is used to select participants with relevant knowledge and experience in environmental supply chain management. Data will be collected using a structured questionnaire with Likert-scale responses.
Descriptive statistics will be used to analyze the data and provide insights into current practices and their impact on environmental and operational performance. (iii) Preliminary results and conclusions: Main contributions: identifying drivers of and barriers to ESCM in Ghana's pharmaceutical industry, evaluating current ESCM practices, examining their impact on performance, providing practical insights, and contributing to knowledge on ESCM in the Ghanaian context. The research contributes to SDGs 3, 9, and 12 by promoting sustainable practices and responsible consumption in the industry. The study found that government rules and regulations are the most critical drivers of ESCM adoption, with senior managers playing a significant role, whereas employee and competitor pressures have a lesser impact. The industry has made progress in implementing certain ESCM practices, but there is room for improvement in areas such as green distribution and reverse logistics. The study emphasizes the importance of government support, management engagement, and comprehensive implementation of ESCM practices in the industry. Future research should focus on overcoming barriers and challenges to effective ESCM implementation.

Keywords: environmental supply chain, sustainable development goals, Ghana pharmaceutical industry, government regulations

Procedia PDF Downloads 57
250 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor one-resistor (1T1R) architecture, with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin-film transistor (OTFT), is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al), as the gate electrode, was deposited via a radio-frequency (RF) magnetron sputtering system. Barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate were synthesized using the sol-gel method. After the BZN solution was completely prepared by the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using an RF magnetron sputtering system and defined through shadow masks as both source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT. A simple metal/insulator/metal structure, consisting of Al/TiO₂/Au, was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by RF magnetron sputtering; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operating current of 0.5 μA, and reliable data retention.
Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, through the different distributions of the internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon is well explained by the proposed mechanism model. These results make the 1T1R configuration promising for practical applications in low-power active-matrix flat-panel displays.

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 329
249 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits

Authors: Bryan Low

Abstract:

In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net-zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties from an oil slick. Chemically, ammonia can be absorbed by phytoplankton, altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves as a nutrient in coastal ecosystems at lower concentrations; at higher concentrations, however, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, given prevailing hydrodynamic conditions, and its environmental impact. This will be essential as ammonia bunkering becomes more commonplace in Singapore’s ports and around the world.
Specifically, this system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or to choose a time window with ideal hydrodynamic conditions for conducting ammonia bunkering operations at minimal risk. In this paper, we present the first part of such a forecasting system: a coupled hydrodynamic-water quality model that captures how advection-diffusion processes driven by ocean currents influence plume behaviour and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, the impact on local ecosystems, and mitigation measures for future bunkering operations in the Singapore Straits.
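The transport component of such a coupled model reduces, in its simplest form, to an advection-diffusion equation with a loss term for uptake into the nitrogen cycle. The following is only a schematic 1-D Python sketch with invented parameter values, far simpler than the coupled hydrodynamic-water quality model described:

```python
import numpy as np

def advect_diffuse(c0, u=0.5, D=1.0, dx=10.0, dt=1.0, steps=100, k=1e-3):
    """Explicit 1-D advection-diffusion of an ammonia concentration profile.

    c0 : initial concentration on a uniform grid (arbitrary units)
    u  : current speed (m/s); D : diffusivity (m^2/s); dx, dt : grid steps
    k  : first-order loss rate standing in for nitrogen-cycle uptake
    Upwind advection and central diffusion; all values are illustrative.
    """
    assert u * dt / dx <= 1.0 and D * dt / dx ** 2 <= 0.5  # stability limits
    c = c0.astype(float).copy()
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx                 # upwind (u > 0)
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
        c += dt * (adv + dif - k * c)
        c[0] = c[-1] = 0.0                                  # open boundaries
    return c
```

A released spike of ammonia then drifts downstream with the current while spreading and losing mass, the qualitative behaviour the forecasting system must resolve in two or three dimensions.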

Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling

Procedia PDF Downloads 46
248 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications

Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz

Abstract:

GFP, discovered in the mid-1970s, has been widely used as a marker in genetic studies. In biotechnology and in cell and molecular biology, the GFP gene is frequently used as a reporter of expression, and in modified forms it has been used to make biosensors. Many animals have been created that express GFP, as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be determined; thus, cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed and signalling events can be detected. Additionally, monitoring GFP is noninvasive: it can be detected under UV light simply because it generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to interfere with the biological processes of interest. The synthesis of GFP involves several steps: constructing the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the GFP DNA fragment was gel-purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexahistidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21(DE3)pLysE strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates under kanamycin and chloramphenicol selection. E. coli cells were grown to an optical density (OD 600) of 0.8, induced by the addition of isopropyl-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexahistidine tag facilitated purification of the GFP using a His-Bind affinity chromatography resin (Novagen).
The purity of the GFP protein was analyzed by 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). The protein concentration was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using the GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting material and template, respectively. For the fabrication of nanofibers with different fiber diameters, sol-gel solutions comprising 0.40, 0.60 or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL of a water:ethanol (3:2) mixture were prepared, and the solution was then deposited on a collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL h⁻¹ using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in numerous biomedical applications such as bio-imaging, biomechanics, biomaterials and tissue engineering.

Keywords: biomaterial, GFP, nano-fibers, protein expression

Procedia PDF Downloads 286
247 Brazilian Brown Propolis as a Natural Source against Leishmania amazonensis

Authors: Victor Pena Ribeiro, Caroline Arruda, Jennyfer Andrea Aldana Mejia, Jairo Kenupp Bastos

Abstract:

Leishmaniasis is a serious health problem around the world. The treatment of infected individuals with pentavalent antimonial drugs is the main therapeutic strategy; however, these drugs present high toxicity and persistent side effects. Therefore, the discovery of new, safe, naturally derived therapeutic agents against leishmaniasis is important. Propolis is a resin of viscous consistency produced by Apis mellifera bees from parts of plants. The main types of Brazilian propolis are green, red, yellow and brown. Thus, the aim of this work was to investigate the chemical composition and leishmanicidal properties of a brown propolis (BP). For this purpose, the hydroalcoholic crude extract of BP was obtained and fractionated by liquid-liquid chromatography. The chemical profiles of the extract and its fractions were obtained by HPLC-UV-DAD. The fractions were submitted to preparative HPLC for isolation of the major compounds of each fraction, which were analyzed by NMR for structural determination. The volatile compounds were obtained by hydrodistillation and identified by GC/MS. Promastigote forms of Leishmania amazonensis were cultivated in M199 medium, and 2×10⁶ parasites·mL⁻¹ were then incubated in 96-well microtiter plates with the samples. The BP was dissolved in dimethyl sulfoxide (DMSO) and diluted into the medium to give final concentrations of 1.56, 3.12, 6.25, 12.5, 25 and 50 µg·mL⁻¹. The plates were incubated at 25 °C for 24 h, and the lysis percentage was determined using a Neubauer chamber. The bioassays were performed in triplicate, using medium with 0.5% DMSO as a negative control and amphotericin B as a positive control. The leishmanicidal effect against amastigote forms was also evaluated at the same concentrations. Cytotoxicity experiments were also performed in 96-well plates against a normal cell line (CHO-K1) and tumor cell lines (AGP01 and HeLa) using the XTT colorimetric method.
Phenolic compounds, flavonoids, and terpenoids were identified in the brown propolis. The major compounds were identified as p-coumaric acid (24.6%) in the methanolic fraction and artepillin C (29.2%) in the ethyl acetate fraction; the compounds of the hexane fraction are undergoing structural elucidation. The major volatile compounds identified were β-caryophyllene (10.9%), germacrene D (9.7%), nerolidol (10.8%) and spathulenol (8.5%). The propolis did not show cytotoxicity against the normal cell line (CHO-K1), with IC₅₀ > 100 μg·mL⁻¹, whereas an IC₅₀ < 10 μg·mL⁻¹ indicated potential activity against the AGP01 cell line; propolis did not demonstrate cytotoxicity against the HeLa cell line (IC₅₀ > 100 μg·mL⁻¹). In the determination of the leishmanicidal activity, the highest (50 μg·mL⁻¹) and lowest (1.56 μg·mL⁻¹) concentrations of the crude extract caused the lysis of 76% and 45% of the promastigote forms of L. amazonensis, respectively. For the amastigote form, the highest (50 μg·mL⁻¹) and lowest (1.56 μg·mL⁻¹) concentrations caused the mortality of 89% and 75% of L. amazonensis, respectively. The IC₅₀ was 2.8 μg·mL⁻¹ for the amastigote form and 3.9 μg·mL⁻¹ for the promastigote form, showing promising activity against Leishmania amazonensis.
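IC₅₀ values such as those above are derived from the measured dose-response points; a minimal way to reproduce such an estimate is linear interpolation on log-concentration, sketched below in Python. The intermediate lysis percentages are invented for illustration; only the 45% and 76% endpoints come from the abstract, and real dose-response data are usually fitted with a four-parameter logistic model instead:

```python
import numpy as np

def ic50(conc, effect):
    """Estimate the IC50 by linear interpolation on log10(concentration).

    conc   : tested concentrations in ug/mL, ascending
    effect : % lysis/inhibition at each concentration (assumed increasing)
    """
    return float(10 ** np.interp(50.0, effect, np.log10(conc)))

# Illustrative dose-response: endpoints match the abstract (45% lysis at
# 1.56 ug/mL, 76% at 50 ug/mL); the middle values are invented.
conc = [1.56, 3.12, 6.25, 12.5, 25.0, 50.0]
effect = [45.0, 55.0, 63.0, 70.0, 74.0, 76.0]
```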

Keywords: amastigote, brown propolis, cytotoxicity, promastigote

Procedia PDF Downloads 130