Search results for: low-temperature heat source

651 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes

Authors: Iris Vural Gursel, Andrea Ramirez

Abstract:

Effective utilization of lignin is an important means for developing economically profitable biorefineries. Current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will, therefore, be needed to carry lignin transformation well beyond combustion for energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes and a comparison with the more established lignin pyrolysis route were made. The aim is to provide insight into the potential performance and potential hotspots in order to guide experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were: (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. Products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design followed by process modelling in Aspen was done using experimental yields. A design capacity of 200 kt/year of lignin feed was chosen, equivalent to a 1 Mt/year lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the defined product streams. For determining the external utility requirement, heat integration was considered, and where possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators. The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process was found feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was seen to be the most promising.
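
To make the two economic indicators concrete, the short sketch below computes ROI and a simple undiscounted payback period; all figures are illustrative placeholders, not values taken from the assessment.

```python
# Sketch of the two economic indicators used in the assessment (ROI and PBP).
# The input figures are illustrative placeholders, not results from the study,
# and the simplest undiscounted definitions are assumed.

def roi(annual_profit: float, capital_cost: float) -> float:
    """Return on investment as a fraction of the capital cost."""
    return annual_profit / capital_cost

def payback_period(capital_cost: float, annual_profit: float) -> float:
    """Undiscounted payback period in years."""
    return capital_cost / annual_profit

capital = 100.0          # M EUR, hypothetical total capital investment
revenue = 62.0           # M EUR/year, hypothetical product revenue
operating_cost = 42.0    # M EUR/year, hypothetical operating cost
profit = revenue - operating_cost

print(f"ROI: {roi(profit, capital):.1%}")
print(f"PBP: {payback_period(capital, profit):.1f} years")
```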

Keywords: biorefinery, economic assessment, lignin conversion, process design

Procedia PDF Downloads 248
650 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally done using an orthotropic formulation of the hyperelastic model.
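
As a rough sketch of what a reduced-invariant formulation can look like (a generic assumption for illustration, not the authors' exact semi-numerical model), a compressible transversely isotropic strain-energy function may be split as

```latex
W(\mathbf{C}) \;=\; W_{\mathrm{iso}}\!\left(\bar{I}_1\right)
              \;+\; W_{\mathrm{aniso}}\!\left(\bar{I}_4\right)
              \;+\; U(J),
\qquad
\bar{I}_1 = J^{-2/3}\operatorname{tr}\mathbf{C},
\quad
\bar{I}_4 = J^{-2/3}\,\mathbf{a}_0\!\cdot\!\mathbf{C}\,\mathbf{a}_0,
\quad
J = \det\mathbf{F}.
```

Here a0 is the preferred material direction; in such a split, the isotropic part can be calibrated from the single uniaxial test, the anisotropic part from the additional test in the second principal direction, and U(J) controls the volumetric response.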

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 604
649 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography and environmental impacts, as well as several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method of generating electricity. Wind energy generation is directly related to the characteristics of the spatial wind field. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects away from conventional electricity services, to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model in the ArcGIS program to analyze the study area. It is a spatial analysis model that compares more than one location based on grading weights to choose the most suitable one. The study criteria are: average annual wind speed, land use, topography, distance from the main road networks, and distance from urban areas. According to these criteria, the four proposed locations for establishing wind farm projects were selected based on the weights of the degree of suitability (excellent, good, average, and poor). The areas representing the most suitable locations with an excellent rank (4) cover 8% of Kuwait’s area, distributed as follows: Al-Shqaya, Al-Dabdeba, Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider the proposed location (No. 1), (Al-Shqaya, Al-Dabdaba, and Al-Salmi), as the most suitable location for future development of wind farms in Kuwait, as this location is economically feasible.
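
A minimal sketch of the weighted-overlay step behind such a suitability model is given below; the criterion rasters and weights are illustrative assumptions, not the values used in the study.

```python
import numpy as np

# Weighted-overlay sketch: each criterion raster is assumed to be already
# reclassified to a 1-4 suitability scale (4 = excellent). The weights and
# the tiny 2x2 rasters are illustrative only.
criteria = {
    "wind_speed":  np.array([[4, 3], [2, 1]]),
    "land_use":    np.array([[3, 3], [2, 4]]),
    "topography":  np.array([[4, 2], [3, 3]]),
    "road_access": np.array([[2, 4], [3, 1]]),
}
weights = {"wind_speed": 0.4, "land_use": 0.2, "topography": 0.2, "road_access": 0.2}

score = sum(w * criteria[name] for name, w in weights.items())
rank = np.rint(score).astype(int)   # back to the 1-4 ranking (poor .. excellent)
print(rank)
```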

Keywords: Kuwait, renewable energy, spatial analysis, wind energy

Procedia PDF Downloads 130
648 Consumer Knowledge and Behavior in the Aspect of Food Waste

Authors: Katarzyna Neffe-Skocinska, Marzena Tomaszewska, Beata Bilska, Dorota Zielinska, Monika Trzaskowska, Anna Lepecka, Danuta Kolozyn-Krajewska

Abstract:

The aim of the study was to assess Polish consumer behavior towards food waste, including knowledge of the information on food labels. The survey was carried out using the CAPI (computer-assisted personal interview) method, which involves interviewing the respondent using mobile devices. The research group was a representative sample for Poland with respect to the demographic variables gender, age, and place of residence. A total of 1,115 respondents participated in the study (51.1% women and 48.9% men). The questionnaire included questions on five thematic aspects: 1. General knowledge and sources of information on the phenomenon of food waste; 2. Consumption of food after the date of minimum durability; 3. The meaning of the phrase 'best before ...'; 4. Indication of the difference between the meaning of the phrases 'best before ...' and 'use by'; 5. Indication of products marked with the phrase 'best before ...'. It was found that every second surveyed Pole had encountered the topic of food waste (54.8%). Among the respondents, the most popular sources of information related to the research topic were television (89.4%), radio (26%) and the Internet (24%). Over a third of respondents declared that they consume food after the date of minimum durability. Only every tenth (9.8%) respondent does not pay attention to the expiry date and type of consumed products (durable and perishable products). 39.8% of respondents correctly answered the question: How do you understand the phrase 'best before ...'? In the opinion of 42.8% of respondents, the phrases 'best before ...' and 'use by' mean the same thing, while 36% of them think differently. In addition, more than one-fifth of respondents could not answer this question. When asked to indicate products marked with the phrase 'best before ...', more than 40% of the respondents chose both perishable products, e.g., yoghurts, and durable ones, e.g., groats. A slightly lower percentage of indications was recorded for flour (35.1%), sausage (32.8%), canned corn (31.8%), and eggs (25.0%). Based on the assessment of the behavior of Polish consumers towards the phenomenon of food waste, it can be concluded that respondents have elementary knowledge of the study subject. Noteworthy is the good conduct of most respondents in terms of compliance with the shelf life and dates of minimum durability of food products. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of a project under the strategic research and development program 'social and economic development of Poland in the conditions of globalizing markets' – GOSPOSTRATEG – acronym PROM.

Keywords: food waste, shelf life, dates of durability, consumer knowledge and behavior

Procedia PDF Downloads 158
647 Collaboration with Governmental Stakeholders in Positioning Reputation on Value

Authors: Zeynep Genel

Abstract:

The concept of reputation in corporate development has come to the fore as one of the most frequently discussed topics in recent years. Many organizations that make worldwide investments make an effort to adapt themselves to the topics within the scope of this concept and to promote the name of the organization through the values that might become prominent. Stakeholder groups are considered the most important actors determining reputation. Even when the effect of stakeholders is not evaluated as a direct factor, the indirect effects of their perceptions have a very strong influence on the ultimate reputation. It is foreseen that the parallelism between the projected reputation and the perceived reputation, which is established as a result of the communication experiences perceived by the stakeholders, has an important effect on achieving these objectives. In assessing the efficiency of these efforts, the opinions of stakeholders are widely utilized. In other words, the projected reputation, in which the positive and/or negative reflections of corporate communication play an effective role, is measured through how the stakeholders perceptively position the organization. From this perspective, it is thought that the interaction and cooperation of corporate communication professionals with different stakeholder groups during reputation positioning efforts play a significant role in achieving the targeted reputation and in the sustainability of this value. Governmental stakeholders, which have intense communication with mass stakeholder groups, are among the most influential stakeholder groups of an organization. The most important reason for this is that organizations about which governmental stakeholders have a positive perception inspire more confidence in the mass stakeholders. At this point, organizations carrying out joint projects with governmental stakeholders in parallel with a sustainable communication approach come to the fore as organizations with a strong reputation, whereas the reputation of organizations that fall behind in this regard, or that cannot establish efficiency in this respect, is thought to be perceived as weak. Similarly, social responsibility campaigns in which governmental stakeholders are involved, and which play an efficient role in strengthening reputation, are thought to draw more attention. From this perspective, the role and effect of governmental stakeholders on reputation positioning are discussed in this study. In parallel with this objective, the study aims to reveal the perspectives of seven governmental stakeholders towards cooperation in reputation positioning. The sample group representing the governmental stakeholders is examined in the light of results obtained from in-depth interviews with executives of different ministries. It is asserted that this study, which aims to express the importance of stakeholder participation in corporate reputation positioning, especially in Turkey, and the effective role of governmental stakeholders in a strong reputation, may provide a new perspective on measuring corporate reputation, as well as constitute an important source to contribute to studies in both academic and practical domains.

Keywords: collaborative communications, reputation management, stakeholder engagement, ultimate reputation

Procedia PDF Downloads 213
646 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
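
As a sketch of the general adjoint-state construction underlying such derivatives (a standard identity shown for background, not the paper's specific bed-boundary formulation), let u denote the state solution, R(u, p) = 0 the discretized forward problem, and J(p) = j(u(p), p) a recorded measurement; then

```latex
\frac{\mathrm{d}J}{\mathrm{d}p}
  \;=\; \frac{\partial j}{\partial p}
  \;-\; \lambda^{\top}\,\frac{\partial R}{\partial p},
\qquad\text{where}\quad
\left(\frac{\partial R}{\partial u}\right)^{\!\top}\lambda
  \;=\;
\left(\frac{\partial j}{\partial u}\right)^{\!\top}.
```

The cost of the single adjoint solve for lambda is shared by all parameters p (here, the bed boundary positions), which is why such formulations are cheaper than finite differences, where each derivative requires an additional forward solve.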

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 164
645 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. For most aircraft categories, tire assemblies are inflated with compressed nitrogen at a recommended pressure that supports the aircraft’s weight on the ground, provides a means of controlling the aircraft during taxi, takeoff and landing, and provides traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. When the ambient temperature differs between the origin and destination airports, the tire pressure should be adjusted so that the tire is inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of the specified aircraft configuration; without it, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts the aircraft tire pressure based on weight, load, temperature, and the weather conditions at the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption, and could further improve the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle at 1,800 psi, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear wheel area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Control and monitoring of the nitrogen are performed by a computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, the fuel quantity, and the wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, consumable materials, and the stresses imposed on the aircraft body.
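
A simplified sketch of the temperature correction such a system might compute is shown below (constant-volume ideal-gas relation; the numbers are illustrative, and a real system would also account for load, altitude and tire flex).

```python
# Constant-volume (Gay-Lussac) estimate of the inflation pressure needed at a
# warmer origin airport so the tire still reads the target service pressure at
# a colder destination. Illustrative only; ignores load and altitude effects.

def required_origin_pressure_psi(p_target_psi: float,
                                 t_origin_c: float,
                                 t_destination_c: float) -> float:
    t_origin_k = t_origin_c + 273.15
    t_destination_k = t_destination_c + 273.15
    return p_target_psi * t_origin_k / t_destination_k

# Example: 200 psi is required at a -5 degC destination; inflating at a 30 degC origin
print(f"{required_origin_pressure_psi(200.0, 30.0, -5.0):.1f} psi")  # about 226 psi
```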

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 225
644 Oncology and Phytomedicine in the Advancement of Cancer Therapy for Better Patient Care

Authors: Hailemeleak Regassa

Abstract:

Traditional medicines use medicinal plants as a source of ingredients, and many modern medications are indirectly derived from plants. Consumers in affluent nations are growing disenchanted with contemporary healthcare and looking for alternatives. Oxidative stress is a primary cause of multiple diseases, and exogenous antioxidant supplementation or strengthening the body's endogenous antioxidant defenses are potential ways to counteract the negative effects of oxidative damage. Plants can biosynthesize non-enzymatic antioxidants that can reduce ROS-induced oxidative damage. Aging often aids the propagation and development of carcinogenesis, and older animals and older people exhibit increased vulnerability to tumor promoters. Cancer is a major public health issue, with several anti-cancer medications in clinical use. Potential drugs such as flavopiridol, roscovitine, combretastatin A-4, betulinic acid, and silvestrol are in the clinical or preclinical stages of research. Methodology: Microbial growth media, dimethyl sulfoxide (DMSO), methanol, ethyl acetate, and n-hexane were obtained from Himedia Labs, Mumbai, India. The plants were collected from the Herbal Garden of the Shoolini University campus, Solan, India (latitude 30.8644° N, longitude 77.1184° E). The identity was confirmed by Dr. Y.S. Parmar University of Horticulture and Forestry, Nauni, Solan (H.P.), India, and documented in voucher specimens - UHF Herbarium no. 13784; vide book no. 3818, receipt no. 086. The plant materials were washed with tap water and 0.1% mercury chloride for 2 minutes, rinsed with distilled water, air dried, and kept in a hot air oven at 40 °C on blotting paper until all the water had evaporated and the material was well dried for grinding. After drying, the plant materials were ground into a fine powder using a mixer grinder, transferred into airtight containers with proper labeling, and stored at 4 °C for future use (Horablaga et al., 2023). The extraction process was done according to Altemimi et al., 2017. Five grams of powder was mixed with 15 ml of each of the respective solvents (n-hexane, ethyl acetate, and methanol) and kept for 4-5 days on a platform shaker. The solvents were used in order of increasing polarity. The extract was then centrifuged at 10,000 rpm for 5 minutes and filtered using No. 1 Whatman filter paper.

Keywords: cancer, phytomedicine, medicinal plants, oncology

Procedia PDF Downloads 48
643 Technology Assessment of the Collection of Cast Seaweed and Use as Feedstock for Biogas Production - The Case of Solrød, Denmark

Authors: Rikke Lybæk, Tyge Kjær

Abstract:

The Baltic Sea is suffering from nitrogen and phosphorus pollution, which causes eutrophication of the maritime environment and hence threatens the biodiversity of the Baltic Sea area. The intensified quantity of nutrients in the water has created challenges with the growth of seaweed being cast onto beaches around the sea. The cast seaweed has led to odor problems hampering the use of beach areas around the Bay of Køge in Denmark. This is the case in, e.g., Solrød Municipality, where recreational activities have been disrupted when cast seaweed piles up on the beach. Initiatives have, however, been introduced within the municipality to remove the cast seaweed from the beach and utilize it for renewable energy production at the nearby Solrød Biogas Plant, where it is co-digested with animal manure for power and heat production. This paper investigates which technology applications have been applied in the effort to optimize the collection of cast seaweed, and further reveals how the seaweed has been pre-treated at the biogas plant to be utilized for energy production most efficiently, including the challenges connected with the sand content. The heavy metal content of the seaweed and how it is managed will also be addressed, which is vital as the digestate is utilized as soil fertilizer on nearby farms. Finally, the paper will outline the energy production scheme connected to the use of seaweed as feedstock for biogas production, as well as the amount of nitrogen-rich fertilizer produced. The theoretical approach adopted in the paper relies on the thinking of the Circular Bio-Economy, where biological materials are cascaded, re-circulated, etc., to increase and extend their value and usability. The data for this research were collected as part of the EU Interreg project “Cluster On Anaerobic digestion, environmental Services, and nuTrients removAL” (COASTAL Biogas), 2014-2020. Data gathering consists of, e.g., interviews with relevant stakeholders connected to seaweed collection and the operation of the biogas plant in Solrød Municipality. It further entails studies of progress and evaluation reports from the municipality, analysis of seaweed digestion results from scholars connected to the research, as well as studies of the scientific literature to supplement the above. Besides this, observations and photo documentation have been applied in the field. This paper concludes, among other things, that the seaweed harvester technology currently adopted is functional in the maritime environment close to the beachfront but inadequate for collecting seaweed directly on the beach. New technology hence needs to be developed to increase the efficiency of seaweed collection. It is further concluded that the amount of sand transported to Solrød Biogas Plant with the seaweed continues to pose challenges. The seaweed is pre-treated for sand in a receiving tank with a strong stirrer, which washes off the sand; the sand settles at the bottom of the tank, where it is collected. The seaweed is then chopped by a macerator and mixed with the other feedstock. The wear on the receiving tank stirrer and the chopper is, however, significant, and new methods should be adopted.

Keywords: biogas, circular bio-economy, Denmark, maritime technology, cast seaweed, solrød municipality

Procedia PDF Downloads 272
642 Potency of Minapolitan Area Development to Enhance Gross Domestic Product and Prosperity in Indonesia

Authors: Shobrina Silmi Qori Tarlita, Fariz Kukuh Harwinda

Abstract:

Indonesia has 81,000 kilometers of coastline and 70% of its surface is water, which is why it is known as a country with huge potential in the fisheries sector, a sector that could support more than 50% of Gross Domestic Product. However, according to Department of Marine and Fisheries data, the fisheries sector supported only 20% of total GDP in 1998. Moreover, the sharpest decline in fisheries sector income occurred in 2009. These conditions arise from several factors, including the lack of an integrated working platform for fisheries and marine management in areas with high productivity that could increase the country's economic profit every year, and the fact that labor demand in companies, whether large or small, depends on natural conditions, which leaves many people unemployed when the weather or other natural conditions are unfavorable for fisheries and marine management, especially aquaculture and capture fisheries. In addition, many fishermen, especially in Indonesia, treat fishing as an additional or side job to fulfill their own needs, even though they are, on average, poor. Another major problem is the lack of sustainable development programs to stabilize the productivity of fisheries and marine natural resources, such as protecting the environment of fish nursery grounds and migration channels; this results in low productivity of fisheries and marine natural resources, even as the growing population of Indonesia needs more food resources to meet its nutritional demands. The development of a Minapolitan Area is one alternative solution to build a better environment for aquaculturists as well as fishermen, focusing on systemic and business efforts for fisheries and marine management. A Minapolitan is a kind of integrated area that gathers and integrates those who focus their efforts and businesses on the fisheries sector, so that it is capable of triggering fishery activity in the area where Minapolitan management is applied intensively. Minapolitan development is therefore expected to reinforce sustainable development by increasing the productivity of capture fisheries as well as aquaculture, and it is also expected to increase GDP, raise the earnings of many people, and bring prosperity around the world. Against this background, this paper explains the Minapolitan Area concept and the design for reinforcing Minapolitan Areas by zonation of fishery and marine exploitation areas with both high and low productivity. Hopefully, this solution will be able to answer the economic and social issues of declining food resources, especially fishery and marine resources.

Keywords: Minapolitan, fisheries, economy, Indonesia

Procedia PDF Downloads 451
641 Anatomical and Histochemical Investigation of the Leaf of Vitex agnus-castus L.

Authors: S. Mamoucha, J. Rahul, N. Christodoulakis

Abstract:

Introduction: Nature has been the source of medicinal agents since the dawn of human existence on Earth. Currently, millions of people in the developing world rely on medicinal plants for primary health care, income generation and lifespan improvement. In Greece, more than 5,500 plant taxa are reported, while about 250 of them are considered to be of great pharmaceutical importance. Among the plants used for medical purposes, Vitex agnus-castus L. (Verbenaceae) has been known since ancient times. It is a small tree or shrub, widely distributed from the Mediterranean basin up to Central Asia. It is also known as chaste tree or monk's pepper. Theophrastus mentioned the shrub several times, as ‘agnos’, in his ‘Enquiry into Plants’. Dioscorides mentioned the use of V. agnus-castus for the stimulation of lactation in nursing mothers and the treatment of several female disorders. The plant has important medicinal properties and a long tradition in folk medicine as an antimicrobial, diuretic, digestive and insecticidal agent. Materials and methods: Leaves were cleaned, detached, fixed, sectioned and investigated with light and scanning electron microscopy (SEM). Histochemical tests were carried out as well. Specific histochemical reagents (osmium tetroxide, H2SO4, vanillin/HCl, antimony trichloride, Wagner’s reagent, Dittmar’s reagent, potassium bichromate, nitroso reaction, ferric chloride and dimethoxybenzaldehyde) were used for the subcellular localization of secondary metabolites. Results: Light microscopic investigation of the elongated leaves of V. agnus-castus revealed three layers of palisade parenchyma just below the single-layered adaxial epidermis. The spongy parenchyma is rather loose. Adaxial epidermal cells are larger than those of the abaxial epidermis. Four different types of capitate secreting trichomes were localized among the abaxial epidermal cells. Stomata were observed at the abaxial epidermis as well. SEM revealed the interesting arrangement of the trichomes. Histochemical treatment of fresh and plastic-embedded tissue sections revealed the nature and the sites of accumulation of secondary metabolites (flavonoids, steroids, terpenes). Acknowledgment: This work was supported by IKY - State Scholarship Foundation, Athens, Greece.

Keywords: Vitex agnus-castus, leaf anatomy, histochemical reagents, secondary metabolites

Procedia PDF Downloads 369
640 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions

Authors: Mohamed Ahmed M. Azab

Abstract:

The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several countries of the Arabian Gulf. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising solution to this problem with zero harmful emissions (a green system). The main objective of this work is to conduct a feasibility study of utilizing small autonomous water desalination units powered by photovoltaic modules as a green, renewable energy resource to be employed in isolated zones as a source of drinking water for scattered communities where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as a case study, as it hosts the Renewable Energy Center, which is equipped with all the sensors needed to assess the availability of solar energy throughout the year. The study included two types of available water: the first is brackish well water and the second is seawater from coastal regions. In the case of well water, two versions of desalination units are involved in the study: the first version is based on daytime operation only, while the second version also takes night operation into consideration, which requires an energy storage system such as batteries to provide the necessary electric power at night. According to the results of the feasibility study, the utilization of small autonomous desalination units is applicable and economically acceptable in the case of brackish well water. In the case of seawater, the capital costs are extremely high and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided. In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds additional capital cost for the batteries and extra running cost for their replacement, which makes the unit water price uncompetitive not only with the day-only unit but also with conventional units powered by diesel generators (fossil fuel), owing to the low fuel prices in the Kingdom. However, the cost analysis shows that the price of the produced water per cubic meter from the day-night unit is similar to that from the day-only unit, provided that the day-night unit operates, in theory, for a period about 50% longer.
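
A rough sketch of the unit-water-cost comparison underlying such conclusions is given below; every figure is a placeholder chosen for illustration, not a result of the feasibility study.

```python
# Undiscounted unit-water-cost sketch comparing a day-only PV-RO unit with a
# day-night (battery-backed) unit. All figures are illustrative placeholders.

def unit_water_cost(capex: float, annual_opex: float,
                    annual_m3: float, lifetime_years: int) -> float:
    """Lifetime cost divided by lifetime water production, in $/m3."""
    total_cost = capex + annual_opex * lifetime_years
    total_water = annual_m3 * lifetime_years
    return total_cost / total_water

day_only = unit_water_cost(capex=40_000, annual_opex=1_500,
                           annual_m3=3_000, lifetime_years=20)
# Batteries add capital cost and replacement cost (as extra opex), but the
# same membrane capacity runs ~50% longer each day and produces more water.
day_night = unit_water_cost(capex=55_000, annual_opex=3_500,
                            annual_m3=4_500, lifetime_years=20)

print(f"day-only : {day_only:.2f} $/m3")
print(f"day-night: {day_night:.2f} $/m3")
```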

Keywords: solar energy, water desalination, reverse osmosis, arid regions

Procedia PDF Downloads 432
639 Geographical Information System and Multi-Criteria Based Approach to Locate Suitable Sites for Industries to Minimize Agriculture Land Use Changes in Bangladesh

Authors: Nazia Muhsin, Tofael Ahamed, Ryozo Noguchi, Tomohiro Takigawa

Abstract:

One of the most challenging issues in achieving sustainable development of food security is land use change. The crisis of land for agricultural production mainly arises from the unplanned transformation of agricultural lands to infrastructure development, i.e., urbanization and industrialization. Land use without sustainability assessment could have an impact on food security and environmental protection. Bangladesh, a densely populated country with limited arable land, is now facing challenges in meeting sustainable food security. Agricultural lands are being used for economic growth by establishing industries. The industries are spreading from urban areas to suburban areas and are using agricultural lands. To minimize agricultural land losses due to unplanned industrialization, compact economic zones should be identified using a scientific approach. Therefore, the purpose of the study was to find suitable sites for industrial growth by land suitability analysis (LSA) using a Geographical Information System (GIS) and multi-criteria analysis (MCA). The goal of the study was to consider both agricultural lands and industries for sustainable development in land use. The study also analyzed agricultural land use changes in a suburban area using statistical data on agricultural lands and primary data on the existing industries of the study area. The criteria selected for the LSA were proximity to major roads, proximity to local roads, and distance to rivers, waterbodies, settlements, flood-flow zones, and agricultural lands. The spatial datasets for the criteria were collected from the respective departments of Bangladesh. In addition, the elevation dataset was obtained from the SRTM (Shuttle Radar Topography Mission) data source. The criteria were further analyzed with factors and constraints in ArcGIS®. Expert opinion was applied for weighting the criteria according to the analytical hierarchy process (AHP), a multi-criteria technique. The decision rule was set by using the ‘weighted overlay’ tool to aggregate the factors and constraints with the weights of the criteria. The LSA found that only 5% of the land was most suitable for industrial sites, with few compact areas for industrial zones. The developed LSA is expected to help land use policy makers and urban developers ensure the sustainability of land use and agricultural production.
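
A minimal sketch of the AHP weighting step is shown below; the 3x3 pairwise comparison matrix (major roads, rivers, settlements) is an illustrative assumption, not the experts' actual judgements.

```python
import numpy as np

# AHP weighting sketch: criterion weights come from the principal eigenvector
# of an expert pairwise comparison matrix. The matrix below is illustrative.
A = np.array([
    [1.0, 3.0, 5.0],   # major roads vs. (major roads, rivers, settlements)
    [1/3, 1.0, 2.0],   # rivers
    [1/5, 1/2, 1.0],   # settlements
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # normalized criterion weights

# Saaty consistency check: CR below ~0.1 is usually considered acceptable.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```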

Keywords: AHP (analytical hierarchy process), GIS (geographic information system), LSA (land suitability analysis), MCA (multi-criteria analysis)

Procedia PDF Downloads 252
638 Contribution of the Corn Milling Industry to a Global and Circular Economy

Authors: A. B. Moldes, X. Vecino, L. Rodriguez-López, J. M. Dominguez, J. M. Cruz

Abstract:

The concept of the circular economy focuses on the importance of providing goods and services sustainably. Thus, in the future it will be necessary to respond to environmental contamination and to the use of renewable substrates by moving to a more restorative economic system that drives the utilization and revalorization of residues to obtain valuable products. During its evolution, our industrial economy has hardly moved beyond one major characteristic, established in the early days of industrialization: a linear model of resource consumption. However, this industrial consumption system cannot be maintained for a long time. On the other hand, there are many industries, like the corn milling industry, that, although they do not consume high amounts of non-renewable substrates, produce valuable streams which, treated appropriately, could provide additional economic and environmental benefits through the extraction of commercially interesting renewable products that can replace some of the substances currently obtained by chemical synthesis from non-renewable substrates. From this point of view, the use of streams from the corn milling industry to obtain surface-active compounds will decrease the utilization of non-renewable sources for obtaining this kind of compound, contributing to a circular and global economy. However, the success of the circular economy depends on the interest of the industrial sectors in the revalorization of their streams by developing relevant new business models. Thus, it is necessary to invest in research into new alternatives that reduce the consumption of non-renewable substrates. In this study, the utilization of a corn milling industry stream to obtain an extract with surfactant capacity is proposed. Once the biosurfactant is extracted, the corn milling stream can be commercialized as a nutritional medium in biotechnological processes or as an animal feed supplement. Usually, this stream is combined with other ingredients to obtain a product named corn gluten feed, or it may be sold separately as a liquid protein source for beef and dairy feeding, or as a nutritional pellet binder. Following the production scheme proposed in this work, the corn milling industry will obtain a biosurfactant extract that could be incorporated into its production process, replacing the chemical detergents used at some points of its production chain, or that could be commercialized as a new product of corn manufacture. The biosurfactants obtained from the corn milling industry could replace chemical surfactants in many formulations and uses, and they are an example of the potential that many industrial streams could offer for obtaining valuable products when they are managed properly.

Keywords: biosurfactantes, circular economy, corn, sustainability

Procedia PDF Downloads 246
637 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited considering the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, it requires a flexible production system, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains the setup data, like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner’s estimations were based previously. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was also improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events take place, and the related data are collected.
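
A minimal sketch of such an estimator is given below; the feature names, toy records, and network size are illustrative assumptions, not the partner's actual MES/CAD data or the model reported in the paper.

```python
# Sketch of a setup-period estimator: categorical and numeric parameters are
# encoded and fed to a small feed-forward network. All data are illustrative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline

data = pd.DataFrame({
    "material_type":   ["PTFE", "PEEK", "PTFE", "POM"],
    "machine_type":    ["A", "A", "B", "B"],
    "n_items":         [50, 200, 20, 500],
    "n_tools":         [3, 5, 2, 6],
    "tightest_tol_mm": [0.05, 0.02, 0.10, 0.02],
    "setup_minutes":   [45, 90, 30, 110],          # target
})

pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["material_type", "machine_type"]),
    ("num", StandardScaler(), ["n_items", "n_tools", "tightest_tol_mm"]),
])
model = Pipeline([
    ("pre", pre),
    ("ann", MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)),
])

X, y = data.drop(columns="setup_minutes"), data["setup_minutes"]
model.fit(X, y)
print(model.predict(X.head(1)))   # estimated setup length in minutes
```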

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 233
636 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application

Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough

Abstract:

In the brick industry, smooth double roll crushers are used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. The production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria. This sometimes results in very long delivery times, which handicap the brickyards, in particular in respecting delivery times and honoring the orders placed by customers. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base grey iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, in which return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer System PV 8050 Series (Philips), except for carbon, for which a carbon/sulphur analyser (Elementrac CS-i) was used. The unetched microstructure was used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of the phase constituents. The samples were observed at 100x magnification with a Zeiss Axiovert 40 MAT optical microscope equipped with a digital camera. An SEM equipped with EDS was used to characterize the phases present in the microstructure. The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine for both heat-treated and as-solidified test pieces. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per ASTM E8 standards. Two specimens were tested for each alloy. From each rod, a test piece was made for the tensile test. The results showed that the quenched and tempered alloyed grey cast iron (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu) had the best wear resistance at 400 °C, due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content in the cast iron improved its wear resistance moderately. The combined addition of Cu and Cr increases the hardness and wear resistance of a quenched and tempered hypoeutectic grey cast iron.

Keywords: casting, cast iron, microstructure, heat treating

Procedia PDF Downloads 93
635 Window Opening Behavior in High-Density Housing Development in Subtropical Climate

Authors: Minjung Maing, Sibei Liu

Abstract:

This research discusses the results of a study of window opening behavior in large housing developments in the high-density megacity of Hong Kong. The methods used for the study involved field observations using photo documentation of the four cardinal elevations (north, south, east, and west) of two large housing developments in a very dense urban area of approximately 46,000 persons per square kilometer within the city of Hong Kong. The targeted housing developments (A and B) are large public housing estates of lower income, each with a population of about 13,000. However, the mean income level in development A is about 40% higher than in development B, and home ownership is 60% in development A and 0% in development B. Mapping of the surrounding amenities and the layout of the developments was also studied to understand the activities available to the residents. The photo documentation of the elevations was taken from November 2016 to February 2018 to cover a full spectrum of seasons, both in the morning and in the afternoon. From the photographs, window opening behavior was measured by counting the number of windows opened as a percentage of all the windows on that façade. For each survey date, weather data (temperature, humidity and wind speed) were recorded from weather stations located in the same region. To further understand the behavior, simulation studies of the microclimate conditions of the housing developments were conducted using the software ENVI-met, a simulation tool widely used by researchers studying urban climate. Four major conclusions can be drawn from the data analysis and simulation results. Firstly, there is little change in the amount of window opening during the different seasons within a temperature range of 10 to 35 degrees Celsius. This means that people who tend to open their windows have consistent window opening behavior throughout the year and a high tolerance of indoor thermal conditions. Secondly, for all four elevations, the lower-income development B opened more windows (almost two times more units) than the higher-income development A, meaning that window opening behavior had a strong correlation with income level. Thirdly, there is a lack of correlation between outdoor horizontal wind speed and window opening behavior, as changes in wind speed do not seem to affect the action of opening windows in most conditions. Similar to the low correlation between horizontal wind speed and window opening percentage, it is found that vertical wind speed also cannot explain the window opening behavior of occupants. Fourthly, there is a slightly higher average of window opening on the south elevation than on the north elevation, which may be due to the south elevation being well shaded from the high-angle sun during the summer and allowing heat into the units from the lower-angle sun during the winter season. These findings are important for providing insight into how to better design urban environments and indoor thermal environments for a liveable high-density city.
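
The sketch below illustrates the basic observation metric and its correlation with the station weather readings; the records are invented for illustration, not actual survey data.

```python
import pandas as pd

# Opening percentage = opened windows / all windows on a facade, joined with
# the weather-station readings for the same survey date. Records are toy data.
obs = pd.DataFrame({
    "date":          ["2017-01-10", "2017-01-10", "2017-07-15", "2017-07-15"],
    "development":   ["A", "B", "A", "B"],
    "elevation":     ["south", "south", "south", "south"],
    "windows_open":  [120, 260, 135, 280],
    "windows_total": [800, 800, 800, 800],
    "temp_c":        [14.0, 14.0, 31.0, 31.0],
    "wind_ms":       [3.2, 3.2, 2.1, 2.1],
})
obs["open_pct"] = 100 * obs["windows_open"] / obs["windows_total"]

# Per-development mean opening percentage, and its correlation with weather
print(obs.groupby("development")["open_pct"].mean())
print(obs[["open_pct", "temp_c", "wind_ms"]].corr()["open_pct"])
```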

Keywords: high-density housing, subtropical climate, urban behavior, window opening

Procedia PDF Downloads 113
632 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels

Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi

Abstract:

Introduction: Exercise has contradictory effects on the immune system, and glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients: it serves as a fuel source and as a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, and of training combined with glutamine supplementation, on the levels of cortisol and immunoglobulins in untrained young men. The research shows that physical training can increase cytokines in the athlete’s body; glutamine can counteract the negative effects of resistance training on immune function and the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was carried out for 4 weeks, and glutamine supplementation of 0.3 g/kg/day was applied after each training session. The resistance-training program consisted of eight exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl and triceps press down) performed four times per week. Participants performed 3 sets of 10 repetitions at 60–75% 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol, and immunoglobulin levels (IgA, IgG, IgM) were evaluated pre- and post-test. Results: The results showed that four weeks of resistance training, with and without glutamine, caused a significant increase in body weight and BMI and a significant decrease (P < 0.001) in body fat. VO2max also increased in both the exercise group (P < 0.05) and the exercise-with-glutamine group (P < 0.001); likewise, a significant reduction in IgG (P < 0.05) was observed in both groups. However, no significant differences were observed in the levels of cortisol, IgA, or IgM in any of the groups. No significant change was observed in either parameter in the control group, and no significant difference was observed between the groups. Discussion: The alterations in the hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation cannot neutralize the damaging effects of power exercise on the immune system.

Keywords: glutamine, resistance training, immunoglobulins, cortisol

Procedia PDF Downloads 467
633 Towards a New Spinozistic Democracy: Power and/ or Virtue

Authors: Cetin Balanuye

Abstract:

The present study aims to accomplish two tasks. First, it critically reinterprets the actual relationship between democracy and the modern state in order to show that it is responsible for most of our current political problems and dilemmas. Second, it argues that this relationship can be reimagined for the better, and that Spinozistic notions such as ‘conatus’, ‘power’ and ‘virtue’ are crucial in this pursuit. The significance of the present study lies in several interrelated observations. The world has never been a more heterogeneous place than it is today. People from different religious, cultural and historical backgrounds equally have 'good reasons' to hold that their world views are the best ones. We have almost no authority respected equally by all these different world views. We no longer have the gods we once had in ancient times. We have three big monotheistic religions, yet the God of each is significantly different from the others. Worse, the believers of these religions do not seem eager to perform a duet, but rather tend to fight a duel with each other. Thanks to post-modernism, neither reason nor science is any longer seen as a universally value-neutral guide to be employed in our search for a common ground. In sum, the question 'how should I live?' has never generated this much diversity of answers, and the answers have never been this far from a fairly objective evaluation. Our so-called liberal democracies are supposed to perform against this heterogeneous, antagonistic and self-sustaining web of discursive backgrounds. It is argued that our conception of the 'State' with a weak emphasis on democracy is not a solution, if not itself the source of this disorder. A weak emphasis on democracy should be understood here as a kind of liberal democracy which operates in a partisan State, one which takes sides among rivals either for this or against that world view. This conception of the State rests on a misleading understanding of the concept of power, and it is argued that it can only be corrected by means of a Spinoza-informed ontology of politics. The role of the State in such an ontology is no longer partisanship of any kind, nor is it representative of an all-encompassing authority that favors any world view. The State in this Spinozistic ontology equally encourages world views and their discursive practices to increase their power of acting and to have more power to affect rules and regulations. World views can enhance every medium - in the sense of nonviolence ethology - to increase their power of acting. The more active a world view is, the more powerful and the more virtuous it is in terms of its effective power on the State. Though Spinoza provided only a limited guideline for understanding what kind of democracy he actually had in mind, the ontology developed in his Ethics is rich enough to imagine and inspire a better democratic practice that can help us sustain the modern State in our extremely pluralistic contemporary societies.

Keywords: democracy, Islam, power, Spinoza

Procedia PDF Downloads 197
632 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material

Authors: H. M. Alfrihidi, H.A. Albarakaty

Abstract:

Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam hardening effect provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters made from low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials for low-energy photons is higher than that for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation. This phase-space file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the radiation passing through was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectrum in the phase-space files. Then, the dose distributions resulting from these beams were simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated in terms of the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV with a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% when using the steel and Al filters, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters made from low-Z material decrease the surface dose and increase the D10 dose, allowing high-dose delivery to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than that of the FF.
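For readers unfamiliar with how the quoted PDD metrics are derived, the following is a minimal sketch of the post-processing step: normalising a simulated central-axis depth-dose curve to its maximum and reading off the surface dose and the dose at 10 cm depth (D10). The depth grid and the toy dose curve are placeholders, not the DOSXYZnrc output of this study.

```python
import numpy as np

# Minimal sketch: deriving percentage depth dose (PDD) metrics from a
# simulated central-axis depth-dose curve. Depths and dose values are
# illustrative placeholders only.
depth_cm = np.linspace(0.0, 30.0, 301)                                  # scoring depths in water
dose = np.exp(-0.06 * depth_cm) * (1 - np.exp(-3 * (depth_cm + 0.1)))   # toy build-up + attenuation

pdd = 100.0 * dose / dose.max()                     # normalise to the dose maximum

surface_dose = pdd[np.searchsorted(depth_cm, 0.0)]  # PDD at the phantom surface
d10 = pdd[np.searchsorted(depth_cm, 10.0)]          # PDD at 10 cm depth (D10)

print(f"surface dose: {surface_dose:.1f}%  D10: {d10:.1f}%")
```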

Keywords: flattening filter free, monte carlo, radiotherapy, surface dose

Procedia PDF Downloads 58
631 Fischer Tropsch Synthesis in Compressed Carbon Dioxide with Integrated Recycle

Authors: Kanchan Mondal, Adam Sims, Madhav Soti, Jitendra Gautam, David Carron

Abstract:

Fischer-Tropsch (FT) synthesis is a complex series of heterogeneous reactions between CO and H2 molecules (present in the syngas) on the surface of an active catalyst (Co, Fe, Ru, Ni, etc.) to produce gaseous, liquid, and waxy hydrocarbons. The product is composed of paraffins, olefins, and oxygenated compounds. The key challenge in applying the Fischer-Tropsch process to produce transportation fuels is to make the capital and production costs economically feasible relative to the cost of existing petroleum resources. To meet this challenge, it is imperative to enhance the CO conversion while maximizing carbon selectivity towards the desired liquid hydrocarbon ranges (i.e. reduction in CH4 and CO2 selectivities) at high throughputs. At the same time, it is equally essential to increase catalyst robustness and longevity without sacrificing catalyst activity. This paper focuses on process development to achieve the above. The paper describes the influence of operating parameters on Fischer-Tropsch synthesis (FTS) from coal-derived syngas in supercritical carbon dioxide (ScCO2). In addition, unreacted gas and solvent recycle was incorporated, and the effect of unreacted feed recycle was evaluated. It was expected that, with the recycle, the feed rate could be increased. The increase in conversion and liquid selectivity, accompanied by a narrower carbon number distribution in the product, suggests that higher flow rates can and should be used when incorporating exit gas recycle. It was observed that this process was capable of enhancing the hydrocarbon selectivity (nearly 98% CO conversion), improving the carbon efficiency from 17% to 51% in a once-through process, further converting 16% of the CO2 to liquid with integrated recycle of the product gas stream, and increasing the life of the catalyst. The catalyst robustness enhancement has been attributed to the absorption of the heat of reaction by the compressed CO2, which reduced the formation of hotspots, and to the dissolution of waxes by the CO2 solvent, which reduced the blinding of active sites. In addition, recycling the product gas stream reduced the reactor footprint to one-fourth of the once-through size, and product fractionation utilizing the solvent effects of supercritical CO2 was realized. In addition to the negative CO2 selectivities, methane production was also inhibited and was limited to less than 1.5%. The effect of the process conditions on the life of the catalysts will also be presented. Fe-based catalysts are known to have a high proclivity for producing CO2 during FTS. Data on the product spectrum and selectivity of Co and Fe-Co based catalysts, as well as those obtained from commercial sources, will also be presented. The measurable decision criteria were the increase in CO conversion at an H2:CO ratio of 1:1 (as commonly found in coal gasification product streams) in the supercritical phase compared to the gas phase reaction, the decrease in CO2 and CH4 selectivity, the overall liquid product distribution, and finally an increase in the life of the catalysts.
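As an illustration of the two headline indicators reported above, CO conversion and carbon efficiency, the arithmetic below uses hypothetical molar flows; it is not the authors' mass balance, only a sketch of how the percentages are defined.

```python
# Illustrative mass-balance arithmetic for CO conversion and carbon efficiency
# (carbon ending up in liquid products / carbon fed as CO).
# All molar flows below are hypothetical placeholders.
co_in = 100.0              # mol/h CO fed to the reactor
co_out = 2.0               # mol/h CO leaving unconverted
carbon_to_liquids = 51.0   # mol/h of carbon ending up in liquid hydrocarbons

co_conversion = 100.0 * (co_in - co_out) / co_in
carbon_efficiency = 100.0 * carbon_to_liquids / co_in

print(f"CO conversion: {co_conversion:.0f}%")        # ~98% with these placeholder flows
print(f"carbon efficiency: {carbon_efficiency:.0f}%")
```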

Keywords: carbon efficiency, Fischer Tropsch synthesis, low GHG, pressure tunable fractionation

Procedia PDF Downloads 227
630 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg

Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı

Abstract:

The directional solidification process is generally used for homogeneous compound production, single crystal growth, refining (zone refining), and similar processes. The two most important parameters that control eutectic structures are the temperature gradient and the growth rate, which are called solidification parameters. The solidification behavior and microstructure characteristics are an interesting topic due to their effects on the properties and performance of alloys containing eutectic compositions. The solidification behavior of multicomponent and multiphase systems is an important parameter for determining various properties of these materials. Research has been conducted mostly on the solidification of pure materials or alloys containing two phases. However, there are very few studies in the literature about multiphase reactions and microstructure formation of multicomponent alloys during solidification. Because of this, it is important to study the microstructure formation and the thermodynamic, thermophysical and microstructural properties of these alloys. The production process is difficult due to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With increasing amounts of Mg in Al alloys, the specific weight decreases and the strength shows a slight increase, while ductility decreases due to the formation of the β-Al8Mg5 phase. For this reason, the production, examination and development of high-Mg-containing alloys will initiate the production of new advanced engineering materials. The original value of this research can be described as obtaining high-Mg-containing (> 30% Mg) Al-based multicomponent alloys by melting under vacuum; controlled directional solidification with various growth rates at a constant temperature gradient; and establishing the relationship between solidification rate and microstructural, mechanical, electrical and thermal properties. Therefore, within the scope of this research, several > 30% Mg containing ternary or quaternary Al alloy compositions were determined, and it was planned to investigate the effects of directional solidification rate on the mechanical, electrical and thermal properties of these alloys. Within the scope of the research, the influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity and thermal conductivity of directionally solidified high-Mg-containing Al-32.2Mg-0.37Si, Al-30Mg-12Zn, Al-32Mg-1.7Ni, Al-32.2Mg-0.37Fe, Al-32Mg-1.7Ni-0.4Si and Al-33.3Mg-0.35Si-0.11Fe (wt.%) alloys over a wide range of growth rates (50-2500 µm/s) at a fixed temperature gradient will be investigated. The work is planned as follows: (a) directional solidification of Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si and Al-Mg-Si-Fe alloys over a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system, (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and Scanning Electron Microscopy (SEM), (c) measurement of the microhardness and tensile strength of the directionally solidified alloys, (d) measurement of the electrical conductivity by the four-point probe technique at room temperature, and (e) measurement of the thermal conductivity by the linear heat flow method at room temperature.
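A hedged sketch of the textbook relations behind measurement steps (d) and (e) follows, assuming a cylindrical sample with uniform axial current flow and steady axial heat flow; the dimensions and readings are hypothetical, and the study's actual geometry and correction factors may differ.

```python
import math

# Illustrative relations for (d) four-point electrical conductivity and
# (e) thermal conductivity by linear heat flow, on a cylindrical sample.
# All numbers below are hypothetical placeholders.
diameter_m = 4.0e-3                       # rod diameter
area_m2 = math.pi * (diameter_m / 2) ** 2

# (d) current through the outer contacts, voltage across inner probes spaced L apart:
# resistivity = V * A / (I * L), conductivity = 1 / resistivity
current_A, voltage_V, probe_spacing_m = 0.5, 1.2e-4, 10.0e-3
resistivity = voltage_V * area_m2 / (current_A * probe_spacing_m)
sigma = 1.0 / resistivity                 # S/m

# (e) steady axial heat flow Q over length L with temperature drop dT:
# K = Q * L / (A * dT)
heat_flow_W, length_m, delta_T_K = 0.5, 10.0e-3, 5.0
k_thermal = heat_flow_W * length_m / (area_m2 * delta_T_K)   # W/(m.K)

print(f"electrical conductivity ~ {sigma:.3e} S/m")
print(f"thermal conductivity ~ {k_thermal:.1f} W/(m.K)")
```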

Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity

Procedia PDF Downloads 246
629 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations that are invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls. We also made an attempt to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), (3) intensity (low, typical, alternating, high). Additional information providing clues to the speaker’s emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single piece of previously annotated information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (Support and Confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, Conviction, Interest Factor, Cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules, and we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions. There are, though, strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone calls corpus. The acquired knowledge can be used for prediction, to fill in the emotional profile of a new caller. Furthermore, analysis of the possible context related to a rule may be a clue to the situation a caller is in.
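To make the interestingness measures concrete, the sketch below computes support, confidence, and lift for a rule of the form {sadness} → {anxiety} over a toy set of annotated calls; the five "transactions" are invented placeholders, not the 3306-recording corpus, and lift stands in for the additional measures listed above.

```python
# Minimal sketch of the basic interestingness measures behind rules such as
# {sadness} -> {anxiety}. The toy "transactions" stand in for annotated calls.
calls = [
    {"sadness", "anxiety", "fast_speech"},
    {"sadness", "anxiety"},
    {"anger", "stress"},
    {"sadness", "weariness"},
    {"calm"},
]
n = len(calls)

def support(itemset):
    """Fraction of calls containing every item in the itemset."""
    return sum(itemset <= call for call in calls) / n

antecedent, consequent = {"sadness"}, {"anxiety"}
supp_rule = support(antecedent | consequent)
confidence = supp_rule / support(antecedent)
lift = confidence / support(consequent)     # > 1 suggests a positive association

print(f"support={supp_rule:.2f} confidence={confidence:.2f} lift={lift:.2f}")
```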

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 395
628 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of Deep Learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for situations where the data volume is huge and the spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems. A traditional semantic communication system based on Convolutional Neural Networks (CNN) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better handle the huge data volume and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNN and with image coding methods such as BPG and JPEG, to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
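The wavelet-domain pre-processing described above can be sketched as follows, assuming PyWavelets and SciPy; the Haar wavelet, the 2x upsampling factor, and the random tile are illustrative choices, not the authors' exact configuration.

```python
import numpy as np
import pywt
from scipy import ndimage

# Hedged sketch of the pre-processing step: decompose the image, upsample the
# low-frequency band with bicubic (order=3) and the high-frequency bands with
# bilinear (order=1) interpolation, then invert the wavelet transform.
# The random array stands in for a remote sensing image tile.
img = np.random.rand(256, 256).astype(np.float32)

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")           # low- and high-frequency subbands

scale = 2.0                                         # target upsampling factor
cA_up = ndimage.zoom(cA, scale, order=3)            # bicubic on the approximation band
cH_up, cV_up, cD_up = (ndimage.zoom(c, scale, order=1) for c in (cH, cV, cD))

upsampled = pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), "haar")
print(img.shape, "->", upsampled.shape)             # roughly (512, 512)
```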

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 65
627 Analysis of Sea Waves Characteristics and Assessment of Potential Wave Power in Egyptian Mediterranean Waters

Authors: Ahmed A. El-Gindy, Elham S. El-Nashar, Abdallah Nafaa, Sameh El-Kafrawy

Abstract:

The generation of energy from marine sources has become one of the most preferable options, since it is a clean source and friendly to the environment. Egypt has long shores along the Mediterranean, with important cities that need energy resources, and significant wave energy is available. No detailed studies have been done on the wave energy distribution in Egyptian waters. The objective of this paper is to assess the wave power available in Egyptian waters for the choice of the most suitable devices to be used in this area. This paper deals with the characteristics and power of the offshore waves in Egyptian waters. Since field observations of waves are infrequent and require much technical work, the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis data for the Mediterranean, with a grid size of 0.75 degree, which is a relatively coarse grid, are considered in the present study for a preliminary assessment of sea wave characteristics and power. The data used cover the period from 2012 to 2014. The data used are significant wave height (swh), mean wave period (mwp) and wave direction, taken at six-hourly intervals, at seven chosen stations and at grid points covering the Egyptian waters. The wave power (wp) formula was used to calculate the energy flux. A descriptive statistical analysis, including monthly means and standard deviations of the swh, mwp and wp, was carried out. The percentiles of wave heights and their corresponding power were computed as a tool for choosing the technology best suited to the site. Surfer software was used to show the spatial distributions of wp. The analysis of the data at the seven chosen stations determined the potential wp off important Egyptian cities. Offshore of Al Saloum and Marsa Matruh, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and October (1.49-1.69) ± (1.45-1.74) kW/m. In front of Alexandria and Rashid, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and September (1.29-2.01) ± (1.31-1.83) kW/m. In front of Damietta and Port Said, the highest wp occurred in February (14.29-17.61) ± (21.61-27.10) kW/m and the lowest occurred in June (0.94-0.96) ± (0.71-0.72) kW/m. In winter, the probabilities of waves higher than 0.8 m, in percentage, were, at Al Saloum and Marsa Matruh, (76.56-80.33) ± (11.62-12.05); at Alexandria and Rashid, (73.67-74.79) ± (16.21-18.59); and at Damietta and Port Said, (66.28-68.69) ± (17.88-17.90). In spring, the percentages were, at Al Saloum and Marsa Matruh, (48.17-50.92) ± (5.79-6.56); at Alexandria and Rashid, (39.38-43.59) ± (9.06-9.34); and at Damietta and Port Said, (31.59-33.61) ± (10.72-11.25). In summer, the percentages were, at Al Saloum and Marsa Matruh, (57.70-66.67) ± (4.87-6.83); at Alexandria and Rashid, (59.96-65.13) ± (9.14-9.35); and at Damietta and Port Said, (46.38-49.28) ± (10.89-11.47). In autumn, the percentages were, at Al Saloum and Marsa Matruh, (58.75-59.56) ± (2.55-5.84); at Alexandria and Rashid, (47.78-52.13) ± (3.11-7.08); and at Damietta and Port Said, (41.16-42.52) ± (7.52-8.34).
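The abstract refers to "the wave power (wp) formula"; a common deep-water approximation, P = ρg²H²T/(64π), is sketched below using hypothetical swh and period values. The authors' exact formulation (for example, which period is used) may differ.

```python
import math

# Common deep-water approximation for wave energy flux per metre of wave crest,
# P = rho * g^2 * Hs^2 * T / (64 * pi), applied to one hypothetical six-hourly record.
rho = 1025.0      # sea water density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2

def wave_power_kw_per_m(swh_m, period_s):
    """Wave power in kW per metre of crest length, deep-water approximation."""
    return rho * g**2 * swh_m**2 * period_s / (64.0 * math.pi) / 1000.0

# Example record: swh = 2.1 m, period = 6.5 s (illustrative values only)
print(f"{wave_power_kw_per_m(swh_m=2.1, period_s=6.5):.1f} kW/m")
```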

Keywords: distribution of sea waves energy, Egyptian Mediterranean waters, waves characteristics, waves power

Procedia PDF Downloads 172
626 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes, as a result of the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, the updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth’s surface changes due to natural and anthropic actions, and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution). Based on the 33-class LC approach, particular emphasis was given to Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), BIAS (10 – 21 µg.m-3) and RMSE (20 – 30 µg.m-3), and where higher average ozone concentrations were estimated. Comparing both simulations, small differences, related to the Leaf Area Index and air temperature values, were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes, and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
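A minimal sketch of the validation statistics quoted for the rural stations (correlation, BIAS, RMSE), computed on paired modelled and observed ozone values; the six values below are placeholders, not data from the Portuguese monitoring network.

```python
import numpy as np

# Sketch of the station-level validation metrics: BIAS, RMSE and correlation
# between modelled and observed ozone. Arrays are illustrative placeholders.
observed = np.array([62.0, 75.0, 88.0, 94.0, 81.0, 70.0])   # µg/m3
modelled = np.array([70.0, 83.0, 95.0, 110.0, 92.0, 76.0])  # µg/m3

bias = np.mean(modelled - observed)                  # mean model-minus-observation error
rmse = np.sqrt(np.mean((modelled - observed) ** 2))  # root mean square error
corr = np.corrcoef(modelled, observed)[0, 1]         # Pearson correlation

print(f"BIAS={bias:.1f} µg/m3  RMSE={rmse:.1f} µg/m3  r={corr:.2f}")
```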

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 138
625 Corporate Social Responsibility Practices of Local Large Firms in the Developing Economies: The Case of the East Africa Region

Authors: Lilian Kishimbo

Abstract:

This study aims to examine the Corporate Social Responsibility (CSR) practices of local large firms of the East Africa region. In this study, CSR is defined as all actions that go beyond obeying minimum legal requirements, as espoused by other authors. Despite the increase in CSR literature, empirical evidence clearly demonstrates an imbalance of CSR studies in developing countries. Moreover, it is evident that most of the research on CSR in developing economies emerges from large fast-growing economies or BRICS members (i.e. Brazil, India, China and South Africa), as well as Indonesia and Malaysia, and a further call for more research in Africa is particularly advocated. Taking Africa as an example, there is scant research on CSR practices, and the few available studies are mainly from Nigeria and South Africa, leaving other parts of Africa, for example East Africa, underrepresented. Furthermore, in the face of globalization, experience shows that the literature has focused mostly on multinational companies (MNCs) operating in either North-North or North-South contexts and less on South-South indigenous local firms. Thus, the existing literature in Africa shows more studies of MNCs, and little is known about the CSR of local indigenous firms operating in the South, particularly in the East Africa region. Accordingly, this paper explores the CSR practices of indigenous local large firms of the East Africa region, particularly Kenya and Tanzania, with the aim of testing whether local firms of the East Africa region engage in similar CSR practices as firms in other parts of the world. To answer this question, only listed local large firms were considered, based on the assumption that they are large enough to engage. Newspapers were the main source of data, and the information collected was supplemented by business Annual Reports for the period 2010-2012. The research findings revealed that local firms of East Africa engage in CSR practices. However, there are some differences in the set of activities these firms prefer to engage in compared to findings from previous studies. As such, some CSR activities that were given priority by firms in East Africa were less prioritized in other parts of the world, including Indonesia. This paper will add knowledge to the body of CSR literature and to the experience of CSR practices of South-South indigenous firms, for which there is evidently a relative dearth of literature on CSR. The paper concludes that local firms of the East Africa region engage in similar activities to other firms globally, but give more priority to some activities, such as education- and health-related activities. Finally, the study intends to assist policy makers at the firm level to plan long-lasting CSR-related projects for their stakeholders.

Keywords: Africa, corporate social responsibility, developing countries, indigenous firms, Kenya, Tanzania

Procedia PDF Downloads 396
624 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called the "hot part." The second part is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators), and whose accepted designation is the "cold part." The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on the condition that the input parameters are specified accurately, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a set of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation from the outlet pipe into open space), with the result in the form of the input impedance of the "cold part." The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part." The third part of the technique consists of the mathematical processing of the results according to the proposed formula for the convergence of the mathematical series that sums the multiple reflections of the acoustic signal between the "cold part" and the "hot part." This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring the pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
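The well-known two-sensor separation of incident and reflected waves referred to above can be sketched as a per-frequency 2x2 solve, as below; the sensor spacing, gas sound speed, and complex pressures are assumed illustrative values, not the test-stand data or the authors' exact procedure.

```python
import numpy as np

# Sketch of plane-wave decomposition from two pressure sensors: at each
# frequency, the pressures at axial positions x1 and x2 are modelled as a sum
# of an incident (A) and a reflected (B) wave, and the 2x2 system is solved.
c = 500.0                      # speed of sound in hot exhaust gas, m/s (assumed)
x1, x2 = 0.00, 0.05            # sensor positions along the duct, m (assumed)
freq = 200.0                   # analysis frequency, Hz
k = 2.0 * np.pi * freq / c     # acoustic wavenumber

# complex pressure spectra at the two sensors (placeholders for measured FFT bins)
P1, P2 = 120.0 + 30.0j, 95.0 - 10.0j

M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
              [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
A, B = np.linalg.solve(M, np.array([P1, P2]))   # incident and reflected amplitudes

reflection_coefficient = B / A
print(f"|incident|={abs(A):.1f}  |reflected|={abs(B):.1f}  |R|={abs(reflection_coefficient):.2f}")
```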

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 38
623 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography

Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami

Abstract:

Background and purpose: The resin composite has become the main material for the restoration of caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin, and on a micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method for obtaining cross-sectional images that provide high resolution of biological tissue at the micron scale. The aim of this study was to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching in different regions of teeth using SS-OCT. Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisor teeth and divided into 2 groups (n=10): in the first group (SE), the self-etch adhesive (Clearfil SE Bond) was applied; in the second group (PA), the cavities were treated with phosphoric acid etching before applying the self-etch adhesive. Subsequently, both groups were restored with Estelite Flow Quick Flowable Composite Resin and observed under OCT. Following 5000 thermal cycles, the same section was obtained again for each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor the gap progress. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed using SPSS software. After that, the cavities were sectioned and observed under a Confocal Laser Scanning Microscope (CLSM) to confirm the OCT results. Results: Gaps formed at the bottom of the cavity were longer than the gaps formed at the margin and the dento-enamel junction (DEJ) in both groups. On the other hand, the pre-etching treatment damaged the DEJ regions, creating longer gaps. After 2 months, the results showed significant progression of the gap length at the bottom regions in both groups. In conclusion, the phosphoric acid etching treatment did not reduce the gap length in most regions of the cavity. Significance: The bottom region of the tooth was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.

Keywords: optical coherence tomography, self-etch adhesives, bottom, dento-enamel junction

Procedia PDF Downloads 206
622 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey

Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur

Abstract:

Interest in lead (Pb) has increased considerably in recent years due to knowledge about the potential toxic effects of this element. Exposure to heavy metals above the acceptable limit affects human health. Indeed, Pb accumulates through food chains up to toxic concentrations; therefore, it can pose a potential threat to human health. A sensitive and reliable method for the determination of Pb in red pepper was developed in the present study. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, Limit of Detection, Limit of Quantification, recovery, and trueness) were demonstrated. Recovery values close to 100% showed adequate precision and accuracy for the analysis. According to the results of the red pepper analysis, lead was determined at various concentrations in all of the tested samples. A Perkin-Elmer ELAN DRC-e model ICP-MS system was used for the detection of Pb. Organic red pepper was used to obtain a matrix for all method validation studies. The certified reference material, Fapas chili powder, was digested and analyzed together with the different sample batches. Three replicates from each sample were digested and analyzed. The exposure levels of the element were discussed considering the scientific opinions of the European Food Safety Authority (EFSA), which is the European Union’s (EU) risk assessment source associated with food safety. The Target Hazard Quotient (THQ) was described by the United States Environmental Protection Agency (USEPA) for the calculation of potential health risks associated with long-term exposure to chemical pollutants. The THQ value incorporates the intake of elements, exposure frequency and duration, body weight and the oral reference dose (RfD). If the THQ value is lower than one, the exposed population is assumed to be safe, while 1 < THQ < 5 means that the exposed population is in a level-of-concern interval. In this study, the THQ of Pb was obtained as < 1. The results of the THQ calculations showed that the values were below one for all the tested samples, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University, Project Number: FBA-2017-2494.
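For reference, one commonly used form of the USEPA Target Hazard Quotient described above is THQ = (EF × ED × FIR × C) / (RfD × BW × AT) × 10⁻³; the sketch below evaluates it with illustrative inputs, where the ingestion rate, Pb concentration, and RfD are assumptions rather than the study's measured values.

```python
# One commonly used form of the USEPA Target Hazard Quotient for dietary metal
# exposure: THQ = (EF * ED * FIR * C) / (RfD * BW * AT) * 1e-3, where 1e-3
# converts the ingestion rate from g to kg. All inputs are illustrative.
EF = 365.0        # exposure frequency, days/year
ED = 70.0         # exposure duration, years
FIR = 5.0         # red pepper ingestion rate, g/day (assumed)
C = 0.10          # Pb concentration in red pepper, mg/kg (assumed)
RfD = 0.0035      # oral reference dose for Pb, mg/kg/day (assumed)
BW = 70.0         # body weight, kg
AT = 365.0 * ED   # averaging time, days

THQ = (EF * ED * FIR * C) / (RfD * BW * AT) * 1e-3
print(f"THQ = {THQ:.3f}  ({'no concern' if THQ < 1 else 'level of concern'})")
```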

Keywords: lead analyses, red pepper, risk assessment, daily exposure

Procedia PDF Downloads 154