Search results for: ATP utilization
171 Recycling Service Strategy by Considering Demand-Supply Interaction
Authors: Hui-Chieh Li
Abstract:
Circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which benefits both the environment and the economy. The concept contrasts with a linear economy, which follows a ‘take, make, dispose’ model of production. A well-designed reverse logistics service strategy can increase users’ willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings manufacturers the greatest advantage when it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. This study considers demand-supply interaction, time-dependent recycling demand, and the time-dependent surplus value of recycled products, and constructs models of the recycling service strategy for the recyclable waste collector. A crucial factor in optimizing a recycling service strategy is consumer demand. The study considers the relationships between consumer demand for recycling and product characteristics, surplus value, and user behavior. The proposed recycling service strategy differs significantly from the conventional uniform service strategy: periods with considerable demand and large surplus product value call for frequent, short service cycles. The study explores how to determine a recycling service strategy for the recyclable waste collector in terms of service cycle frequency and duration and the vehicle type for each service cycle, considering the surplus value of recycled products, time-dependent demand, transportation economies, and demand-supply interaction. The recyclable waste collector is responsible for collecting waste products for the manufacturer. The study also examines the impact of utilization rate on cost and profit for different vehicle sizes.
This study applies the binary logit model, an analytical model, and mathematical programming methods to the problem. The model explores how to determine a recycling service strategy for the recycler by considering product surplus value, time-dependent recycling demand, transportation economies, and demand-supply interaction, and attempts to minimize the total logistics cost of the recycler while maximizing the recycling benefits of the manufacturer during the study period. The study relaxes the constant-demand assumption and examines how the service strategy affects consumer demand for waste recycling. The results not only help in understanding how user demand for the recycling service and product surplus value affect the logistics cost and the manufacturer’s benefits, but also provide the government with guidance on measures such as award bonuses and carbon emission regulations.
Keywords: circular economy, consumer demand, product surplus value, recycle service strategy
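The binary logit demand component of such a model can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical linear utility in surplus value and service frequency with made-up coefficients (`b0`, `b1`, `b2`); the study itself would estimate such coefficients from user data:

```python
import math

def logit_prob(utility):
    """Binary logit: probability that a user chooses the recycling
    service, given the deterministic utility of that choice."""
    return 1.0 / (1.0 + math.exp(-utility))

def service_utility(surplus_value, service_frequency,
                    b0=-1.0, b1=0.8, b2=0.5):
    """Hypothetical linear utility: higher surplus value and more
    frequent service cycles make recycling more attractive."""
    return b0 + b1 * surplus_value + b2 * service_frequency

# Demand response under two illustrative service strategies
p_low = logit_prob(service_utility(surplus_value=0.2, service_frequency=1))
p_high = logit_prob(service_utility(surplus_value=1.0, service_frequency=4))
```

Under these illustrative coefficients the choice probability rises with surplus value and service frequency, which is exactly the demand-supply interaction the optimization exploits: the strategy chosen by the collector feeds back into the demand it faces.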
Procedia PDF Downloads 392
170 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran
Authors: Fatemeh Faramarzi, Hosein Mahjoob
Abstract:
Building dams on rivers to utilize water resources disturbs the hydrodynamic equilibrium and causes all or part of the sediments carried by the water to settle in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their useful lifetime, but mathematical and computational models are now widely used as a suitable tool in reservoir sedimentation studies. These models usually solve the governing equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the sedimentation amount in the Dez Dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was calibrated against a 47-year period and used to forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m) and irrigates more than 125,000 hectares of downstream land, playing a major role in flood control in the region. The input data, including geometry, hydraulic, and sedimentary records, cover 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years.
Finally, the result obtained in the delta region was very satisfactory: the output from GSTARS4 was almost identical to the 2003 hydrographic profile. However, because the Dez reservoir is long (65 km) and large, vertical currents dominate, making the calculations by the above-mentioned method inaccurate there. To solve this problem, we used the empirical reduction method to calculate sedimentation in the downstream area, which yielded very good results. Thus, we demonstrated that combining these two methods gives a very suitable model of sedimentation in the Dez Dam for the study period, and that the outputs of both methods agree.
Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6
Procedia PDF Downloads 313
169 The Examination of Prospective ICT Teachers’ Attitudes towards Application of Computer Assisted Instruction
Authors: Agâh Tuğrul Korucu, Ismail Fatih Yavuzaslan, Lale Toraman
Abstract:
Nowadays, thanks to the development of technology, the integration of technology into teaching and learning activities is spreading. Increasing technological literacy, one of the expected competencies of individuals in the 21st century, is associated with the effective use of technology in education. The most important factor in the effective use of technology in educational institutions is ICT teachers. Computer-assisted instruction (CAI) refers to the use of information and communication technology as a tool that aids teachers in making education more efficient and improving its quality. In CAI, teachers can use computers in different places and at different times depending on the available hardware and software and the characteristics of the subject and students. Analyzing teachers’ use of computers in education is significant because teachers manage the course and are the most important element in students’ comprehension of the topic. Efficient computer-assisted instruction is possible only when teachers hold positive attitudes toward it. Determining the knowledge, attitudes, and behavior of teachers who receive their professional training in faculties of education, and eliminating any deficiencies, is crucial while they are still at the faculty. Therefore, the aim of this paper is to identify prospective ICT teachers’ attitudes toward computer-assisted instruction in terms of different variables. The research group consists of 200 prospective ICT teachers studying in the CEIT department of Ahmet Keleşoğlu Faculty of Education, Necmettin Erbakan University. The data collection tools are a “personal information form” developed by the researchers to collect demographic data and “the attitude scale related to computer-assisted instruction”. The scale consists of 20 items, 10 of which are positively worded and 10 negatively worded.
The Kaiser-Meyer-Olkin (KMO) coefficient of the scale is 0.88, and the Bartlett test significance value is 0.000. The Cronbach’s alpha reliability coefficient of the scale is 0.93. The collected data were analyzed with a computer-based statistical software package using descriptive statistics, t-tests, and analysis of variance. The attitudes of prospective teachers toward computers do not differ according to their educational branches. On the other hand, prospective teachers who own computers show more positive attitudes toward computer-supported education than those who do not, and whether students had previously received computer lessons in their departments does not affect this much. The overall result is that computer experience positively affects attitude scores regarding computer-supported education.
Keywords: computer based instruction, teacher candidate, attitude, technology based instruction, information and communication technologies
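As a concrete illustration of the reliability figure reported above, Cronbach’s alpha can be computed directly from per-item scores. A minimal sketch with made-up respondent data (the actual scale has 20 items and 200 respondents):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one list per scale item, each holding one score per
    respondent (all lists the same length)."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Three highly consistent items -> alpha close to 1
scores = [[1, 2, 3, 4, 5],
          [1, 2, 3, 4, 5],
          [2, 2, 3, 4, 5]]
alpha = cronbach_alpha(scores)
```

The formula follows the standard definition: alpha rises toward 1 as the items covary strongly relative to their individual variances, which is what a value of 0.93 on the attitude scale indicates.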
Procedia PDF Downloads 294
168 Recovery of Food Waste: Production of Dog Food
Authors: K. Nazan Turhan, Tuğçe Ersan
Abstract:
The population of the world is approximately 8 billion and increasing uncontrollably, leading to an increase in consumption. This situation causes crucial problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption, along the entire food supply chain from primary production to the end household consumer. In addition, FAO estimates that one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger; for instance, excessive amounts of food waste generate greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels due to the expected scarcity of resources for the growing world population. There are several ways to recover food waste. According to the United States Environmental Protection Agency’s Food Recovery Hierarchy, the recovery options, from most to least preferred, are source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration. Bioethanol, biodiesel, biogas, agricultural fertilizer, and animal feed can be obtained from the waste generated by different food industries. In this project, feeding animals was selected as the recovery method, and food waste from a single plant was used to ensure ingredient uniformity. Grasshoppers were used as a protein source. In other words, the project developed a dog food product by recovering the plant’s food waste through the following steps: the collected food waste and purchased grasshoppers were sterilized, dried, and pulverized, then mixed with 60 g of agar-agar solution (4% w/v).
Three different aromas were added separately to the samples to enhance flavor. Fulfilling all nutritional needs is a challenge because requirements differ across dogs: there is a wide range of needs for carbohydrates, protein, fat, sodium, calcium, and so on, and requirements also vary with age, gender, weight, height, and species. Therefore, the developed product contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated for contamination and nutritional content. For contamination risk, E. coli and Salmonella detection experiments were performed, and the results were negative. For nutritional value, protein content analysis was done: the protein contents of the samples vary between 26.07% and 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of the samples ranged between 0.2456 and 0.4145.
Keywords: food waste, dog food, animal nutrition, food waste recovery
Procedia PDF Downloads 63
167 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction
Authors: Sri Sai Ramya Bojedla, Falguni Pati
Abstract:
Nature provides the best solutions to humans, and one incredible gift to regenerative medicine is silk. The literature reflects a long appreciation of silk owing to its remarkable physical and biological assets: its bioactive nature, unique mechanical strength, and processing flexibility invite further exploration of its clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are produced for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing the silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in most parts of the body is concerned chiefly with restoring function, but in facial reconstruction, satisfying both aesthetics and function is of utmost importance, as the face plays a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the considerable variation in each patient’s bone anatomy; additionally, the anatomical shape and size vary with the type of defect. The advent of additive manufacturing (AM), or 3D printing, in bone tissue engineering has helped overcome many of the restraints of conventional fabrication techniques. The patient’s CT data are converted into a stereolithography (STL) file, which the 3D printer uses to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing that reconstructs the complex form, function, and aesthetics of the facial anatomy.
These composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed uniform dispersion of the silk fibroin microfibers in the PCL matrix. The addition of silk improves the compressive strength of the hybrid scaffolds, and the scaffolds with Antheraea mylitta silk revealed a higher compressive modulus than those with Bombyx mori silk. These results strongly recommend PCL-silk scaffolds for bone regenerative applications. Successful completion of this research will provide a great weapon in the maxillofacial reconstructive armamentarium.
Keywords: compressive modulus, 3d printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers
Procedia PDF Downloads 197
166 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry
Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay
Abstract:
The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other. Universities and research centers are accepted as having a critical role in these systems in the creation and development of innovations; however, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship in a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry that combines high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products, and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. Our approach makes three main contributions. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas, and other studies note the importance of research center and university locations in the industry’s regional development; these studies, however, are mostly limited to patent counts over short periods or limited survey results. Therefore, the first contribution of our approach is a comprehensive analysis of the state and recent history of photonics and optics research in the US.
For this purpose, both the research centers specializing in optics and photonics and the related research groups in various departments (e.g., Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution of the paper is the analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify optics and photonics companies in the US. The profiles and activities of these companies are then gathered by extracting and integrating related data from the National Establishment Time Series (NETS) database, the ES-202 database, and data sets from the regional photonics clusters; the number of start-ups, their employee counts, and their sales are examples of the extracted data. Our third contribution is the use of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. This analysis takes the regional and periodic conditions of the overall market into consideration while discovering and quantifying the statistical correlations.
Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers
Procedia PDF Downloads 406
165 Thermo-Economic Evaluation of Sustainable Biogas Upgrading via Solid-Oxide Electrolysis
Authors: Ligang Wang, Theodoros Damartzis, Stefan Diethelm, Jan Van Herle, François Marechal
Abstract:
Biogas production from the anaerobic digestion of organic sludge from wastewater treatment, as well as various urban and agricultural organic wastes, is of great significance for achieving a sustainable society. Two upgrading approaches for cleaned biogas can be considered: (1) direct H₂ injection for catalytic CO₂ methanation and (2) CO₂ separation from the biogas. The first approach usually employs electrolysis technologies to generate hydrogen and increases the biogas production rate, while the second usually applies commercially available, highly selective membrane technologies to efficiently extract CO₂ from the biogas, with the CO₂ then compressed and stored for further use. A straightforward way of utilizing the captured CO₂ is on-site catalytic CO₂ methanation. From the perspective of system complexity, the second approach may be questioned, since it introduces an additional expensive membrane component to produce the same amount of methane. However, if the sustainability of the produced biogas is to be retained after upgrading, renewable electricity should be supplied to drive the electrolyzer; considering the intermittent nature and seasonal variation of renewable electricity supply, the second approach offers high operational flexibility. This indicates that the two approaches should be compared on the basis of the availability and scale of the local renewable power supply, not only the technical systems themselves. Solid-oxide electrolysis generally offers high overall system efficiency and, more importantly, can achieve simultaneous electrolysis of CO₂ and H₂O (namely, co-electrolysis), which may bring significant benefits in the case of CO₂ separation from the produced biogas.
When co-electrolysis is taken into account, two additional upgrading approaches can be proposed: (1) direct steam injection into the biogas, with the mixture passing through the SOE, and (2) CO₂ separation from the biogas for later use in co-electrolysis. A case study of integrating an SOE into a wastewater treatment plant is investigated, with wind power as the renewable source. The dynamic production of biogas is provided on an hourly basis together with the corresponding oxygen and heating requirements. All four approaches mentioned above are investigated and compared thermo-economically: (a) steam electrolysis with grid power, as the base case for steam electrolysis; (b) CO₂ separation and co-electrolysis with grid power, as the base case for co-electrolysis; (c) steam electrolysis and CO₂ separation (and storage) with wind power; and (d) co-electrolysis and CO₂ separation (and storage) with wind power. The influence of the scale of the wind power supply is investigated by a sensitivity analysis. The results provide a general understanding of the economic competitiveness of SOE for sustainable biogas upgrading, assisting decision making for biogas production sites. The research leading to the presented work is funded by the European Union’s Horizon 2020 under grant agreement n° 699892 (ECo, topic H2020-JTI-FCH-2015-1) and SCCER BIOSWEET.
Keywords: biogas upgrading, solid-oxide electrolyzer, co-electrolysis, CO₂ utilization, energy storage
Procedia PDF Downloads 155
164 Development and Implementation of Early Childhood Media Literacy Education Program
Authors: Kim Haekyoung, Au Yunkyoung
Abstract:
As digital technology continues to advance and become more widely accessible, young children are growing up experiencing various media from infancy. In this changing environment, educating young children in media literacy has become an increasingly important task. With the diversification of media, it has become more necessary for children to understand, utilize, and critically explore the meaning of multimodal texts, which interconnect text, images, and sounds. Early childhood is a period when media literacy can bloom, and educational and policy support are needed to enable young children to express their opinions, communicate, and participate fully. However, most current media literacy education for young children focuses solely on teaching how to use media, with limited practical application. Therefore, this study aims to develop an inquiry-based media literacy education program for young children using topic-specific media content and to explore the program's potential and its impact on children's media literacy learning. Based on a theoretical and literature review of media literacy education, an analysis of existing educational programs, and a survey of the current status of, and teacher perceptions about, media literacy education for young children, this study developed a program considering the components of media literacy (understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, social communication). To verify the program's effectiveness, it was implemented with 20 five-year-old children from S Kindergarten in C City, once a week for a total of 6 sessions from March 24 to May 26, 2022. To explore quantitative changes before and after implementation, repeated-measures analysis of variance was conducted, and qualitative analysis was used to examine changes observed during the process.
The results showed significant improvement in media literacy levels across understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication. The developed inquiry-based program can therefore be applied effectively to enhance children's media literacy education and improve their media literacy levels. Changes observed during the process also confirmed that the children improved their ability to learn about various topics, express their thoughts, and communicate with others using media content. These findings emphasize the importance of developing and implementing media literacy education programs and suggest that, beyond instruction in how to use media, such programs can help children develop the ability to use media safely and effectively in their media environment. Furthermore, improving children's media literacy and creating a safe media environment will require a variety of content and methodologies, along with continuous development and evaluation of educational programs.
Keywords: young children, media literacy, media literacy education program, media content
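The pre/post comparison behind the repeated-measures analysis can be illustrated with a paired t statistic, the simplest repeated-measures contrast. A sketch with invented pre- and post-program scores for five children (not the study's data):

```python
from statistics import mean, stdev

def paired_t_statistic(pre, post):
    """t statistic for a pre/post (repeated-measures) comparison;
    a positive value means scores increased after the program."""
    diffs = [after - before for before, after in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Invented media literacy scores, before and after the program
pre = [2, 3, 2, 4, 3]
post = [4, 5, 3, 5, 5]
t = paired_t_statistic(pre, post)
```

A full repeated-measures ANOVA generalizes this to more than two measurement occasions, but the paired contrast captures the core idea: each child serves as their own control.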
Procedia PDF Downloads 71
163 Eco-Nanofiltration Membranes: Nanofiltration Membrane Technology Utilization-Based Fiber Pineapple Leaves Waste as Solutions for Industrial Rubber Liquid Waste Processing and Fertilizer Crisis in Indonesia
Authors: Andi Setiawan, Annisa Ulfah Pristya
Abstract:
Indonesia's rubber plantation area has reached 2.9 million hectares, with productivity reaching 1.38 million. High rubber productivity is directly proportional to the amount of waste produced by the rubber processing industry, and this industry pollutes the environment when its waste is not treated optimally. Rubber industrial wastewater contains high levels of nitrogen compounds (nitrate and ammonia) and phosphate compounds, which cause water pollution and odor problems due to the high ammonia content. On the other hand, demand for NPK fertilizers in Indonesia continues to increase from year to year and requires ammonia and phosphate as raw materials. Domestic demand requires 400,000 tons of ammonia per year, and Indonesia imports 200,000 tons of ammonia per year valued at IDR 4.2 trillion. Likewise, a shortage of phosphoric acid forces imports of 225,000 tons per year from Jordan, Morocco, South Africa, the Philippines, and India. At present, rubber wastewater treatment is generally done by holding the waste in a tank, letting it settle, filtering it, and releasing the remainder into the environment. This method is inefficient and incurs high energy costs because it passes through many stages before producing clean water that can be discharged into the river. On the other hand, Indonesia has abundant pineapple, which can be harvested throughout the year across the country; in 2010, pineapple production in Indonesia reached 1,406,445 tons, or about 9.36 percent of total fruit production. Increased productivity is directly proportional to the amount of pineapple leaf waste, which is continuously generated and usually just dumped on the ground or disposed of with other waste at the final disposal site. Eco-nanofiltration membranes based on pineapple leaf fiber waste offer a way to solve these environmental problems efficiently.
Nanofiltration is a pressure-driven process in which each molecule is transported by convection or diffusion. Nanofiltration membranes can filter water at the nanometer scale, separating from the processed waste residues of economic value with higher N and P content, which can serve as raw material for manufacturing NPK fertilizer and so help overcome the fertilizer crisis in Indonesia. The raw material used to manufacture the eco-nanofiltration membrane is cellulose from pineapple fiber, processed into cellulose acetate, which is biodegradable and requires membrane replacement only every 6 months. The expected outcome is a green eco-technology: nanofiltration membranes not only treat rubber industry waste effectively, efficiently, and in an environmentally friendly manner but also lower the cost of waste treatment compared to conventional methods.
Keywords: biodegradable, cellulose diacetate, fertilizers, pineapple, rubber
Procedia PDF Downloads 446
162 Metabolomics Fingerprinting Analysis of Melastoma malabathricum L. Leaf of Geographical Variation Using HPLC-DAD Combined with Chemometric Tools
Authors: Dian Mayasari, Yosi Bayu Murti, Sylvia Utami Tunjung Pratiwi, Sudarsono
Abstract:
Melastoma malabathricum L. is an Indo-Pacific herb that has traditionally been used to treat several ailments such as wounds, dysentery, diarrhea, toothache, and diabetes. The plant is common across tropical Indo-Pacific archipelagos and tolerates a range of soils, from low-lying areas subject to saltwater inundation to the salt-free conditions of mountain slopes. How soil and environmental variation influence secondary metabolite production in the herb, and what this means for the plant's utility as traditional medicine, remain largely unknown and unexplored. The objective of this study is to evaluate the variability of the metabolic profiles of M. malabathricum L. across its geographic distribution. Using high-performance liquid chromatography with diode array detection (HPLC-DAD), a well-established, simple, sensitive, and reliable method was applied to establish the chemical fingerprints of 72 samples of M. malabathricum L. leaves from various geographical locations in Indonesia. Specimens collected from six terrestrial and archipelagic regions of Indonesia were analyzed by HPLC to generate chromatogram peak profiles that could be compared across regions. Data corresponding to the common peak areas of the HPLC chromatographic fingerprints were analyzed by hierarchical cluster analysis (HCA) and principal component analysis (PCA) to extract the most significant variables contributing to the characterization and classification of the samples. The first two principal components, PC1 and PC2, accounted for 41.14% and 19.32% of the variance, respectively. The validated HPLC chemical fingerprints, organized by variety and origin, were used to screen the in vitro antioxidant activity of M. malabathricum L. The results show that the developed method has potential value for quality assessment of similar M. malabathricum L. samples.
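The PC1/PC2 variance shares reported above come from an eigen-decomposition of the covariance of the peak-area matrix. A minimal two-variable sketch with toy data (real fingerprints have many common peak areas per sample; this only shows the mechanics):

```python
import math

def explained_variance_2d(points):
    """PCA for 2-D data (e.g. two chromatogram peak areas per sample):
    returns the fractions of total variance carried by PC1 and PC2,
    from the eigenvalues of the 2x2 sample covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc  # l1 >= l2
    total = l1 + l2
    return l1 / total, l2 / total

# Perfectly collinear toy samples: PC1 captures all the variance
pc1, pc2 = explained_variance_2d([(1, 2), (2, 4), (3, 6), (4, 8)])
```

With 72 real samples and many peaks, the same decomposition (on the full covariance matrix) yields the reported 41.14% and 19.32% shares for PC1 and PC2.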
These findings provide a pathway for developing reference materials for the identification of M. malabathricum L. Our results indicate the importance of considering geographic distribution during field-collection efforts, as they demonstrate regional variation in the secondary metabolites of M. malabathricum L., as illustrated by the HPLC chromatogram peaks and the corresponding antioxidant activities. The results also confirm the utility of this simple approach for rapid evaluation of metabolic variation between plants and their potential ethnobotanical properties, possibly reflecting the environments from which they were collected. This information will facilitate the optimization of growth conditions to suit particular medicinal qualities.
Keywords: fingerprint, high performance liquid chromatography, Melastoma malabathricum l., metabolic profiles, principal component analysis
Procedia PDF Downloads 160
161 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm has emerged as a powerful tool for optimum utilization of resources and for gaining competitiveness through cost reduction and greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that with the right middleware, a cloud computing system can execute all the programs a normal computer could run: potentially, everything from the simplest generic word-processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments; they also support interactive user-facing applications such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they don’t have it available on site; cloud computing gives these companies the option of storing data on someone else’s hardware, removing the need for physical space on the front end. Prominent service providers such as Amazon, Google, Sun, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services may include email, office applications, finance, video, audio and data processing. 
By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system can deliver a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors and more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain competitive advantage.
Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 312
160 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools
Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami
Abstract:
The integrated development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector-coupling strategies for solar power deployment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar generation and inject it into the urban network during peak periods. The simulations and analyses were performed in EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximize the utilization of solar PV in an urban distribution feeder. 
Additionally, 3D models were built in Revit; such models are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design
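The network sizing described in the abstract reduces to simple aggregate arithmetic. The per-point figures below are taken from the abstract; the totals are merely their product, not an output of the study's simulations:

```python
# Aggregate capacity implied by the described urban feeder
# (illustrative arithmetic only; per-point figures from the abstract).
n_points = 38          # urban load connection points
pv_per_point_kw = 500  # solar PV capacity per connection point
bes_per_point_kwh = 1  # battery energy storage per connection point

total_pv_mw = n_points * pv_per_point_kw / 1000
total_bes_kwh = n_points * bes_per_point_kwh
print(f"Total PV: {total_pv_mw} MW, total BES: {total_bes_kwh} kWh")
```

This gives 19 MW of distributed PV against 38 kWh of storage across the feeder, which makes clear that the BES sized here can only shift a very small fraction of peak solar generation.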
Procedia PDF Downloads 76
159 Reinforcing The Nagoya Protocol through a Coherent Global Intellectual Property Framework: Effective Protection for Traditional Knowledge Associated with Genetic Resources in Biodiverse African States
Authors: Oluwatobiloba Moody
Abstract:
On October 12, 2014, the Nagoya Protocol, negotiated by Parties to the Convention on Biological Diversity (CBD), entered into force. The Protocol was negotiated to implement the third objective of the CBD, which concerns the fair and equitable sharing of benefits arising from the utilization of genetic resources (GRs). The Protocol aims to ‘protect’ GRs and traditional knowledge (TK) associated with GRs from ‘biopiracy’ through the establishment of a binding international regime on access and benefit sharing (ABS). In reflecting on the question of ‘effectiveness’ in the Protocol’s implementation, this paper argues that the underlying problem of ‘biopiracy’, which the Protocol seeks to address, goes beyond the ABS regime: it thrives on indispensable factors emanating from the global intellectual property (IP) regime. It contends that biopiracy therefore constitutes an international problem of ‘borders’ as much as of ‘regimes’ and that, while the implementation of the Protocol may effectively address the ‘trans-border’ issues which have hitherto troubled African provider countries in their establishment of regulatory mechanisms, it remains unable to address the ‘trans-regime’ issues related to the eradication of biopiracy, especially those involving the IP regime. This is due to the glaring incoherence between the Nagoya Protocol’s implementation and the existing global IP system. In arriving at conclusions, the paper examines the ongoing related discussions within the IP regime, specifically those within the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) and the WTO TRIPS Council. It concludes that the Protocol’s effectiveness in protecting TK associated with GRs is conditional on the attainment of outcomes, within the ongoing negotiations of the IP regime, that can be implemented coherently with the Nagoya Protocol. 
It proposes specific ways to achieve this coherence. Three main methodological steps have been incorporated in the paper’s development. First, a review of data accumulated over a two-year period arising from the coordination of six important negotiating sessions of the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore. In this respect, the research benefits from reflections on the political, institutional and substantive nuances which have coloured the IP negotiations and which provide both the context and subtext to emerging texts. Second, a desktop review of the history, nature and significance of the Nagoya Protocol, using relevant primary and secondary literature from international and national sources. Third, a comparative analysis of selected biopiracy cases, undertaken to establish the inseparability of the IP regime and the ABS regime in the conceptualization and development of solutions to biopiracy. A comparative analysis of selected African regulatory mechanisms for the protection of TK (those of Kenya, South Africa and Ethiopia, and the ARIPO Swakopmund Protocol) is also undertaken.
Keywords: biopiracy, intellectual property, Nagoya protocol, traditional knowledge
Procedia PDF Downloads 429
158 Effect of Different By-Products on Growth Performance, Carcass Characteristics and Serum Parameters of Growing Simmental Crossbred Cattle
Authors: Fei Wang, Jie Meng, Qingxiang Meng
Abstract:
China is rich in straw and by-product resources, whose efficient utilization has long been a topic of interest. The objective of this study was to investigate the effect of feeding soybean straw and wine distiller’s grain as a replacement for corn stover on the performance of beef cattle. Sixty Simmental × local crossbred bulls averaging 12 months of age and 335.7 ± 39.1 kg of body weight (BW) were randomly assigned to four groups (15 animals per group) and allocated to a diet with 40% maize stover (MSD), a diet with 40% wrapped-package maize silage (PMSD), a diet with 12% soybean straw plus 28% maize stover (SSD), or a diet with 12% wine distiller’s grain plus 28% maize stover (WDD). Bulls were fed ad libitum a TMR consisting of 36.0% maize, 12.5% DDGS, 5.0% cottonseed meal, 4.0% soybean meal and 40.0% of the by-product as described above. The treatment period lasted 22 weeks, including 1 week of dietary adaptation. The results showed that dry matter intake (DMI) was significantly higher (P < 0.01) for the PMSD group than for the MSD and SSD groups during weeks 0-7 and 8-14, and the PMSD and WDD groups had higher (P < 0.05) DMI values than the MSD and SSD groups over the whole period. Average daily gain (ADG) values were 1.56, 1.72, 1.68 and 1.58 kg for the MSD, PMSD, SSD and WDD groups respectively, although the differences were not significant (P > 0.05). The blood sugar concentration was significantly higher (P < 0.01) for the MSD group than for the WDD group, and the blood urea nitrogen concentration of the SSD group was lower (P < 0.05) than those of the MSD and WDD groups. No significant difference (P > 0.05) in serum total cholesterol, triglycerides or total protein content was observed among the groups. Ten bulls with similar body weight were selected at the end of the feeding trial and slaughtered for measurement of slaughter performance, carcass quality and meat chemical composition. The SSD group had significantly lower (P < 0.05) shear force values and cooking loss than the MSD and PMSD groups. 
The pH values of the MSD and SSD groups were lower (P < 0.05) than those of the PMSD and WDD groups. The WDD group had a higher fat color brightness (L*) value than the PMSD and SSD groups. There were no significant differences in dressing percentage, meat percentage, top-grade meat weight, ribeye area, marbling score, meat color or meat chemical composition among the dietary treatments. Based on these results, the wrapped-package maize stover silage showed potential for improving the average daily gain and feed intake of beef cattle. Soybean straw had a significant effect on improving the tenderness and reducing the cooking loss of beef. In general, soybean straw and wrapped-package maize stover silage would be beneficial to nitrogen deposition and showed potential to substitute for maize stover in beef cattle diets.
Keywords: beef cattle, by-products, carcass quality, growth performance
Procedia PDF Downloads 517
157 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition over the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel and glass, refineries, and heavy-duty transport. In this regard, Italy, within the framework of the decarbonization plans of the European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options for the development pathway of the future Italian energy system so as to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the Energy System Optimization Model TEMOA-Italy is used; it is based on the open-source platform TEMOA and was developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain. 
These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing that its exploitation is necessary to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission’s open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
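The scenario logic sketched above, meeting demand at minimum cost subject to an emissions constraint, can be caricatured in a few lines. The technologies, costs, emission factors and cap below are invented placeholders; TEMOA-Italy itself is a full technology-rich optimization framework, not this toy one-variable search:

```python
# Toy caricature of one energy-system optimization step: serve a fixed
# demand at minimum cost subject to a CO2 cap. All numbers are invented
# placeholders (EUR/MWh costs, tCO2/MWh emission factors, Mt CO2 cap).
demand_twh = 100.0
cost = {"gas": 50.0, "h2": 90.0}   # EUR per MWh (assumed)
co2 = {"gas": 0.37, "h2": 0.0}     # tCO2 per MWh (assumed)
cap_mt = 20.0                      # emissions cap, Mt CO2

best = None
for s in range(101):               # hydrogen share of demand, 0%..100%
    share = s / 100
    gas_twh, h2_twh = demand_twh * (1 - share), demand_twh * share
    emissions_mt = gas_twh * co2["gas"]   # 1 TWh * (tCO2/MWh) = 1 Mt
    cost_beur = (gas_twh * cost["gas"] + h2_twh * cost["h2"]) / 1e3
    if emissions_mt <= cap_mt and (best is None or cost_beur < best[1]):
        best = (share, cost_beur)

share_opt, cost_opt = best
print(f"optimal hydrogen share: {share_opt:.0%}, cost: {cost_opt:.2f} bn EUR")
```

Even in this caricature, the optimizer deploys the more expensive hydrogen only up to the share forced by the emissions cap, which mirrors the abstract's observation that hydrogen enters the optimal mix once decarbonization targets bind.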
Procedia PDF Downloads 73
156 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the delivery link of the whole industrial chain and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous nature and spatial distribution of customer demand, which lead to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We account for the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). 
To solve this challenging problem, we propose a mixed-integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results, and we suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
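The destroy-and-repair core of a large-neighbourhood search can be sketched as follows. This toy version uses a single random-removal/greedy-insertion operator pair on a plain routing tour with invented coordinates; it omits the adaptive operator weighting, the facility-location decisions and the heterogeneous-demand constraints of the authors' hybrid algorithm:

```python
import math
import random

random.seed(42)

# Toy instance: a depot at the origin plus 15 customers at random
# coordinates (placeholder data, not a benchmark instance from the paper).
points = [(0, 0)] + [(random.uniform(-10, 10), random.uniform(-10, 10))
                     for _ in range(15)]

def route_cost(order):
    """Length of the depot -> customers -> depot tour."""
    tour = [0] + list(order) + [0]
    return sum(math.dist(points[a], points[b]) for a, b in zip(tour, tour[1:]))

def destroy_random(order, k=3):
    """Destroy operator: remove k random customers from the tour."""
    removed = random.sample(order, k)
    return [c for c in order if c not in removed], removed

def repair_greedy(partial, removed):
    """Repair operator: reinsert each removed customer at its cheapest slot."""
    for c in removed:
        best_pos = min(range(len(partial) + 1),
                       key=lambda i: route_cost(partial[:i] + [c] + partial[i:]))
        partial = partial[:best_pos] + [c] + partial[best_pos:]
    return partial

# LNS loop with greedy acceptance: destroy, repair, keep non-worsening moves.
current = list(range(1, len(points)))
best, best_cost = current[:], route_cost(current)
for _ in range(300):
    partial, removed = destroy_random(current)
    candidate = repair_greedy(partial, removed)
    if route_cost(candidate) <= route_cost(current):
        current = candidate
        if route_cost(current) < best_cost:
            best, best_cost = current[:], route_cost(current)
print(f"initial {route_cost(list(range(1, len(points)))):.1f} -> best {best_cost:.1f}")
```

The adaptive variant used in the paper would maintain several destroy/repair operators and reweight them by their recent success, typically combined with a simulated-annealing acceptance rule rather than the greedy acceptance shown here.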
Procedia PDF Downloads 78
155 Enhancing Emotional Regulation in Autistic Students with Intellectual Disabilities through Visual Dialogue: An Action Research Study
Authors: Tahmina Huq
Abstract:
This paper presents the findings of an action research study that investigated the efficacy of a visual dialogue strategy in assisting autistic students with intellectual disabilities in managing their immediate emotions and improving their academic achievement. The research explored the effectiveness of teaching self-regulation techniques as an alternative to traditional approaches involving segregation, and identified visual dialogue as a valuable tool for promoting self-regulation in this student population. Action research was chosen as the methodology due to its suitability for immediate implementation of the findings in the classroom. Autistic students with intellectual disabilities often face challenges in controlling their emotions, which can disrupt their learning and academic progress. Conventional methods of intervention, such as isolation and psychologist-assisted approaches, may result in missed classes and hindered academic development. This study introduces the use of visual dialogue between students and teachers as an effective self-regulation strategy, addressing the limitations of traditional approaches. The study observed two 15-year-old autistic students with intellectual disabilities who exhibited difficulties in emotional regulation and displayed aggressive behaviors. The research question focused on the effectiveness of visual dialogue in managing the emotions of these students and its impact on their learning outcomes. Data collection methods included personal observations, log sheets, personal reflections, and visual documentation. The study revealed that the implementation of visual dialogue as a self-regulation strategy enabled the students to regulate their emotions within a short timeframe (10 to 30 minutes). 
Through visual dialogue, they were able to express their feelings and needs in socially appropriate ways. This finding underscores the significance of visual dialogue as a tool for promoting emotional regulation and facilitating active participation in classroom activities. As a result, the students' learning outcomes and social interactions were positively impacted. These findings hold significant implications for educators working with autistic students with intellectual disabilities: the use of visual dialogue as a self-regulation strategy can enhance emotional regulation skills and improve overall academic progress, and the action research approach outlined in this paper provides practical guidance for implementing self-regulation strategies within classroom settings. In conclusion, the study demonstrates that visual dialogue is an effective strategy for enhancing emotional regulation in autistic students with intellectual disabilities. By employing visual communication, students can successfully regulate their emotions and actively engage in classroom activities, leading to improved learning outcomes and social interactions. This underscores the importance of implementing self-regulation strategies in educational settings to cater to the unique needs of autistic students.
Keywords: action research, self-regulation, autism, visual communication
Procedia PDF Downloads 62
154 Anxiety Treatment: Comparing Outcomes by Different Types of Providers
Authors: Melissa K. Hord, Stephen P. Whiteside
Abstract:
With lifetime prevalence rates ranging from 6% to 15%, anxiety disorders are among the most common childhood mental health diagnoses. Anxiety disorders diagnosed in childhood generally show an unremitting course, lead to additional psychopathology, and interfere with social, emotional, and academic development. Effective evidence-based treatments include cognitive-behavioral therapy (CBT) and selective serotonin reuptake inhibitors (SSRIs). However, if anxious children receive any treatment, it is usually through primary care, typically consists of medication, and very rarely includes evidence-based psychotherapy. Despite the high prevalence of anxiety disorders, only two independent research labs have investigated long-term outcomes of CBT for the full range of childhood anxiety disorders, and two for specific anxiety disorders. Generally, the studies indicate that the majority of youth maintain gains up to 7.4 years after treatment. These studies have not been replicated. In addition, little is known about the additional mental health care received by these patients in the intervening years after anxiety treatment, which seems likely to influence both the maintenance of gains for anxiety symptoms and the development of additional psychopathology in the subsequent years. The original sample consisted of 335 children aged 7 to 17 years (mean 13.09, 53% female) diagnosed with an anxiety disorder in 2010. Medical record review included provider billing records for mental health appointments during the five years after anxiety treatment. The subsample for this study was classified into three groups: 64 children who received CBT in an anxiety disorders clinic, 56 who received treatment from a psychiatrist, and 10 who were seen in a primary care setting. Chi-square analyses revealed significant differences in mental health care utilization across the five years after treatment. 
Youth receiving treatment in primary care averaged less than one appointment each year, and the appointments continued at the same rate across time. Children treated by a psychiatrist averaged approximately 3 appointments in the first two years and 2 in the subsequent three years. Importantly, youth treated in the anxiety clinic demonstrated a gradual decrease in mental health appointments across time. The nuanced differences will be presented in greater detail. The results of the current study have important implications for developing dissemination materials to help guide parents when selecting treatment for their children. By including all mental health appointments, this study recognizes that anxiety is often comorbid with additional diagnoses and that receiving evidence-based treatment may have long-term benefits associated with improvements in broader mental health. One important caveat is that mental health acuity may have influenced the level of care sought by patients included in this study; however, even taking this possibility into account, those seeking care in a primary care setting continued to require similar care at the end of the study, indicating that little improvement in symptoms was experienced.
Keywords: anxiety, children, mental health, outcomes
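A chi-square test of utilization differences across provider groups, of the kind reported above, can be sketched with NumPy alone. The contingency counts below are invented placeholders, not the study's data:

```python
import numpy as np

# Hypothetical 3x2 contingency table (counts are illustrative placeholders,
# not the study's data): rows = treatment setting, columns = whether the
# patient had any follow-up mental-health appointment in years 4-5.
observed = np.array([[40, 24],   # CBT anxiety clinic
                     [25, 31],   # psychiatrist
                     [3, 7]])    # primary care

# Pearson chi-square: expected counts under independence of setting and
# follow-up status, then the usual sum of squared deviations.
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()
chi2 = np.sum((observed - expected) ** 2 / expected)
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(f"chi-square = {chi2:.2f} on {dof} degrees of freedom")
```

In practice the statistic would be compared against the chi-square distribution with `dof` degrees of freedom (e.g. via `scipy.stats.chi2_contingency`, which also applies the same expected-count construction).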
Procedia PDF Downloads 267
153 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements
Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang
Abstract:
Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. To evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements; a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams; the notion of local plasticity is no longer valid; and the deformed shape of the beam model does not correspond to that obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerically hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. The approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic lattice structures with z-struts (BCCZ) and without z-struts (BCC). 
However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the results of the hybrid models is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This also holds for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures are parametrized on a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Given the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.
Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure
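The three-parameter ellipse description above can be illustrated with a covariance-based fit on a synthetic cross-section. The point cloud and the eigen-decomposition fit are our own illustrative stand-in, not necessarily the algorithm used in LATANA:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic microbeam cross-section (illustrative stand-in for a LATANA
# point cloud): an ellipse with known semi-axes and rotation, plus noise.
a_true, b_true, theta_true = 1.2, 0.8, np.deg2rad(30)
t = rng.uniform(0, 2 * np.pi, 2000)
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
pts = (R @ np.vstack([a_true * np.cos(t), b_true * np.sin(t)])).T
pts += rng.normal(scale=0.01, size=pts.shape)

# Recover the three parameters from the covariance of the boundary points:
# eigenvectors give the axis directions, eigenvalues their scale.
cov = np.cov((pts - pts.mean(axis=0)).T)
evals, evecs = np.linalg.eigh(cov)               # ascending eigenvalues
theta = np.arctan2(evecs[1, -1], evecs[0, -1])   # major-axis direction
# For points on an ellipse boundary, var along an axis = (semi-axis)^2 / 2.
a_est, b_est = np.sqrt(2 * evals[-1]), np.sqrt(2 * evals[0])
print(f"a~{a_est:.2f}, b~{b_est:.2f}, theta~{np.degrees(theta) % 180:.1f} deg")
```

Repeating such a fit along each microbeam yields the (semi-major, semi-minor, rotation) triplets that the geometrical hybrid approach then uses to rebuild the defect-bearing geometry.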
Procedia PDF Downloads 112
152 Diverted Use of Contraceptives in Madagascar
Authors: Josiane Yaguibou, Ngoy Kishimba, Issiaka V. Coulibaly, Sabrina Pestilli, Falinirina Razanalison, Hantanirina V. Andremanisa
Abstract:
Background: In Madagascar, the modern contraceptive prevalence rate increased from 18% in 2003 to 43% in 2021. Anecdotal evidence suggests that the increased use, and the frequent stockouts in public health facilities, of male condoms and medroxyprogesterone acetate (MPA) may be related to diverted use of these products. This study analyzed the use of contraceptives and their mode of utilization (correct or diverted) at the community level in Madagascar in the period 2019-2023. Methodology: The study included a literature review and a quantitative survey combined with a qualitative study. It was carried out in 10 of the country's 23 regions. Eight regions (Bongolava, Vakinankaratra, Itasy, Haute Matsiatra, Betsiboka, Diana, Sofia and Anosy) were selected based on a study that showed the use of MPA in pig rearing. The remaining 2 regions were selected for their high mCPR (Atsimo Andrefana) and to ensure coverage of all geographical zones in the country (Alaotra Mangoro). Random sampling was used, and the sample size was set at 300 individuals per region, with the zonal distribution based on each region's urbanization rate. Six focus group discussions were organized in 3 regions, equally distributed between rural and urban areas. Key findings: Overall, 67% of those surveyed, or their partners, were currently using contraception. Injectables (MPA) are the most popular choice (33%), followed by implants (12%) and male condoms (9%). The majority of respondents use condoms to prevent unwanted pregnancy and STDs. Still, 43% of respondents use condoms for other purposes, reaching 52% in urban areas and 71.2% in the 15-18 age group. Diverted use includes hair growth (18.9%), as a toy (18.8%), cleaning the screens of electronic devices (10%), cleaning shoes (3.1%) and skincare (1.6%). Injectables are the preferred method of contraception in both rural areas (35%) and urban areas (21.2%). 
However, diverted use of injectables was confirmed by 4% of the respondents, ranging from 3% in rural areas to 12% in urban areas. The diverted use of injectables in pig rearing was intended to prevent pregnancy and promote the pigs' growth. Program implications: The study confirmed the diverted use of some contraceptives. The misuse of male condoms is among the causes of stockouts in public health facilities, limiting their availability for pregnancy and STD prevention. The misuse of injectables in pig rearing needs to be further studied to learn its full extent and its eventual implications for meat consumption. The study highlights the importance of including messages on the correct use of products in sensitization activities. In particular, messages need to address anecdotal and false beliefs about male condoms, especially among young people. For the misuse of injectables, it is critical to sensitize farmers and veterinarians to the possible negative effects for humans.
Keywords: diverted use, injectables, male condoms, sensitization
Procedia PDF Downloads 62
151 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project
Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba
Abstract:
Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance. A third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled “Wood Be Better” was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2 was titled “Design-based research” and proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, who could offer the latest information on wood technologies; and at the same time to have the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future.
This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts to the course; the different themes of each course; the response to the urban scale, architectural form and construction detail; the effect of working with the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, innovative solutions that were developed during the courses, different architectural expressions that were enabled by timber, and how projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts. Keywords: architecture, interdisciplinary, research, studio, students, wood
Procedia PDF Downloads 311
150 Numerical Investigation of Combustion Chamber Geometry on Combustion Performance and Pollutant Emissions in an Ammonia-Diesel Common Rail Dual-Fuel Engine
Authors: Youcef Sehili, Khaled Loubar, Lyes Tarabet, Mahfoudh Cerdoun, Clement Lacroix
Abstract:
As emissions regulations grow more stringent and traditional fuel sources become increasingly scarce, incorporating carbon-free fuels in the transportation sector emerges as a key strategy for mitigating the impact of greenhouse gas emissions. While the utilization of hydrogen (H2) presents significant technological challenges, as evident in the engine limitation known as knocking, ammonia (NH3) provides a viable alternative that overcomes this obstacle and offers convenient transportation, storage, and distribution. Moreover, the implementation of a dual-fuel engine using ammonia as the primary gas is promising, delivering both ecological and economic benefits. However, when employing this combustion mode, the substitution of ammonia at high rates adversely affects combustion performance and leads to elevated emissions of unburnt NH3, especially under high loads, which requires special treatment of this mode of combustion. This study aims to simulate combustion in a common rail direct injection (CRDI) dual-fuel engine, considering the fundamental geometry of the combustion chamber as well as fifteen (15) alternative proposed geometries to determine the configuration that exhibits superior engine performance during high-load conditions. The research presented here focuses on improving the understanding of the equations and mechanisms involved in the combustion of finely atomized jets of liquid fuel and on mastering the CONVERGETM code, which facilitates the simulation of this combustion process. By analyzing the effect of piston bowl shape on the performance and emissions of a diesel engine operating in dual fuel mode, this work combines knowledge of combustion phenomena with proficiency in the calculation code. To select the optimal geometry, an evaluation of the Swirl, Tumble, and Squish flow patterns was conducted for the fifteen (15) studied geometries. 
Variations in in-cylinder pressure, heat release rate, turbulence kinetic energy, turbulence dissipation rate, and emission rates were observed, while thermal efficiency and specific fuel consumption were estimated as functions of crank angle. To maximize thermal efficiency, a synergistic approach involving the enrichment of intake air with oxygen (O2) and the enrichment of the primary fuel with hydrogen (H2) was implemented. Based on the results obtained, it is worth noting that the proposed geometry (T8_b8_d0.6/SW_8.0) outperformed the others in terms of flow quality, reduction of emitted pollutants (with a reduction of more than 90% in unburnt NH3), and an impressive improvement in engine efficiency of more than 11%. Keywords: ammonia, hydrogen, combustion, dual-fuel engine, emissions
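The efficiency estimates described above start from the indicated work obtained by integrating pressure over the cylinder-volume change across the cycle. The sketch below illustrates only that step, assuming a generic slider-crank geometry and a crude synthetic pressure trace; it is not the study's engine geometry or CONVERGE output:

```python
import math

# Hypothetical slider-crank geometry (illustrative values only)
BORE, STROKE, CONROD, CR = 0.085, 0.096, 0.145, 17.0

def cylinder_volume(theta_deg):
    """Cylinder volume (m^3) at crank angle theta in degrees (0 = TDC)."""
    a = STROKE / 2.0                               # crank radius
    v_disp = math.pi / 4.0 * BORE ** 2 * STROKE    # displaced volume
    v_clear = v_disp / (CR - 1.0)                  # clearance volume
    th = math.radians(theta_deg)
    # piston displacement from TDC for a slider-crank mechanism
    x = a + CONROD - (a * math.cos(th) + math.sqrt(CONROD ** 2 - (a * math.sin(th)) ** 2))
    return v_clear + math.pi / 4.0 * BORE ** 2 * x

def indicated_work(thetas, pressures):
    """Indicated work over the cycle, W = integral of p dV (trapezoidal rule)."""
    work = 0.0
    for i in range(1, len(thetas)):
        dv = cylinder_volume(thetas[i]) - cylinder_volume(thetas[i - 1])
        work += 0.5 * (pressures[i] + pressures[i - 1]) * dv
    return work

# Crude synthetic pressure trace: low pressure on compression,
# high pressure on expansion, just to exercise the integral.
thetas = list(range(-180, 181))
pressures = [2e5 if t < 0 else 20e5 for t in thetas]
work_j = indicated_work(thetas, pressures)
eta = work_j / (1e-4 * 42.5e6)  # hypothetical fuel mass (kg) and LHV (J/kg)
```

Indicated thermal efficiency then follows as the work divided by the fuel energy supplied per cycle, and specific fuel consumption analogously as fuel mass over work.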
Procedia PDF Downloads 74
149 The Sustained Utility of Japan's Human Security Policy
Authors: Maria Thaemar Tana
Abstract:
The paper examines the policy and practice of Japan’s human security. Specifically, it asks: how does Japan’s shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework. In addition, the paper also aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of the recent developments in Japan’s security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan has faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand, and actual foreign policy on the other.
State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders’ perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Thus, applied to this study, Japan’s sustained utilization of human security as a foreign policy instrument (dependent variable) is essentially a result of systemic pressures acting indirectly (independent variables) and domestic processes acting directly (intervening variables). Two cases of Japan’s human security practice in two regions are examined in two time periods: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan’s decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels will be examined to understand how the interaction among them results in changes and continuities in Japan’s human security policy. Keywords: human security, foreign policy, neoclassical realism, peace-building
Procedia PDF Downloads 133
148 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency. Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
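As a hedged illustration of the optimization stage described above, the sketch below runs an elitist genetic algorithm over four layer thicknesses against a toy figure of merit. The objective function and target thicknesses are made-up stand-ins; in the study, the objective would come from the random-forest surrogate scoring the spectral selectivity of the SiC/W/SiO2/W stack:

```python
import random

def figure_of_merit(thicknesses):
    # Toy stand-in objective: fitness peaks at a hypothetical set of
    # layer thicknesses (nm). A real objective would score spectral
    # selectivity predicted by the trained surrogate model.
    target = [60.0, 10.0, 90.0, 200.0]
    return -sum((t - g) ** 2 for t, g in zip(thicknesses, target))

def genetic_optimize(fom, bounds, pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    history = []
    for _ in range(generations + 1):
        pop.sort(key=fom, reverse=True)
        history.append(fom(pop[0]))          # best fitness so far
        elite = pop[: pop_size // 2]         # elitism: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, len(bounds))
            child = p1[:cut] + p2[cut:]      # one-point crossover
            i = rng.randrange(len(bounds))   # Gaussian mutation of one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.05 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return pop[0], history

best, history = genetic_optimize(figure_of_merit, [(5.0, 300.0)] * 4)
```

Because the elite is carried over unchanged each generation, the best fitness in `history` can never decrease, which is the property that makes the elitist variant a safe default for this kind of design search.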
Procedia PDF Downloads 61
147 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology
Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam
Abstract:
The identification of victims of human trafficking and the consequent service provision are characterized by a significant disconnection between the estimated prevalence of this issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it prevents data collection, information sharing, allocation of resources and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person’s life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The current paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant’s profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts to qualify as either a human trafficking case or not. The participants were placed in three conditions: business as usual, and utilization of the IHTC with and without training. The results revealed a statistically significant level of agreement between the experts’ diagnostic and the application of the IHTC, with an improvement of 40% in identification when compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference was found to have a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerabilities as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking.
In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC when compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and that it is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human services settings is discussed as a pivotal piece of helping victims restore their sense of dignity and advocate for legal, physical and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for enhancement of victim well-being, restoration engagement and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism. Keywords: exploitation, human trafficking, measurement, vulnerability, screening
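The agreement analysis reported above compares expert diagnostics with IHTC-based ratings. A standard way to quantify such rater agreement, corrected for chance, is Cohen's kappa; the sketch below is an illustrative choice (not necessarily the statistic the study used), and the classifications are made-up:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # expected agreement if both raters classified independently at random
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical expert vs. IHTC-user classifications (1 = trafficking case)
experts = [1, 1, 0, 0, 1, 0, 1, 0]
users   = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(experts, users)
# kappa = 0.75 here: 7/8 observed agreement against 0.5 expected by chance
```

Kappa of 1.0 indicates perfect agreement and 0.0 indicates agreement no better than chance, which makes it a more conservative summary than raw percent agreement.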
Procedia PDF Downloads 330
146 Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a shared file storage system), and security groups offers several key benefits for organizations. Creating a performance testing framework using this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application. Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
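The consolidation step the master performs can be sketched as below. This is a minimal illustration with hypothetical field names and figures; a real JMeter master aggregates the slaves' .jtl sample logs rather than pre-summarized dictionaries:

```python
def consolidate(slave_reports):
    """Merge per-slave result summaries into one master report."""
    total_requests = sum(r["requests"] for r in slave_reports)
    total_errors = sum(r["errors"] for r in slave_reports)
    total_latency = sum(r["total_latency_ms"] for r in slave_reports)
    # slaves run in parallel, so wall time is the longest slave run
    wall_time = max(r["duration_s"] for r in slave_reports)
    return {
        "requests": total_requests,
        "error_rate": total_errors / total_requests,
        "avg_latency_ms": total_latency / total_requests,
        "throughput_rps": total_requests / wall_time,
    }

# Two hypothetical slaves, each having sent 1,000 requests in 100 s
report = consolidate([
    {"requests": 1000, "errors": 10, "total_latency_ms": 200_000, "duration_s": 100},
    {"requests": 1000, "errors": 10, "total_latency_ms": 200_000, "duration_s": 100},
])
# report["throughput_rps"] = 20.0 and report["error_rate"] = 0.01
```

Taking the maximum (not the sum) of the slave durations is what makes the consolidated throughput reflect the parallel execution of the distributed test.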
Procedia PDF Downloads 27
145 Two-Dimensional Dynamic Motion Simulations of an F1 Rear Wing-Flap
Authors: Chaitanya H. Acharya, Pavan Kumar P., Gopalakrishna Narayana
Abstract:
In the realm of aerodynamics, numerous vehicles incorporate moving components to enhance their performance. For instance, airliners deploy hydraulically operated flaps and ailerons during take-off and landing, while Formula 1 racing cars utilize hydraulic tubes and actuators for various components, including the Drag Reduction System (DRS). The DRS, consisting of a rear wing and adjustable flaps, plays a crucial role in overtaking manoeuvres. The DRS has two positions: the default position with the flaps down, providing high downforce, and the lifted position, which reduces drag, allowing for increased speed and aiding in overtaking. Swift deployment of the DRS during races is essential for overtaking competitors. The fluid flow over the rear wing flap becomes intricate during deployment, involving flow reversal and operational changes, leading to unsteady flow physics that significantly influence aerodynamic characteristics. Understanding the drag and downforce during DRS deployment is crucial for determining race outcomes. While experiments can yield accurate aerodynamic data, they can be expensive and challenging to conduct across varying speeds. Computational Fluid Dynamics (CFD) emerges as a cost-effective solution to predict drag and downforce across a range of speeds, especially with the rapid deployment of the DRS. This study employs the finite volume-based solver Ansys Fluent, incorporating dynamic mesh motion and a turbulence model to capture the complex flow phenomena associated with the moving rear wing flap. A dedicated section of the rear wing-flap is considered in the present simulations, and the aerodynamics of these sections closely resembles that of S1223 aerofoils. Before the simulations of the rear wing-flap aerofoil, the numerical results were validated using experimental data from an NLR flap aerofoil case encompassing different flap angles at two distinct angles of attack.
An increase in flap angle increases both lift and drag for a given angle of attack. The simulation methodology for the rear wing-flap aerofoil case involves a specific time duration before lifting the flap. During this period, drag and downforce are determined as 330 N and 1800 N, respectively. Following the flap lift, drag falls to 55% of its flap-down value and downforce to 17%. This understanding is critical for making instantaneous decisions regarding the deployment of the Drag Reduction System (DRS) at specific speeds, thereby influencing the overall performance of the Formula 1 racing car. Hence, this work emphasizes the utilization of a dynamic mesh motion methodology to predict the aerodynamic characteristics during the deployment of the DRS in a Formula 1 racing car. Keywords: DRS, CFD, drag, downforce, dynamic mesh motion
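Reading the reported 55% and 17% figures as the fraction of each flap-down force retained after the flap lifts (one plausible interpretation of the abstract's wording), the post-deployment forces follow by simple arithmetic:

```python
def forces_after_drs(drag_n, downforce_n, drag_retained=0.55, down_retained=0.17):
    """Rear-wing forces after the flap lifts, given the fraction of each
    flap-down force retained (assumed reading of the reported figures)."""
    return drag_n * drag_retained, downforce_n * down_retained

# The flap-down values reported in the abstract: 330 N drag, 1800 N downforce
drag, downforce = forces_after_drs(330.0, 1800.0)
# drag ≈ 181.5 N, downforce ≈ 306.0 N under this reading
```

Under this reading, the open-flap configuration sheds roughly 148 N of drag at the cost of about 1494 N of downforce, which is consistent with the DRS trade-off of straight-line speed against cornering grip.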
Procedia PDF Downloads 94
144 Changes of Chemical Composition and Physicochemical Properties of Banana during Ethylene-Induced Ripening
Authors: Chiun-C.R. Wang, Po-Wen Yen, Chien-Chun Huang
Abstract:
Bananas are produced in large quantities in tropical and subtropical areas. The banana is an important fruit which constitutes a valuable source of energy, vitamins and minerals. The ripening and maturity standards of banana vary from country to country depending on the expected shelf life in the market. The composition of bananas changes dramatically during ethylene-induced ripening, with consequences for both nutritive value and commercial utilization. Nevertheless, there are few studies reporting the changes in physicochemical properties of banana starch during ethylene-induced ripening of green banana. The objectives of this study were to investigate the changes in chemical composition and enzyme activity of banana and the physicochemical properties of banana starch during ethylene-induced ripening. Green bananas were harvested and ripened with ethylene gas at low temperature (15℃) through seven stages. At each stage, bananas were sliced and freeze-dried for banana flour preparation. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and activity of amylase, polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL) of banana were analyzed at each stage during ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties and microscopic appearance at each stage of ripening. The results indicated that the total starch and resistant starch contents of green banana were highest at the harvest stage, at 76.2% and 34.6%, respectively. Both total starch and resistant starch content declined significantly to 25.3% and 8.8%, respectively, at the seventh stage. Soluble sugar content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage during ethylene-induced ripening. The swelling power of banana flour decreased with the progress of ripening stage, but solubility increased.
These results are strongly related to the decrease in starch content of banana flour during ethylene-induced ripening. Both water-insoluble and alcohol-insoluble solids of banana flour decreased with the progress of ripening stage. Both PPO and PAL activity increased, but the total free phenolics content decreased, with increasing ripening stage. As the ripening stage advanced, the gelatinization enthalpy of banana starch significantly decreased from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. The peak viscosity and setback of banana starch increased with the progress of ripening stages in the pasting properties. The highest final viscosity of banana starch slurry, 5701 RVU, was found at the seventh stage. Scanning electron micrographs showed banana starch granules of round and elongated forms, ranging from 10-50 μm at the harvest stage. As the banana approached ripeness, some parallel striations were observed on the surface of the banana starch granules, which could be caused by enzyme reactions during ripening. These results infer that the high resistant starch content found in green banana at the harvest stage could be considered for potential application in healthy foods. The changes in chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment. Keywords: ethylene-induced ripening, banana starch, resistant starch, soluble sugars, physicochemical properties, gelatinization enthalpy, pasting characteristics, microscopic appearance
Procedia PDF Downloads 474
143 Influence of Dietary Boron on Gut Absorption of Nutrients, Blood Metabolites and Tissue Pathology
Authors: T. Vijay Bhasker, N. K. S Gowda, P. Krishnamoorthy, D. T. Pal, A. K. Pattanaik, A. K. Verma
Abstract:
Boron (B) is a newer trace element, and its biological importance and dietary essentiality are unclear in animals. The available literature suggests a putative role in bone mineralization, antioxidant status and steroid hormone synthesis. A feeding trial was conducted in Wistar strain (Rattus norvegicus) albino rats for a duration of 90 days. A total of 84 healthy weaned (3-4 weeks) experimental rats were randomly divided into 7 dietary groups (4 replicates of three each), viz., A (basal diet/control), B (basal diet + 5 ppm B), C (basal diet + 10 ppm B), D (basal diet + 20 ppm B), E (basal diet + 40 ppm B), F (basal diet-Ca 50%), G (basal diet-Ca 50% + 40 ppm B). The dietary level of calcium (Ca) was maintained at two levels, 100% and 50% of requirement. Sodium borate was used as the source of boron along with other ingredients of the basal diet while preparing the pelletized diets. All the rats were kept in a properly ventilated laboratory animal house maintained at controlled temperature (23±2 °C) and humidity (50 to 70%). At the end of the experiment, a digestibility trial was conducted for 5 days to estimate nutrient digestibility and gut absorption of minerals. Eight rats from each group were sacrificed to collect the vital organs (liver, kidney and spleen) for histopathology. Blood samples were drawn by heart puncture to determine the biochemical profile. The average daily feed intake (g/rat/day), water intake (ml/rat/day) and body weight gain (g/rat/day) were similar among the dietary groups. The digestibility (%) of organic matter and crude fat was significantly improved (P < 0.05) by B supplementation. The gut absorption (%) of Ca was significantly increased (P < 0.01) in B-supplemented groups compared to control. However, the digestibility of dry matter and crude protein and the gut absorption of magnesium and phosphorus showed a non-significant increasing trend with B supplementation.
The gut absorption (%) of B was significantly lower (P < 0.01) in supplemented groups compared to un-supplemented ones. The serum levels of triglycerides (mg/dL), HDL-cholesterol (mg/dL) and alanine transaminase (IU/L) were significantly lowered (P < 0.05) in B-supplemented groups, while the serum levels of glucose (mg/dL) and alkaline phosphatase (KA units) showed a non-significant decreasing trend with B supplementation. However, the serum levels of total cholesterol (mg/dL) and aspartate transaminase (IU/L) were similar among dietary groups. The histological sections of kidney and spleen revealed no significant changes among the dietary groups and were observed to be normal in anatomical architecture. However, the liver histology revealed cell degenerative changes with vacuolar degeneration and nuclear condensation in the Ca-deficient groups, but these degenerative changes were mild in the 40 ppm B-supplemented Ca-deficient group. In conclusion, dietary supplementation of graded levels of boron in rats had a positive effect on metabolism and health by improving nutrient digestibility and gut absorption of Ca. This indicates the beneficial role of dietary boron supplementation. Keywords: boron, calcium, nutrient utilization, histopathology
Procedia PDF Downloads 318
142 Sustainability in Space: Material Efficiency in Space Missions
Authors: Hamda M. Al-Ali
Abstract:
From addressing fundamental questions about the history of the solar system to searching other planets for any signs of life, exploration has always been at the core of human spaceflight. This has driven humans to investigate whether other planets, such as Mars, could support human life. Therefore, many space missions to other planets have been designed and conducted to examine the feasibility of human survival on them. However, space missions are expensive and consume large amounts of various resources. To overcome these problems, material efficiency should be maximized through the use of reusable launch vehicles (RLVs) rather than disposable and expendable ones. Material efficiency is defined as a way to achieve service requirements using fewer materials in order to reduce CO2 emissions from industrial processes. Materials such as aluminum-lithium alloys, steel, Kevlar, and reinforced carbon-carbon composites used in the manufacturing of spacecraft could be reused in closed-loop cycles, either directly or after adding a protective coat. Material efficiency is a fundamental principle of a circular economy, which aims to cut back waste and reduce pollution by maximizing material efficiency so that businesses can succeed and endure. Five strategies are proposed to improve material efficiency in the space industry, including waste minimization, the introduction of Key Performance Indicators (KPIs) to measure material efficiency, and the introduction of policies and legislation to improve material efficiency in the space sector. Another strategy to boost material efficiency is to maximize resource and energy efficiency through material reusability. Furthermore, the environmental effects associated with the rapid growth in the number of space missions include black carbon emissions that contribute to climate change. These emission levels must be tracked and tackled to ensure the safe utilization of space in the future.
The aim of this research paper is to examine and suggest effective methods for improving material efficiency in space missions so that space and Earth become more environmentally and economically sustainable. To fulfil this aim, the objectives are to identify the materials used in space missions that are suitable for reuse in closed-loop cycles, considering material efficiency indicators and circular economy concepts. An explanation of how spacecraft materials could be re-used is given, and strategies to maximize material efficiency are proposed in order to make RLVs possible, so that access to space becomes affordable and reliable. Also, the economic viability of RLVs is examined to show the extent to which their use reduces space mission costs. The environmental and economic implications of the increase in the number of space missions resulting from the use of RLVs are also discussed. These research questions are studied through a detailed critical analysis of the literature, including published reports, books, scientific articles, and journals. A combination of keywords such as material efficiency, circular economy, RLVs, and spacecraft materials was used to search for appropriate literature. Keywords: access to space, circular economy, material efficiency, reusable launch vehicles, spacecraft materials
Procedia PDF Downloads 113