Search results for: voltage output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3002

212 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling

Authors: Moustafa Osman Mohammed

Abstract:

The modeling of a spatial-economic database is crucial for relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses that comply with Earth's systems. The natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in systems ecology. When network topology influences formal and informal communication to function in systems ecology, ecosystems are postulated to valence the basic level of spatial sustainable outcome (i.e., project compatibility success). These instrumentalities impact various aspects of the second level of spatial sustainable outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model that calculates the prosperity of panel databases for efficiency value from 2005 to 2025. The database models the spatial structure to represent state-of-the-art value-orientation impact and the corresponding complexity of sustainability issues (e.g., build a consistent database necessary to approach spatial structure; construct the spatial-economic-ecological model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic and environmental impact; use the value orientation as a set of important sustainability policy measures), and to demonstrate spatial structure reliability. The structure of the spatial-ecological model is established for management schemes from the perspective of pollutants from multiple sources through input–output criteria. These criteria evaluate the spillover effect to conduct Monte Carlo simulations and sensitivity analysis in a unique spatial structure. The balance within “equilibrium patterns,” such as collective biosphere features, has a composite index of many distributed feedback flows. These have a dynamic structure related to physical and chemical properties that gradually extend to incremental patterns. While these spatial structures argue from ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology.

Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology

Procedia PDF Downloads 34
211 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops

Authors: Eric Bates

Abstract:

Higher education is recognised as being of critical importance in Ireland and has been linked as a vital factor to national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is a Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, some individuals can feel isolated as they take the first step into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first year undergraduates on a programme which has a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly evolve a group sense of coherence. Two separate modules – a Research Module and a Communications Module – delivered by the researcher were linked across two semesters. Students were allocated into random groups, and each group was given a topic to be researched. There were six topics – essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab, and students also used the library. The output from this was a document that formed part of the submission for the Research Module. In the second semester the groups then had to make a presentation of their findings, in which each student spoke for a minimum amount of time. Presentation workshops formed part of that module, and students were given the opportunity to practice their presentation skills. These presentations were video recorded to enable feedback to be given. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.

Keywords: first year experience, student retention, group work, presentation workshops

Procedia PDF Downloads 214
210 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture

Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou

Abstract:

Taking the whole life cycle of a product into consideration is now an important step in the eco-design of a product or a technology. Life cycle assessment (LCA) is a standard tool to evaluate the environmental impacts of a system or a process. Despite the improvement in refrigerant regulation through protocols, the environmental damage of refrigeration systems remains significant and needs to be reduced. In this paper, the environmental impacts of refrigeration systems in a typical supermarket are compared using the LCA methodology under different conditions. The system is used to provide cold at two temperature levels, medium and low, over a life period of 15 years. The most commonly used architectures of supermarket cold production systems are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. The variation of the power needed during seasonal changes and during the daily opening/closure periods of the supermarket is considered. R134a as the primary refrigerant and two types of secondary fluids are considered. The composition of each system and the leakage rate of the refrigerant through its life cycle are taken from the literature and industrial data. Twelve scenarios are examined. They are based on the variation of three parameters: 1. location: France (Paris), Spain (Toledo) and Sweden (Stockholm); 2. source of electric consumption: photovoltaic panels or the low-voltage electric network; and 3. architecture: direct or indirect refrigeration systems. OpenLCA and SimaPro software and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electric consumption to environmental damage compared to the impacts of refrigerant leakage. The secondary loop allows lowering the refrigerant amount in the primary loop, which results in a decrease in the climate change indicators compared to the centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries has been noticed, mostly due to the difference in electricity production. In Spain, using photovoltaic panels helps to efficiently reduce the environmental impacts and the related costs; this scenario is the best alternative compared to the other scenarios. Sweden is a country with lower environmental impacts. For both France and Sweden, the use of photovoltaic panels does not bring a significant difference, due to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigerating systems, and a brief introduction is presented.
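
As a small illustration of the scenario matrix described above, the Python sketch below simply enumerates the twelve combinations of location, electricity source and architecture; the labels are taken from the abstract, but no impact data are attached.

```python
from itertools import product

# Sketch of the scenario matrix: 3 locations x 2 electricity sources
# x 2 architectures = 12 LCA scenarios (labels only, no impact data).
locations = ["France (Paris)", "Spain (Toledo)", "Sweden (Stockholm)"]
electricity = ["photovoltaic panels", "low-voltage grid"]
architectures = ["direct expansion", "indirect (secondary loop)"]

scenarios = list(product(locations, electricity, architectures))
for i, (loc, elec, arch) in enumerate(scenarios, start=1):
    print(f"S{i:02d}: {loc} | {elec} | {arch}")
print(len(scenarios), "scenarios")   # 12
```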

Keywords: eco-design, industrial engineering, LCA, refrigeration system

Procedia PDF Downloads 151
209 Contributions of Natural and Human Activities to Urban Surface Runoff with Different Hydrological Scenarios (Orléans, France)

Authors: Al-Juhaishi Mohammed, Mikael Motelica-Heino, Fabrice Muller, Audrey Guirimand-Dufour, Christian Défarge

Abstract:

This study aims at improving the understanding of the urban hydrological cycle of the Orléans agglomeration (France) and of the relationship between the physical and chemical parameters of urban surface runoff and the hydrological conditions. In particular, water quality parameters such as pH, conductivity, total dissolved solids, major dissolved cations and anions, and chemical and biological oxygen demands were monitored for three types of urban water discharge (wastewater treatment plant (WWTP) output, storm overflow and stormwater outfall) under two hydrological scenarios (dry and wet weather). The first results were obtained over a period of five months. Each investigated outfall (Ormes and L'Egouttier) represents an urban runoff source that receives water from road runoff, gutters, the irrigation of gardens and other sources of flow over the Earth's surface that drain into its catchment and carry it to the Loire River. In wet weather conditions there is rainwater runoff and an additional input from the roof gutters that enters the stormwater system during rainfall. For comparison, La Chilesse, a storm overflow located upstream of the WWTP, was selected in our study as a potential source of wastewater. The comparison of the physical-chemical parameters (total dissolved solids, turbidity, pH, conductivity, dissolved organic carbon (DOC), concentrations of major cations and anions) together with the chemical oxygen demand (COD) and biological oxygen demand (BOD) helped to characterize the sources of runoff waters in the different watersheds. It also helped to highlight the infiltration of wastewater into some stormwater systems that discharge directly into the Loire River. The values of the conductivity measured in the outflow of Ormes were always higher than those measured in the other two outlets. The results showed a temporal variation of conductivity for the Ormes outfall, from 1465 µS cm-1 in the dry weather flow to 650 µS cm-1 in the wet weather flow, and also a spatial variation in the wet weather flow, from 650 µS cm-1 in the Ormes outfall to 281 µS cm-1 in the L'Egouttier outfall. The ultimate BOD (BOD28) showed a significant decrease in the La Corne outfall, from 210 mg L-1 in the wet weather flow to 75 mg L-1 in the dry weather flow, because of the nutrient load transported by the runoff.

Keywords: BOD, COD, the Loire River, urban hydrology, urban dry and wet weather discharges, macronutrients

Procedia PDF Downloads 246
208 Design, Numerical Simulation, Fabrication and Physical Experimentation of the Tesla’s Cohesion Type Bladeless Turbine

Authors: M. Sivaramakrishnaiah, D. S. Nasan, P. V. Subhanjeneyulu, J. A. Sandeep Kumar, N. Sreenivasulu, B. V. Amarnath Reddy, B. Veeralingam

Abstract:

Design, numerical simulation, fabrication, and physical experimentation of Tesla's bladeless centripetal turbine for generating electrical power are presented in this research paper. Pressurized air combined with water via a nozzle system is made to pass tangentially through a set of parallel smooth disc surfaces, which impart rotational motion to the discs fastened to a common shaft for power generation. The power generated depends upon the speed of the fluid leaving the nozzle. Physically, due to the laminar boundary layer at the smooth disc surface, the high-speed fluid layers away from the disc, moving against the low-speed fluid layers nearer to the disc, develop a tangential drag from the viscous shear forces. This compels the nearer layers to be dragged along with the faster layers, causing the disc to spin. SolidWorks design software and fluid mechanics and machine elements design theories were used to compute the mechanical design specifications of the turbine parts, such as the 48 mm diameter discs, common shaft, central exhaust, plenum chamber, swappable nozzle inlets, etc. ANSYS CFX 2018 was used for the numerical simulation of the physical phenomena encountered in the turbine operation. When the numerical simulation and physical experimental results were compared, good agreement was found between them, both quantitatively and qualitatively. The sources of input and the size of the discs may affect the power generated and the turbine efficiency, respectively. The results may change if there is a change in the fluid flowing between the discs. Studies of inlet fluid pressure versus turbine efficiency and of the number of discs versus turbine power, based on both sets of results, were carried out to develop the relationships between the inlet and outlet parameters of the turbine. The present work obtained turbine efficiencies in the range of 7-10%, and for this range the electrical power output generated was 50-60 W.
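
A back-of-the-envelope sketch can connect the reported figures: with an assumed jet speed, mass flow rate, disc speed and shaft torque (none of these operating values are given in the abstract), the shaft power and efficiency land in the reported 50-60 W and 7-10% ranges.

```python
import math

# Illustrative only: assumed operating values, not data from the paper.
m_dot  = 0.85      # mass flow rate through the nozzle [kg/s], assumed
v_jet  = 40.0      # jet speed leaving the nozzle [m/s], assumed
rpm    = 9000.0    # disc-pack rotational speed [rev/min], assumed
torque = 0.058     # shaft torque from viscous drag on the discs [N*m], assumed

P_fluid = 0.5 * m_dot * v_jet**2          # kinetic power carried by the jet [W]
omega   = 2.0 * math.pi * rpm / 60.0      # angular speed [rad/s]
P_shaft = torque * omega                  # mechanical power on the shaft [W]
eta     = P_shaft / P_fluid               # turbine efficiency

print(f"jet power {P_fluid:.0f} W, shaft power {P_shaft:.0f} W, efficiency {eta:.1%}")
# With these assumed values: ~680 W, ~55 W, ~8%, consistent with the ranges above.
```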

Keywords: tesla turbine, cohesion type bladeless turbine, boundary layer theory, tangential fluid flow, viscous and adhesive forces, plenum chamber, pico hydro systems

Procedia PDF Downloads 63
207 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. In order to overcome the copyright hurdle against data sharing, access and re-use, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax, and no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material is difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data. The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance of general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 60
206 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2e)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One

Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva

Abstract:

Nonlinear optical (NLO) materials are key materials for the fast processing of information and for optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention from both experimental and computational points of view. Chalcones are one of the most important classes of cross-conjugated NLO chromophores that are reported to exhibit good SHG efficiency and ultrafast optical nonlinearities and are easily crystallizable. The basic structure of chalcones is based on a π-conjugated system in which two aromatic rings are connected by a three-carbon α,β-unsaturated carbonyl system. Due to the overlap of π orbitals, delocalization of the electronic charge distribution leads to a high mobility of the electron density. On a molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output. Hence, the functionalization of both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both the ground and excited states, leading to increased optical nonlinearity. In this research, an experimental and theoretical study of the structure and vibrations of (2E)-3-[4-(methylsulfanyl) phenyl]-1-(3-bromophenyl) prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance and Raman scattering activities. The interpretation of vibrational features (normal mode assignments, for instance) receives invaluable aid from DFT calculations, which provide a quantum-mechanical description of the electronic energies and forces involved. Perturbation theory allows one to obtain the vibrational normal modes by estimating the derivatives of the Kohn–Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a chief role in the NLO properties, and a systematic study of β has been carried out. Furthermore, the first-order hyperpolarizability (β) and the related properties, such as the dipole moment (μ) and polarizability (α), of the title molecule are evaluated by the finite field (FF) approach. The electronic α and β of the studied molecule are 41.907×10⁻²⁴ and 79.035×10⁻²⁴ e.s.u., respectively, indicating that 3Br4MSP can be used as a good nonlinear optical material.
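
For reference, the finite field (FF) approach mentioned above extracts μ, α and β from the field dependence of the total energy; the standard textbook expressions (not quoted from the paper) are

E(\mathbf{F}) = E(0) - \sum_i \mu_i F_i - \tfrac{1}{2}\sum_{ij} \alpha_{ij} F_i F_j - \tfrac{1}{6}\sum_{ijk} \beta_{ijk} F_i F_j F_k - \cdots

so that, for example, the diagonal components follow from central differences of energies computed at a few finite field strengths:

\mu_i \approx \frac{E(-F_i) - E(F_i)}{2F_i}, \qquad \alpha_{ii} \approx \frac{2E(0) - E(F_i) - E(-F_i)}{F_i^{2}}, \qquad \beta_{iii} \approx \frac{E(-2F_i) - E(2F_i) + 2\left[E(F_i) - E(-F_i)\right]}{2F_i^{3}}.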

Keywords: DFT, MEP, NLO, vibrational spectra

Procedia PDF Downloads 199
205 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, beyond the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for answering these doubts and are becoming more and more important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for the support of decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on life cycle results is usually not considered or is studied only superficially. The effect of parameter uncertainties cannot be neglected: based on the example of an exterior wall, the overall life cycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow a first rough view of the results but do not take effects such as error propagation into account. Thereby, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole life cycle are illustrated, and the effects of different uncertainty analyses on the interpretation in terms of resilience and robustness are shown. Hereby, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
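
To make the contrast between single-point best/worst-case estimates and a probabilistic treatment concrete, the sketch below propagates assumed parameter distributions through a deliberately simplified wall-insulation impact model using Monte Carlo simulation; parameter names, distributions and the model itself are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Uncertain input parameters (assumed lognormal/normal spreads, illustrative values)
insulation_mass = rng.lognormal(mean=np.log(12.0), sigma=0.2, size=N)   # kg per m^2 of wall
impact_per_kg   = rng.lognormal(mean=np.log(3.5),  sigma=0.3, size=N)   # kg CO2-eq per kg
service_life    = rng.normal(40.0, 5.0, size=N)                          # years
heating_savings = rng.normal(2.0, 0.4, size=N)                           # kg CO2-eq per m^2 per year

# Net life-cycle impact per m^2 of wall: embodied impact minus avoided heating impact
net_impact = insulation_mass * impact_per_kg - service_life * heating_savings

print(f"mean {net_impact.mean():.1f}, 5th-95th percentile "
      f"[{np.percentile(net_impact, 5):.1f}, {np.percentile(net_impact, 95):.1f}] kg CO2-eq/m^2")
# A best/worst-case calculation with the parameter extremes would give a much wider,
# unweighted interval, which is why the distribution-based view supports decisions better.
```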

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 265
204 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear in the indentation. The mechanical impulse excitation test estimates the Young’s modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (best roughness with cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
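
A minimal sketch of the Taguchi-style evaluation described above is given below; the L9 design, factor names and roughness values are invented placeholders, and only the "smaller is better" signal-to-noise calculation mirrors the procedure.

```python
import numpy as np

# Factors: A = feed rate, B = tool rotation speed, C = depth of cut (3 levels each).
L9 = np.array([  # columns: level index of A, B, C for each of the 9 runs (standard L9 fragment)
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
Ra = np.array([0.82, 0.74, 0.69, 0.91, 0.78, 0.85, 1.02, 0.88, 0.80])  # um, placeholder data

# "Smaller is better" signal-to-noise ratio for each run
sn = -10 * np.log10(Ra**2)

# Mean S/N per factor level; the level with the highest mean S/N is preferred
for f, name in enumerate(["feed rate", "rotation speed", "depth of cut"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, [f"{m:.2f}" for m in means])
```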

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 129
203 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance, because high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
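
To make the "optimizations are simply implemented as metaprograms" idea concrete, the sketch below expresses a classic map-fusion optimization as a rewrite rule applied through unification. It is plain Python rather than Peridot syntax, and all names are hypothetical.

```python
# Terms are nested tuples; pattern variables are strings starting with '?'.
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(pattern, term, subst):
    """Return an extended substitution if pattern matches term, else None."""
    if subst is None:
        return None
    if is_var(pattern):
        bound = subst.get(pattern)
        if bound is None:
            return {**subst, pattern: term}
        return subst if bound == term else None
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(template, subst):
    if is_var(template):
        return subst[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, subst) for t in template)
    return template

def rewrite(term, rules):
    """Apply rules bottom-up until a fixed point (naive, no phase ordering)."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    for lhs, rhs in rules:
        s = unify(lhs, term, {})
        if s is not None:
            return rewrite(substitute(rhs, s), rules)
    return term

# A classic fusion optimization: map f (map g xs)  ==>  map (compose f g) xs
rules = [
    (("map", "?f", ("map", "?g", "?xs")),
     ("map", ("compose", "?f", "?g"), "?xs")),
]

print(rewrite(("map", "inc", ("map", "double", "xs")), rules))
# ('map', ('compose', 'inc', 'double'), 'xs')
```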

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 62
202 The Effect of Acute Consumption of a Nutritional Supplement Derived from Vegetable Extracts Rich in Nitrate on Athletic Performance

Authors: Giannis Arnaoutis, Dimitra Efthymiopoulou, Maria-Foivi Nikolopoulou, Yannis Manios

Abstract:

AIM: Nitrate-containing supplements have been used extensively as ergogenic aids in many sports. However, extract fractions from plant-based nutritional sources high in nitrate and their effect on athletic performance have not been systematically investigated. The purpose of the present study was to examine the possible effect of acute consumption of a “smart mixture” of beetroot and rocket on exercise capacity. MATERIAL & METHODS: 12 healthy, non-smoking, recreationally active males (age: 25±4 years, % fat: 15.5±5.7, fat-free mass: 65.8±5.6 kg, VO2max: 45.4±6.1 mL·kg⁻¹·min⁻¹) participated in a double-blind, placebo-controlled trial, in a randomized and counterbalanced order. Eligibility criteria for participation in this study included a normal physical examination and the absence of any metabolic, cardiovascular, or renal disease. All participants completed a time-to-exhaustion cycling test at 75% of their maximum power output, twice. The subjects consumed either capsules containing 360 mg of nitrate in total or placebo capsules, in the morning, in a fasted state. After 3 h of passive recovery, the performance test followed. Blood samples were collected upon arrival of the participants and 3 hours after the consumption of the corresponding capsules. Time until exhaustion, pre- and post-test lactate concentrations, and the rating of perceived exertion at the same time points were assessed. RESULTS: Paired-sample t-test analysis found a significant difference in time to exhaustion between the nitrate trial and the placebo trial (16.1±3.0 vs. 13.5±2.6 min, p=0.04). No significant differences were observed for the concentrations of lactic acid or for the values on the Borg scale between the two trials (p>0.05). CONCLUSIONS: Based on the results of the present study, it appears that a nutritional supplement derived from vegetable extracts rich in nitrate improves athletic performance in recreationally active young males. However, the precise mechanism is not clear and future studies are needed. Acknowledgment: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-00843).
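
The paired comparison reported in the results can be reproduced procedurally as in the sketch below; the twelve time-to-exhaustion values are invented placeholders, and only the test itself mirrors the analysis.

```python
from scipy import stats

# Placeholder data for 12 participants (minutes); not the study's measurements.
nitrate = [16.5, 14.2, 18.9, 12.8, 17.4, 15.1, 19.6, 13.7, 16.0, 14.9, 18.2, 15.8]
placebo = [13.8, 12.5, 15.9, 11.2, 14.6, 12.9, 16.8, 11.9, 13.5, 12.7, 15.3, 13.4]

t, p = stats.ttest_rel(nitrate, placebo)   # paired-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}")         # p < 0.05 would indicate a significant difference
```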

Keywords: sports performance, ergogenic supplements, nitrate, extract fractions

Procedia PDF Downloads 45
201 Numerical Reproduction of Hemodynamic Change Induced by Acupuncture to ST-36

Authors: Takuya Suzuki, Atsushi Shirai, Takashi Seki

Abstract:

Acupuncture therapy is one of the treatments in traditional Chinese medicine. Recently, some reports have shown the effectiveness of acupuncture. However, its full acceptance has been hindered by the lack of understanding of the mechanism of the therapy. Acupuncture applied to Zusanli (ST-36) enhances blood flow volume in the superior mesenteric artery (SMA), yielding blood flow in the SMA regulated by the peripheral vascular resistance, dominated by the parasympathetic system with inhibition of the sympathetic system. In this study, a lumped-parameter approximation model of blood flow in the systemic arteries was developed. This model is extremely simple, consisting of the aorta, carotid arteries, arteries of the four limbs and the SMA, and their peripheral vascular resistances. Here, each artery was simplified to a tapered tube, and the peripheral resistances were modelled as linear resistances. We numerically investigated the contribution of the peripheral vascular resistance of the SMA to the systemic blood distribution using this model. At the upstream end of the model, which corresponds to the left ventricle, two types of boundary condition were applied: mean left ventricular pressure, which correlates with blood pressure (BP), and mean cardiac output, which corresponds to the cardiac index (CI). We examined whether the model could reproduce the experimentally obtained hemodynamic change, in terms of the ratio of the aforementioned hemodynamic parameters to their initial values before the acupuncture, by regulating the peripheral vascular resistances and the upstream boundary condition. First, only the peripheral vascular resistance of the SMA was changed, to show the contribution of this resistance to the change in blood flow volume in the SMA, expecting reproduction of the experimentally obtained change. It was found, however, that this was not enough to reproduce the experimental result. Then, we also changed the resistances of the other arteries together with the value given at the upstream boundary. Here, the resistances of the other arteries were changed simultaneously by the same amount. Consequently, we successfully reproduced the hemodynamic change and found that regulation of the upstream boundary condition to the value experimentally obtained after the stimulation is necessary for the reproduction, although statistically significant changes in BP and CI were not observed in the experiment. It is generally known that sympathetic and parasympathetic tones take part in the regulation of the whole systemic circulation, including the cardiac function. The present result indicates that stimulation of ST-36 could induce vasodilation of the peripheral circulation of the SMA and vasoconstriction of that of the other arteries. In addition, it implies that the experimentally obtained small changes in BP and CI induced by the acupuncture may be involved in the therapeutic response.
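
A minimal sketch of the lumped-parameter idea is given below: parallel peripheral resistances fed by a common arterial pressure, with an "acupuncture-like" dilation of the SMA periphery and slight constriction of the others. It is not the authors' model; all resistance and pressure values are illustrative.

```python
# Illustrative lumped-parameter sketch; values are hypothetical.
R = {               # peripheral vascular resistances [mmHg*s/mL]
    "SMA": 8.0,
    "carotid": 6.0,
    "limbs": 4.0,
}
P_lv = 90.0         # mean upstream (left-ventricular) pressure [mmHg]
P_ven = 5.0         # venous pressure [mmHg]

def flows(resistances, p_in, p_out):
    """Flow through each parallel branch: Q_i = (p_in - p_out) / R_i."""
    return {k: (p_in - p_out) / r for k, r in resistances.items()}

q0 = flows(R, P_lv, P_ven)

# "Acupuncture-like" change: dilate the SMA periphery, constrict the others slightly,
# then compare branch flows under the same pressure boundary condition.
R_stim = {"SMA": R["SMA"] * 0.7, "carotid": R["carotid"] * 1.1, "limbs": R["limbs"] * 1.1}
q1 = flows(R_stim, P_lv, P_ven)

for k in R:
    print(f"{k:8s}  Q ratio after/before = {q1[k] / q0[k]:.2f}")
```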

Keywords: acupuncture, hemodynamics, lumped-parameter approximation, modeling, systemic vascular resistance

Procedia PDF Downloads 206
200 Electronic Waste Analysis And Characterization Study: Management Input For Highly Urbanized Cities

Authors: Jilbert Novelero, Oliver Mariano

Abstract:

In a world where technological evolution and competition to create innovative products are at their peak, problems with electronic waste (E-waste) are becoming a global concern. E-waste is any electrical or electronic device that has reached the end of its useful life. The major issues are the volume and the raw materials used in crafting E-waste, which are non-biodegradable and contain hazardous substances that are toxic to human health and the environment. The objective of this study is to gather baseline data on the composition of E-waste in the solid waste stream and to determine the top 5 E-waste categories in a highly urbanized city. Recommendations for managing and reducing these wastes were provided, which may serve as a guide for acceptance and implementation in the locality. Pasig City was the chosen beneficiary of the research output, and through the collaboration of the City Government of Pasig and its Solid Waste Management Office (SWMO), the researcher successfully conducted the Electronic Waste Analysis and Characterization Study (E-WACS) to achieve the objectives. E-WACS, conducted in April 2019, showed that E-waste ranked 4th, comprising 10.39% of the overall solid waste volume. Out of 345,127.24 kg, the total daily domestic waste generation in the city, E-waste accounts for 35,858.72 kg. Moreover, an average of 40 grams of E-waste was determined to be generated per person per day. The top 5 E-waste categories were then classified after the analysis. The category that ranked first was office and telecommunications equipment, which comprised 63.18% of the total generated E-waste. Second in the ranking was the household appliances category, with 21.13%. Third was the lighting devices category, with 8.17%. Fourth in the ranking was the consumer electronics and batteries category, with 5.97%, and fifth was the wires and cables category, which comprised 1.41% of the average generated E-waste samples. One of the recommendations provided in this research is the implementation of the Pasig City Waste Advantage Card. The card can be used as a privilege card, and earned points can be converted to avail of services such as haircuts, massages, dental services, and medical check-ups. Another recommendation is for the LGU to encourage a dialogue with the technology and electronics manufacturers and distributors, both international and local companies, to plan the retrieval and disposal of E-waste in accordance with the Extended Producer Responsibility (EPR) policy, where producers are given significant responsibilities for the treatment and disposal of post-consumer products.

Keywords: E-waste, E-WACS, E-waste characterization, electronic waste, electronic waste analysis

Procedia PDF Downloads 97
199 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis and electrophoresis on a microfluidic chip (MFC), mass spectrometry with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows the analysis of small sample volumes with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for the rapid fabrication of MFCs. A master form of silicon and SU-8 2025 photoresist (MicroChem Corp.) was created for the formation of micro-sized structures in PDMS. A universal topology which combines a T-injector and a simple cross was selected for the electrophoretic separation of the sample. K8 glass and PDMS Sylgard® 184 (Dow Corning Corp.) were used for the fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample. Therefore, estimating the magnitude of the EOF and the ways of regulating it are of interest for the development of new methods of electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., using a borate buffer. The influence of the physical and chemical treatment methods on the wetting properties of the PDMS surface was controlled by the sessile drop method. The most effective way of modifying the surface of the MFCs, from the standpoint of obtaining the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188. This method of modification was selected for the treatment of the channels of the MFCs used for the separation of a mixture of fluorescently labeled oligonucleotides with chain lengths of 10, 20, 30, 40 and 50 nucleotides. Electrophoresis was performed on the device MFAS-01 (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7 M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms. The time for the untreated MFC was ~275 s, and for the ones treated with the Kolliphor® P 188 solution it was ~220 s. Research on physical-chemical methods of surface modification of MFCs allowed us to choose the most effective way of reducing the EOF – modification with an aqueous solution of Kolliphor® P 188. In this case, the separation time of the mixture of oligonucleotides decreased by about 20%. Further optimization of the method of modification of the channels of MFCs will allow decreasing the separation time of the sample and increasing the throughput of the analysis.
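
For context, the electroosmotic mobility referred to above is conventionally related to the zeta potential through the Helmholtz–Smoluchowski relation, and the current-monitoring method of Huang et al. yields it from the time needed for a buffer front to traverse the channel (generic forms, not quoted from the paper):

v_{eo} = \mu_{eo} E, \qquad \mu_{eo} = \frac{\varepsilon_0 \varepsilon_r\, \zeta}{\eta}, \qquad \mu_{eo} \approx \frac{L^{2}}{V\, \Delta t},

where L is the channel length, V the applied voltage and Δt the time for the current to reach its new plateau after the buffer exchange.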

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 387
198 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from a finite number of snapshots. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, which leads to estimation errors in the covariance matrix that cause beam distortion, so that the output pattern cannot form a dot-shape beam. It also exhibits main-lobe deviation and high sidelobe levels in the case of limited snapshots. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reduce the divergence of the small eigenvalues in the noise subspace, and improve the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
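
The eigenvalue-correction step can be sketched with a standard MVDR/LCMV weight computation as below; the exponential "lifting" rule for the small noise-subspace eigenvalues and all array parameters are illustrative, not the authors' exact formulation.

```python
import numpy as np

def corrected_mvdr_weights(snapshots, steering, n_sources=1, p=0.5):
    """snapshots: (N_elements, K) complex array; steering: (N_elements,) vector."""
    N, K = snapshots.shape
    R = snapshots @ snapshots.conj().T / K          # sample covariance from K snapshots
    w_eig, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
    lam = w_eig.copy()
    # Exponentially lift the small noise-subspace eigenvalues to reduce their
    # spread when K is small (the exponent p plays the role of a correction index).
    noise = lam[:-n_sources]
    lam[:-n_sources] = noise.max() * (noise / noise.max()) ** p
    R_corr = (V * lam) @ V.conj().T                 # reconstructed covariance
    R_inv = np.linalg.inv(R_corr)
    return R_inv @ steering / (steering.conj().T @ R_inv @ steering)

# Toy usage: 8-element array, 20 snapshots of noise plus one strong interferer.
rng = np.random.default_rng(0)
N, K = 8, 20
a_des = np.exp(1j * np.pi * np.arange(N) * np.sin(0.0))   # look direction
a_int = np.exp(1j * np.pi * np.arange(N) * np.sin(0.5))   # interferer direction
X = 10 * a_int[:, None] * rng.standard_normal(K) + \
    (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
w = corrected_mvdr_weights(X, a_des)
print(abs(w.conj() @ a_des), abs(w.conj() @ a_int))  # 1 (distortionless) vs. small response
```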

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 105
197 A Method against Obsolescence of Three-Dimensional Archaeological Collection. Two Cases of Study from Qubbet El-Hawa Necropolis, Aswan, Egypt

Authors: L. Serrano-Lara, J. M. Alba-Gómez

Abstract:

The Qubbet el-Hawa Project has documented archaeological artifacts as 3D models using laser scanning since 2015. Currently, the research has established the right methodology to develop a high-accuracy photographic texture for each geometrical 3D model. Furthermore, the right methodology to embed the complete digital surrogate in a 3D PDF document has been obtained; it is used as a catalogue worksheet that carries archaeological data and, at the same time, allows us to obtain precise measurements, volume calculations and cross-section mapping of each scanned artifact. This validated archaeological documentation is the first step towards dissemination, application in the Qubbet el-Hawa Virtual Museum and, moreover, a multi-sensory experience through 3D-printed archaeological artifacts. Material culture from four funerary complexes constructed in West Aswan has become physical replicas, opening up the archaeological research process itself and offering creative possibilities for museology or educational projects. This paper shares a method of acquiring texture for the scanning output in order to achieve 3D PDF archaeological cataloguing and, on the other hand, to allow the colour 3D printing of singular archaeological artifacts. The proposed method has been applied to two concrete cases, a polychrome wooden ushabti and a cartonnage mask belonging to a lady, both recovered from the intact tomb QH34aa. Both 3D model results have been implemented in three main applications: the archaeological 3D catalogue, public dissemination activities, and the 3D artifact model in a bachelor education program. Through those three applications, productive interaction between spectator and three-dimensional artifact has increased; moreover, functionality as archaeological documentation has been consolidated. By finding the right methodology to assign a specific color to each vertex of the geometric 3D model, we achieved two essential archaeological applications: firstly, the 3D PDF as a display document for an archaeological catalogue; secondly, the possibility to obtain a colored 3D-printed object to be displayed in public exhibitions. Obsolescent 3D models have become updated archaeological documentation of the cultural material of tomb QH43aa. Therefore, the Qubbet el-Hawa Project has actualized the educational potential of its results thanks to a multi-sensory experience that arose from 3D-scanned archaeological artifacts.

Keywords: 3D printed, 3D scanner, Middle Kingdom, Qubbet el-Hawa necropolis, virtual archaeology

Procedia PDF Downloads 114
196 Optimization Based Design of Decelerating Duct for Pumpjets

Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan

Abstract:

Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. The reasons for the frequent use of pumpjets as a propulsion system are their higher relative efficiency at high speeds and better cavitation and acoustic performance than their rivals. Pumpjets are composed of a rotor, a stator and a duct, and there are two different pumpjet configurations depending on the desired hydrodynamic characteristic: with an accelerating or a decelerating duct. Pumpjets with an accelerating duct are used on cargo ships, where they work at low speeds and high loading conditions. The working principle of this type of pumpjet is to maximize the thrust by reducing the pressure of the fluid through the duct and throwing the fluid out of the duct with high momentum. On the other hand, for decelerating ducted pumpjets, the main consideration is to prevent the occurrence of cavitation by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating ducted systems are used on noise-sensitive vehicles where acoustic performance is vital. Therefore, duct design becomes a crucial step in pumpjet design. This study aims to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle by using proper optimization tools. The target output of this optimization process is a duct design that maximizes the fluid pressure around the rotor region to prevent cavitation and minimizes the drag force. There are two main optimization techniques that could be utilized for this process: parameter-based optimization and gradient-based optimization. While the parameter-based algorithm offers major changes in the geometry of interest, which lets the user get close to the desired geometry, the gradient-based algorithm deals with minor local changes in the geometry. In parameter-based optimization, the geometry should be parameterized first. Then, by defining upper and lower limits for these parameters, the design space is created. Finally, with a proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. These control points have limits on their x and y coordinates. By regarding these limits, the design space is generated.
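
The B-spline parameterization step can be sketched as follows; the control-point coordinates, bounds and knot construction are hypothetical values chosen only to show how a duct wall profile and its design space might be set up.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# (x, y) control points of the duct inner-wall meridional profile [m], illustrative
ctrl = np.array([
    [0.00, 0.120],
    [0.05, 0.118],
    [0.10, 0.112],   # interior points act as the optimization variables
    [0.15, 0.108],
    [0.20, 0.110],
])
n = len(ctrl)
# Clamped knot vector so the curve passes through the first and last control points
knots = np.concatenate(([0.0] * (degree + 1),
                        np.linspace(0, 1, n - degree + 1)[1:-1],
                        [1.0] * (degree + 1)))
curve = BSpline(knots, ctrl, degree)

# Design space: each free control point may move within +/- bounds in y
lower = ctrl[1:-1, 1] - 0.01
upper = ctrl[1:-1, 1] + 0.01
print("design-space bounds for free y-coordinates:", lower, upper)

u = np.linspace(0, 1, 50)
profile = curve(u)               # (50, 2) points along the duct wall
print(profile[0], profile[-1])   # end points coincide with the end control points
```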

Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization

Procedia PDF Downloads 174
195 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, the survey carried out so far has been done by deploying officers to conduct one-by-one surveys of each PDAM customer. Surveys with this method require a lot of effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The use of this tool is also quite simple: the device is installed in a car so that it can record the surrounding buildings while the car is running. Mobile mapping technology generally uses lidar sensors combined with GNSS, but this technology requires high costs. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors added to GNSS and IMU sensors. The camera used has specifications of 3 MP, a resolution of 720p, and a diagonal field of view of 78°. The principle of this invention is to integrate four webcam camera sensors with GNSS and IMU to acquire photo data tagged with location (latitude, longitude) and orientation (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum suction mount to attach it to the car's roof so it does not fall off while driving. The output data from this technology are analyzed with artificial intelligence to reduce similar data (cosine similarity) and then classify building types. Data reduction is used to eliminate similar images while keeping the image that displays the complete house, so that it can be processed for the later classification of buildings. The AI method used is transfer learning, utilizing a pre-trained model named VGG-16. From the analysis of the similarity data, it was found that the data reduction reached 50%. Then georeferencing is done using the Google Maps API to get address information according to the coordinates in the data. After that, a geographic join is performed to link the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
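
The similarity-based reduction step can be sketched as below. In the described pipeline the descriptors would come from a pre-trained VGG-16 (transfer learning); here random vectors stand in for them, and the 0.9 threshold is an assumed value.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def reduce_similar(features, threshold=0.9):
    """Keep the first image of each group of near-duplicates (greedy filter)."""
    kept = []
    for i, f in enumerate(features):
        if all(cosine_similarity(f, features[j]) < threshold for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(1)
feats = rng.standard_normal((10, 4096))                   # 10 images, VGG-16-sized descriptors
feats[3] = feats[2] + 0.01 * rng.standard_normal(4096)    # a near-duplicate frame
print(reduce_similar(list(feats)))                        # index 3 is dropped as similar to 2
```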

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 57
194 Influence of Infrared Radiation on the Growth Rate of Microalgae Chlorella sorokiniana

Authors: Natalia Politaeva, Iuliia Smiatskaia, Iuliia Bazarnova, Iryna Atamaniuk, Kerstin Kuchta

Abstract:

Nowadays, the progressive decrease of primary natural resources and the ongoing upward trend in energy demand have resulted in the development of a new generation of technological processes which are focused on step-wise production and residue utilization. Thus, the microalgae-based third-generation bioeconomy is considered one of the most promising approaches, allowing the production of value-added products and sophisticated utilization of residual biomass. In comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. However, one of the most challenging tasks is to cope with seasonal variations and to achieve optimal growing conditions in indoor closed systems that can cover further demand for the material and energetic utilization of microalgae. For instance, outdoor cultivation in St. Petersburg (Russia) is only suitable within a rather narrow time frame (from mid-May to mid-September). At earlier and later periods, insufficient sunlight and heat for the growth of microalgae were detected. On the other hand, without additional physical effects, the biomass increment in summer is 3-5 times per week, depending on the solar radiation and the ambient temperature. In order to increase biomass production, scientists from all over the world have proposed various technical solutions for cultivators and have been studying the influence of various physical factors affecting biomass growth, namely magnetic fields, radiation, electric fields, etc. In this paper, the influence of infrared radiation (IR) and fluorescent light on the growth rate of the microalga Chlorella sorokiniana has been studied. The cultivation of Chlorella sorokiniana was carried out in 500 ml cylindrical glass vessels, which were constantly aerated. To accelerate the cultivation process, the mixture was stirred for 15 minutes at 500 rpm followed by 120 minutes of rest time. At the same time, the metabolic nutrient needs were provided by the addition of micro- and macro-nutrients to the microalgae growing medium. Lighting was provided by fluorescent lamps with an intensity of 2500 ± 300 lx. The influence of IR was determined using IR lamps with a voltage of 220 V and a power of 250 W, in order to achieve an intensity of 13 600 ± 500 lx. The obtained results show that under the influence of fluorescent lamps, together with the combined effect of active aeration and variable mixing, the biomass increment on the 2nd day was threefold, and on the 7th day it was eightfold. The growth rate of the microalgae under the influence of IR radiation was lower and reached 22.6·10⁶ cells·mL⁻¹. However, the application of IR lamps for biomass growth allows maintaining the optimal temperature of the microalgae suspension at approximately 25-28°C, which might be especially beneficial during the cold season in extreme climate zones.

Keywords: biomass, fluorescent lamp, infrared radiation, microalgae

Procedia PDF Downloads 170
193 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch

Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón

Abstract:

Snack foods are usually classified as ‘junk food’ because they have little nutritional value. However, given the increasing demand in the third-generation (3G) snack market, their low price, and their ease of preparation, they can be considered carriers of compounds with certain nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps to control metabolic problems and has colon anti-cancer properties. The active compound can be developed by chemical cross-linking of starch with phosphate salts to obtain a type 4 resistant starch (RS4). The chemical reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is a versatile and low-cost procedure. Starch is the major ingredient in the manufacture of 3G snacks, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). Due to this, the aim of this research was to develop a 3G snack with RS4 from sorghum starch under previously determined optimal extrusion conditions, and to carry out its sensory, chemical and structural characterization. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and set to 28.5% moisture content. Then, the sample was processed in a single-screw extruder equipped with a rectangular die. The inlet, transport and output temperatures were 60°C, 134°C and 70°C, respectively. The resulting pellets were expanded in a microwave oven. The expansion index (EI), penetration force (PF) and sensory attributes were evaluated in the expanded pellets. The pellets were milled to obtain flour, and the RS content, degree of substitution (DS), and percentage of phosphorus (%P) were measured. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC) and scanning electron microscopy (SEM) analyses were performed in order to determine structural changes after the process. The results for the 3G snack were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR: DS, 0.024 ± 0.003 and %P, 0.35 ± 0.15 [values permitted for food additives (<4 %P)]. In this work, an increase of the gelatinization temperature after the cross-linking of the starch was detected; the loss of granular structure and vapor bubbles after expansion were observed by SEM; and a loss of crystallinity after the extrusion process was observed by X-ray diffraction. Finally, a 3G snack with RS4 was obtained by extrusion technology. Sorghum starch was efficient for 3G snack production.

Keywords: extrusion, resistant starch, snack (3G), Sorghum

Procedia PDF Downloads 288
192 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment

Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov

Abstract:

Filtration with nanoporous ultrafiltration membranes is an attractive option to remove ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are commonly used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymer were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique which has not been used for membrane modification up to now. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential and contact angle before and after deposition. Firstly, contact angle (θ) measurements show that the surface hydrophilicity is notably improved by coating with either PEI or PSS. Moreover, the contact angle decreases monotonically with the amount of sprayed solution. Additionally, the hydrophilicity enhancement proved to be better with PSS (from 62 to 35°) than with PEI (from 62 to 53°). Values of the zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference applied across a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH 5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except for very low quantities (2 μL/cm²). Cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, the rejection of a salt containing a divalent cation can be increased from 1 to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase in negative charge induced by PSS deposition leads to an increase in NaCl rejection from 5 to 45% due to electrostatic repulsion of the Cl⁻ ion by the negative surface charge. Finally, a notable fall in the permeation flux due to the polymer layer coated on the surface was observed, and the optimal polymer concentration in the sprayed solution remains to be determined to optimize performance.
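
For orientation only, the sketch below illustrates the two quantities used to judge the deposits: the observed ion rejection and a Helmholtz-Smoluchowski-type estimate of the zeta potential from the streaming-current slope in a slit channel. The channel dimensions and the slope value are hypothetical placeholders, not data from this study.

# Minimal sketch (values are placeholders, not the paper's data): observed ion rejection
# and a Helmholtz-Smoluchowski-type estimate of the zeta potential from the streaming
# current measured across a slit channel formed by two membrane samples.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R = 78.5              # relative permittivity of water at ~25 C
ETA = 0.89e-3             # dynamic viscosity of water, Pa.s

def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf."""
    return 1.0 - c_permeate / c_feed

def zeta_from_streaming_current(dI_dP, length, width, height):
    """Zeta potential (V) from the streaming-current slope dI/dP (A/Pa)
    for a rectangular slit channel of given length, width and gap height (m)."""
    area = width * height                       # channel cross-section
    return dI_dP * ETA * length / (EPS0 * EPS_R * area)

# Hypothetical example values for illustration only
print(f"R = {observed_rejection(0.55, 1.0):.2f}")               # 45 % NaCl rejection
zeta = zeta_from_streaming_current(dI_dP=-2.0e-12, length=20e-3,
                                   width=10e-3, height=100e-6)
print(f"zeta = {zeta*1e3:.1f} mV")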

Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity

Procedia PDF Downloads 167
191 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As key spacecraft subsystems, various propulsion systems have been developing rapidly, including ion thrusters, laser propulsion, solar sails and other micro-thrusters. However, these systems still have some shortcomings. The ion thruster requires a high voltage or a magnetic field for acceleration, resulting in extra subsystems, added mass and large volume. Laser propulsion is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust produced by alpha particles is expounded. Radioactive materials with different released energies, such as 210Po (5.4 MeV) and 238Pu (5.29 MeV), attached to a metal film provide thrusts in the range of 0.02-5 µN/cm². With this reaction force, radiation is able to serve as a propulsion source. With the advantages of low system mass, high accuracy and long active lifetime, radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of the thrust to the released energy and decay constant is set up. With this initial formula, the alpha-emitting nuclides with half-lives longer than a hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With residual charge or an additional electromagnetic field, the emission of alpha particles behaves differently, and this is analyzed in this paper. Furthermore, three more complex situations are discussed: a radioactive element emitting alpha particles at several energies with different intensities, a mixture of various radioactive elements, and cascaded alpha decay. In combination, the thrust amplitude can be adjusted more efficiently and flexibly. The propulsion model for spontaneous fission is similar to that for alpha decay, with a more complex angular distribution. A new quasi-spherical space propulsion system based on radiation thrust is introduced, as well as the system for collecting and processing the excess charge and reaction heat. The energy and spatial angular distributions of the emitted alpha particles per unit area and for a specific propulsion system have been studied. As alpha particles easily lose energy and are self-absorbed, the distribution is not a simple superposition over each nuclide. With changes in the amplitude and angle of the radiation thrust, an orbital variation strategy for space debris removal is shown and optimized.
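
A rough order-of-magnitude check (our simplified model, not the authors' formula) treats the film as a thin alpha source on a fully absorbing backing: half of the decays emit outward, carrying an average normal momentum component of p/2, so the thrust per unit area is roughly R·p/4 for areal decay rate R. The areal activity below is an assumed value for illustration only.

# Back-of-envelope sketch (model assumptions are ours, not the paper's): thrust per unit
# area from a thin alpha source on an absorbing backing film. Alphas emitted into the
# film are absorbed; those emitted outward (half of all decays) escape with an average
# momentum component normal to the surface of p/2, so F/A ~ R * p / 4 for areal decay
# rate R.
import math

MEV = 1.602e-13          # J per MeV
M_ALPHA = 6.644e-27      # alpha particle mass, kg

def alpha_momentum(E_MeV):
    """Non-relativistic momentum of an alpha particle of kinetic energy E (MeV)."""
    return math.sqrt(2.0 * M_ALPHA * E_MeV * MEV)

def thrust_per_cm2(E_MeV, decays_per_s_per_cm2):
    """Thrust (N/cm^2) for an areal decay rate, thin-source / thick-backing model."""
    return decays_per_s_per_cm2 * alpha_momentum(E_MeV) / 4.0

# 210Po (5.4 MeV) with a hypothetical areal activity chosen only for illustration
R = 1.0e12                                   # decays per second per cm^2 (assumed)
print(f"p_alpha = {alpha_momentum(5.4):.3e} kg m/s")
print(f"F/A     = {thrust_per_cm2(5.4, R)*1e6:.3f} uN/cm^2")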

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 175
190 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design of the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for an optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, which effectively removes liquid water and enhances oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured, without a large loss in cell performance, is also tested. The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.
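
For readers unfamiliar with the kinetics model mentioned above, a minimal sketch of the Butler-Volmer relation is given below; the exchange current density and transfer coefficients are generic placeholders rather than the parameters of the paper's catalyst-layer model.

# Minimal sketch of the Butler-Volmer relation used to describe electrochemical reactions
# in catalyst layers; i0, alpha_a and alpha_c below are generic placeholder values, not
# those of the paper's model.
import math

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)

def butler_volmer(i0, alpha_a, alpha_c, eta, T=353.15):
    """Current density i (A/m^2) as a function of activation overpotential eta (V)."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))

# Example: generic kinetics at 80 C for a few overpotentials
for eta in (0.05, 0.2, 0.4):
    print(f"eta = {eta:.2f} V -> i = {butler_volmer(1.0, 0.5, 0.5, eta):.2f} A/m^2")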

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 276
189 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The presented material concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger and catalytic converters, which are called the “hot part.” The second part is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the “cold part.” The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the “cold part” with high accuracy in a given frequency range, but only on the condition that the input parameters are specified accurately, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine with its “hot part.” Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow purely calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with its “hot part” based on a set of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the “cold part” of the exhaust system (taking into account the acoustic radiation impedance of the outlet pipe into open space), with the result in the form of the input impedance of the “cold part.” The second part is a finite element simulation of the “hot part” of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the “hot part.” The third part of the technique consists of the mathematical processing of the results according to the proposed formula for summing the convergent series of multiple reflections of the acoustic signal between the “cold part” and the “hot part.” This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the “hot part” and “cold part” of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the “incident” and “reflected” waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain the result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
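
The separation of incident and reflected waves from two pressure sensors can be illustrated with the minimal single-frequency sketch below; it assumes plane waves and a known speed of sound, and omits the temperature and mean-flow corrections that a real exhaust measurement requires. All numerical values are hypothetical.

# Illustrative sketch (not the authors' code) of separating "incident" and "reflected"
# wave amplitudes from two pressure sensors mounted a known distance apart, per
# frequency: p(x) = A*exp(-j*k*x) + B*exp(+j*k*x).
import numpy as np

def decompose(P1, P2, x1, x2, freq, c=500.0):
    """Return incident (A) and reflected (B) complex amplitudes at one frequency.
    P1, P2: complex pressure amplitudes at sensor positions x1, x2 (m);
    c: speed of sound in the hot gas (m/s), an assumed value here."""
    k = 2.0 * np.pi * freq / c
    M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    A, B = np.linalg.solve(M, np.array([P1, P2]))
    return A, B

# Hypothetical single-frequency example
A, B = decompose(P1=1.0 + 0.2j, P2=0.7 - 0.4j, x1=0.00, x2=0.05, freq=400.0)
print(f"|A| = {abs(A):.3f}, |B| = {abs(B):.3f}, reflection coeff = {abs(B/A):.3f}")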

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 31
188 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach

Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake

Abstract:

Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless-handling prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics. To this end, theoretical and experimental investigations are made. It is demonstrated that the ultrasonic transducer prototype design is better in terms of levitation capability; however, it presents some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components and integrated circuits, non-contact, oil-free, ultra-precision and low-wear transport along the production line is crucial. One of the designs (design A) is called the ultrasonic chuck, the main part of which is an ultrasonic transducer (Langevin, FBI 28452 HS). The other (design B) is a vibrating plate design, which consists of a plain rectangular aluminium plate firmly fastened at both ends. The size of the rectangular plate is 200×100×2 mm. In addition, four round piezoelectric actuators, 28 mm in diameter and 0.5 mm thick, are glued to the underside of the plate. The vibrating plate is clamped at both ends in the horizontal plane by a steel supporting structure. The dynamics of levitation using designs A and B have been investigated on the basis of squeeze-film levitation (SFL). The input apparatus used with both designs consists of a sine-wave signal generator connected to an amplifier of type ENP-1-1U (Echo Electronics), which magnifies the sine-wave voltage produced by the signal generator. For design A, the measured maximum levitation heights for three semiconductor wafers weighing 52, 70 and 88 g are 240, 205 and 187 µm, respectively, whereas for design B the average separation distance for a 5 g disc reaches 70 µm. By using the squeeze-film levitation approach, it is possible to hold an object in a non-contact manner. The analysis of the investigation outcomes indicates that design A provides better non-contact levitation than design B; however, design A is more complicated than design B in terms of manufacturing. In order to identify an adequate non-contact SFL design, a comparison between these two common designs has been adopted for the current investigation. Specifically, the study involves comparisons in terms of the following issues: floating component geometries and material-type constraints; the resulting pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and the complexity and compactness of the mechanical design. Considering all these matters is essential to proficiently distinguish the better SFL design.
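
As a rough plausibility check (not the authors' analysis), the textbook high squeeze-number approximation for a parallel squeeze film, mean pressure ≈ p0·sqrt(1 + 1.5·ε²) with excursion ratio ε = amplitude/gap, can be used to estimate the float height; the vibration amplitude and disc dimensions below are assumed values, not measured ones.

# Rough sketch using the textbook high squeeze-number approximation for a parallel
# squeeze film: estimates the float height at which the mean squeeze-film overpressure
# supports the object's weight. Amplitude and disc dimensions are illustrative only.
import math

P0 = 101325.0    # ambient pressure, Pa
G = 9.81         # gravity, m/s^2

def float_height(mass_kg, area_m2, amplitude_m):
    """Estimated equilibrium gap (m) for a levitated object of given mass and area."""
    load = mass_kg * G / area_m2                    # required overpressure, Pa
    eps_sq = ((1.0 + load / P0) ** 2 - 1.0) / 1.5   # excursion ratio squared
    return amplitude_m / math.sqrt(eps_sq)

# Hypothetical 52 g disc of 100 mm diameter, 5 um drive amplitude
area = math.pi * 0.05 ** 2
h = float_height(0.052, area, 5e-6)
print(f"estimated float height ~ {h*1e6:.0f} um")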

Keywords: ANSYS, floating, piezoelectric, squeeze-film

Procedia PDF Downloads 125
187 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft due to the nature of their power source. Improving the efficiency of this kind of system is extremely meaningful to encourage aircraft operation with less environmental impact. The propeller efficiency strongly affects the overall efficiency of the propulsion system; hence its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller in order to maximize range and endurance by estimating the best combination of geometrical parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a certain cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, additionally corrected to account for tip and hub losses, Mach number and rotational effects; furthermore, an approximation of the airfoil lift and drag coefficients obtained from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and the suitability of different turbulence models, is implemented to feed the BEM method, with the aim of achieving more reliable results. Additionally, Vortex Theory is employed to find the optimum pitch and chord distributions to achieve a minimum induced-loss propeller design. Moreover, the optimization takes into account the well-known brushless motor model, thrust constraints for take-off runway limitations, the maximum allowable propeller diameter due to aircraft height, and the maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and the APC reported performance curves, which are based on Vortex Theory fed with the NASA Transonic Airfoil code, showing an adequate fit with the experimental data, even better than the reported APC data. The optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Trend charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools shows an improvement in the accuracy of the BEM predictions. Results also showed that a propeller has higher efficiency peaks when it operates at high rotational speed due to the higher Reynolds number, at which airfoils present lower drag. On the other hand, the behavior of the current consumption in relation to the propulsive efficiency shows counterintuitive results: the best range and endurance are not necessarily achieved at an efficiency peak.
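
The momentum/blade-element balance underlying the BEM part of the method can be illustrated with the minimal single-station sketch below; it omits the tip/hub-loss, Mach and rotational corrections used in the paper and replaces the CFD-derived polars with a thin-airfoil placeholder, so it is indicative only.

# Minimal blade-element-momentum (BEM) sketch for one radial station of a propeller,
# without tip/hub loss, Mach or rotational corrections. The thin-airfoil lift slope and
# constant drag stand in for the paper's CFD-derived polars and are placeholders only.
import math

RHO = 1.225   # air density, kg/m^3

def bem_station(V, omega, r, chord, pitch, B=2, iters=200):
    """Thrust and torque per unit radius (dT/dr in N/m, dQ/dr in N) at radius r (m)."""
    a, ap = 0.0, 0.0                               # axial / tangential induction factors
    sigma = B * chord / (2.0 * math.pi * r)        # local solidity
    for _ in range(iters):
        phi = math.atan2(V * (1.0 + a), omega * r * (1.0 - ap))   # inflow angle
        alpha = pitch - phi                        # angle of attack, rad
        cl = 2.0 * math.pi * alpha                 # thin-airfoil placeholder polar
        cd = 0.02                                  # constant-drag placeholder
        cy = cl * math.cos(phi) - cd * math.sin(phi)
        cx = cl * math.sin(phi) + cd * math.cos(phi)
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cy) - 1.0)
        ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * cx) + 1.0)
        a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)            # under-relaxation
    W2 = (V * (1.0 + a)) ** 2 + (omega * r * (1.0 - ap)) ** 2     # relative speed squared
    dT = 0.5 * RHO * W2 * B * chord * cy
    dQ = 0.5 * RHO * W2 * B * chord * cx * r
    return dT, dQ

# Example station: 20 m/s cruise, 6000 rpm, r = 0.15 m, 3 cm chord, 20 deg pitch
dT, dQ = bem_station(V=20.0, omega=6000.0 * 2.0 * math.pi / 60.0, r=0.15,
                     chord=0.03, pitch=math.radians(20.0))
print(f"dT/dr = {dT:.1f} N/m, dQ/dr = {dQ:.2f} N")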

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range

Procedia PDF Downloads 86
186 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created. Then, projection matrices related to this matrix are generated and are utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
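
For context, a small numerical sketch of a standard (non-robust) GSC is given below: a quiescent beam, a blocking matrix orthogonal to the presumed steering vector, and MMSE weights on the blocked branch. It does not implement the paper's iterative steering-vector correction, and the array and scenario parameters are illustrative.

# Small numerical sketch of a standard generalized sidelobe canceller (GSC) on a uniform
# linear array, using the presumed steering vector directly (no mismatch correction).
import numpy as np

rng = np.random.default_rng(0)
N, snapshots = 8, 2000                          # sensors, data length

def steer(theta_deg):
    """Steering vector of a half-wavelength-spaced ULA."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

# Simulated data: desired signal at 0 deg, interference at 40 deg, white noise
s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
intf = 3 * (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((N, snapshots)) + 1j * rng.standard_normal((N, snapshots)))
X = np.outer(steer(0.0), s) + np.outer(steer(40.0), intf) + noise

a0 = steer(0.0)                                 # presumed steering vector
wq = a0 / (a0.conj() @ a0)                      # quiescent weight vector
U, _, _ = np.linalg.svd(a0[:, np.newaxis])      # full SVD of the column vector a0
B = U[:, 1:]                                    # blocking matrix: columns satisfy B^H a0 = 0

d = wq.conj() @ X                               # upper-branch (reference) output
Z = B.conj().T @ X                              # lower-branch (blocked) data
wa = np.linalg.solve(Z @ Z.conj().T, Z @ d.conj())   # MMSE adaptive weights
y = d - wa.conj() @ Z                           # GSC output

print(f"output power before/after cancellation: {np.var(d):.3f} / {np.var(y):.3f}")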

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 87
185 Optimal Applications of Solar Energy Systems: Comparative Analysis of Ground-Mounted and Rooftop Solar PV Installations in Drought-Prone and Residential Areas of the Indian Subcontinent

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhyay

Abstract:

The increasing demand for environmentally friendly energy solutions highlights the need to optimize solar energy systems. This study compares two types of solar energy systems: ground-mounted solar panels for drought-prone locations and rooftop solar PV installations of 300 sq. ft. (approx. 28 sq. m.), whose electricity output of 4730 kWh/year saves about ₹14,191/year. As a clean and sustainable energy source, solar power is pivotal in combating climate change, with such a rooftop system reducing greenhouse gas (CO2) emissions by 85 tonnes over 25 years. The "PM Suryadaya Ghar-Muft Bijli Yojana" initiative seeks to empower Indian homes by giving free access to solar energy and is part of the Indian government's larger effort to encourage clean and renewable energy sources while reducing reliance on conventional fossil fuels. This report reviews various installations and government reports to analyse the performance and impact of both ground-mounted and rooftop solar systems, as well as the effectiveness of government subsidy programs for residential on-grid solar systems, including the ₹78,000 incentive for systems above 3 kW. The study also looks into the subsidy schemes available for domestic agricultural grid use: systems up to 3 kW receive ₹43,764, while systems over 10 kW receive a fixed subsidy of ₹94,822. Households can save a substantial amount of energy and minimize their reliance on grid electricity by installing the proper solar plant capacity; in terms of monthly household consumption, the suitable rooftop solar plant capacities are 1-2 kW for 0-150 units, 2-3 kW for 150-300 units, and above 3 kW for more than 300 units. Ground-mounted panels, particularly in arid regions, offer benefits such as scalability and optimal orientation but face challenges such as land-use conflicts and environmental impact, particularly in drought-prone regions. By evaluating the distinct advantages and challenges of each system, this study aims to provide insights into their optimal applications, guiding stakeholders in making informed decisions to enhance solar energy efficiency and sustainability within regulatory constraints. This research also explores the implications of regulations, such as Italy's ban on ground-mounted solar panels on productive agricultural land, for solar energy strategies.
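
The headline rooftop figures above are mutually consistent, as the short check below shows; the implied tariff and grid emission factor are derived by us and are not values stated by the authors.

# Quick arithmetic check of the rooftop figures quoted above; the implied tariff and
# grid emission factor are derived quantities, not values stated in the study.
annual_output_kwh = 4730.0
annual_savings_inr = 14191.0
co2_saved_tonnes = 85.0
lifetime_years = 25.0

tariff = annual_savings_inr / annual_output_kwh                  # Rs per kWh implied
emission_factor = co2_saved_tonnes * 1000.0 / (annual_output_kwh * lifetime_years)

print(f"implied tariff          ~ Rs {tariff:.2f} per kWh")
print(f"implied emission factor ~ {emission_factor:.2f} kg CO2 per kWh")
print(f"monthly generation      ~ {annual_output_kwh / 12:.0f} kWh")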

Keywords: sustainability, solar energy, subsidy, rooftop solar energy, renewable energy

Procedia PDF Downloads 15
184 Comparative Quantitative Study on Learning Outcomes of Major Study Groups of an Information and Communication Technology Bachelor Educational Program

Authors: Kari Björn, Mikael Soini

Abstract:

Higher education system reforms, especially the 2014 reform of the Finnish Universities of Applied Sciences system, are discussed. The new steering model is based on major legislative changes, output-oriented funding and open information. The governmental steering reform, especially the financial model and the resulting institution-level responses such as curriculum reforms, is discussed, focusing especially on engineering programs. The paper is motivated by a management need to establish objective steering-related performance indicators and to apply them consistently across all educational programs. The close relationship to the governmental steering and funding model implies that internally derived indicators can be directly applied. Metropolia University of Applied Sciences (MUAS), as the case institution, is briefly introduced, focusing on engineering education in Information and Communications Technology (ICT) and its related programs. The reform forced the consolidation of previously separate smaller programs into fewer units of student application. Under the new curriculum, ICT students have a common first year before they apply for a Major. A framework of parallel and longitudinal comparisons is introduced and used across Majors in two campuses. The new externally introduced performance criteria are applied internally to ICT Majors using data from before and after the program merger. A comparative performance of the Majors after completion of the joint first year is established, focusing on previously omitted Majors for completeness of analysis. Some new research questions resulting from the transfer of Majors between campuses and from quota setting are discussed. The practical orientation identifies best practices to share and targets needing the most attention for improvement. This level of analysis is directly applicable at the student group and teaching team level, where corrective actions are possible when identified. The analysis is quantitative, and the nature of the corrective actions is not discussed. Causal relationships and factor analysis are omitted, because the campuses, their staff and various pedagogical implementation details still contain too many undetermined factors for our limited data. Such qualitative analysis is left for further research. Further study must, however, be guided by the relevance of the observations.

Keywords: engineering education, integrated curriculum, learning outcomes, performance measurement

Procedia PDF Downloads 203
183 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance and large capacity. For instance, many large-scale wind farms have been connected to the power grid, and their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, much new equipment based on power electronics has been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator) and so on, which can help to deal with the problem above. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control capability and respond faster, but they are too expensive to be used widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring the advantages of both into play, improving both control capability and economic efficiency. Firstly, the coordination across different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method for a two-layer regional power grid versus its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by the generators, while the power flow of a specific line between the two-layer regions can be adjusted by the FSC and TCSC. The smaller the interface power flow adjusted by the generators, the larger the control margin of the TCSC; on the other hand, the total generator consumption is much higher. Secondly, the coordination across different time scales is studied to further balance the total generator consumption against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination method for two-layer ultra-short-term correction versus AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned multi-space-time-scale method is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to reducing control cost and will provide a reference for subsequent studies in this field.
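
To make the time-scale coordination step concrete, the toy sketch below shows how a genetic algorithm can trade generator rescheduling against TCSC usage under an interface-flow constraint; the sensitivities, cost weights and bounds are invented placeholders and do not represent the paper's optimal control model.

# Toy sketch of the genetic-algorithm step described above: decision variables,
# sensitivities and cost weights are invented placeholders, only meant to show how a GA
# can trade generator rescheduling against TCSC usage under a flow constraint.
import random

random.seed(1)
TARGET_MW = 60.0             # interface-flow correction required (assumed)
S_GEN, S_TCSC = 1.0, 0.8     # assumed MW sensitivity per unit of each control

def cost(x):
    dPg, dXtcsc = x
    flow_error = TARGET_MW - (S_GEN * dPg + S_TCSC * dXtcsc)
    return 0.05 * dPg ** 2 + 2.0 * abs(dXtcsc) + 100.0 * flow_error ** 2

def evolve(pop_size=40, generations=200, bounds=((0, 100), (-30, 30))):
    pop = [[random.uniform(*b) for b in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(p1, p2)]           # crossover
            child = [min(max(g + random.gauss(0, 1.0), b[0]), b[1])   # mutation
                     for g, b in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print(f"dPg = {best[0]:.1f} MW, dXtcsc = {best[1]:.1f}, cost = {cost(best):.1f}")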

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 244