Search results for: optimized closed polygonal segment method
Paper Count: 20762

16952 An Improved Modular Multilevel Converter Voltage Balancing Approach for Grid Connected PV System

Authors: Safia Bashir, Zulfiqar Memon

Abstract:

During the last decade, renewable energy sources, in particular solar photovoltaics (PV), have gained increased attention, and various PV converter topologies have emerged. Among these topologies, the modular multilevel converter (MMC) is considered one of the most promising for grid-connected PV systems due to its modularity and transformerless features. For the safe operation of an MMC, the balancing of the submodule (SM) capacitor voltages plays a critical role. This paper proposes a balancing approach based on space vector PWM (SVPWM). Unlike existing techniques, this method generates the switching vectors for the MMC by using only one SVPWM for the upper arm; the lower-arm switching vectors are obtained as the complement of the upper-arm switching vectors. The use of a single SVPWM not only simplifies the calculation but also helps reduce the circulating current in the MMC. The proposed method is verified through simulation in Matlab/Simulink and compared with other available modulation methods. The results validate the ability of the suggested method to balance the SM capacitor voltages and to reduce the circulating current, which in turn reduces the power loss of the PV system.
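
A minimal sketch of the complementary-arm idea described above, assuming binary submodule insertion states (1 = inserted, 0 = bypassed); the vector lengths and example states are illustrative, not taken from the paper:

```python
import numpy as np

def lower_arm_from_upper(upper_vectors: np.ndarray) -> np.ndarray:
    """Derive lower-arm switching vectors as the complement of the
    upper-arm vectors, so the number of inserted submodules per
    phase leg stays constant (N per leg)."""
    return 1 - upper_vectors

# Example: 4 submodules per arm, one switching vector per PWM period.
upper = np.array([[1, 0, 1, 1],
                  [0, 1, 1, 0]])
lower = lower_arm_from_upper(upper)
print(lower)                        # [[0 1 0 0], [1 0 0 1]]
print((upper + lower).sum(axis=1))  # always 4 inserted SMs per leg
```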

Keywords: capacitor voltage balancing, circulating current, modular multilevel converter, PV system

Procedia PDF Downloads 152
16951 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DOA) of plane waves impinging on an array of sensors. In recent years, compressive sensing (CS)-based DoA estimation methods have been proposed, and it has been shown that CS-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm (GA), a solution strategy inspired by natural selection, has been applied to sparse representation problems in recent years and provides significant performance improvements. With this in consideration, this paper proposes a method that combines the GA and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and the reconstruction loss minimization parts of the CS algorithm. With the help of the GA, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
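
A minimal sketch of the two split objectives and a Pareto (non-dominated) filter as described above; the sensing matrix A, measurements y, and candidate population are hypothetical placeholders, and the full GA loop is omitted:

```python
import numpy as np

def objectives(x, A, y):
    """Two competing CS objectives: sparsity (l1 norm) and
    reconstruction loss (l2 residual)."""
    return np.array([np.sum(np.abs(x)),
                     np.linalg.norm(y - A @ x)])

def pareto_front(scores):
    """Keep candidates not dominated in both objectives."""
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t <= s) and np.any(t < s)
                        for j, t in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 64))        # hypothetical sensing matrix
y = rng.standard_normal(16)              # hypothetical measurements
pop = [rng.standard_normal(64) for _ in range(50)]  # GA population stand-in
scores = [objectives(x, A, y) for x in pop]
print(pareto_front(scores))              # indices of non-dominated solutions
```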

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 139
16950 Studies on Lucrative Design of a Waste Heat Recovery System for Air Conditioners

Authors: Ashwin Bala, K. Panthalaraja Kumaran, S. Prithviraj, R. Pradeep, J. Udhayakumar, S. Ajith

Abstract:

In this paper, studies have been carried out on an in-house design of a waste heat recovery system for effectively utilizing domestic air conditioner heat energy to produce hot water. Theoretical studies have been carried out to optimize the flow rate for maximum output with a minimum heater size. The critical diameter, wall thickness, and total length of the water pipeline have been estimated from a conventional heat transfer model. Several pipeline shapes, viz. spiral, coil, and zigzag, wound through the radiator have been attempted, and the shape has been optimized accordingly using heat transfer analyses. The initial condition is declared based on the water flow rate and temperature. Through the parametric analytical studies, we have conjectured that the water flow rate, the temperature difference between the incoming water and the radiator skin temperature, the pipe material, the radiator material, and the geometry of the water pipe, viz. length, diameter, and wall thickness, have a bearing on the lucrative design of a waste heat recovery system for air conditioners. Results generated through the numerical studies have been validated using an in-house waste heat recovery system for air conditioners.
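
The pipe-sizing step described above can be grounded in the standard heat-exchanger rate equation; a generic worked form (a textbook relation, not the paper's specific model) is:

```latex
\dot{Q} = U A \,\Delta T_{lm},\qquad
\Delta T_{lm} = \frac{\Delta T_1 - \Delta T_2}{\ln\left(\Delta T_1/\Delta T_2\right)},\qquad
A = \pi d L
```

so for a target heat duty $\dot{Q}$, an overall coefficient $U$, and the log-mean temperature difference $\Delta T_{lm}$ between the radiator skin and the water, the required pipe length follows as $L = \dot{Q}/(U\,\pi d\,\Delta T_{lm})$.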

Keywords: air conditioner design, energy conversion system, radiator design for energy recovery systems, waste heat recovery system

Procedia PDF Downloads 352
16949 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) the lack of a definition of correctly edited, formatted documents; consequently, end-users do not know whether their methods and results are correct, and their lack of knowledge prevents them from even recognizing their own ignorance; and (2) the end-users’ problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which are proven less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, text-based documents. We have provided the definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proven much more effective than the previously applied surface-approach methods. The advantages of the method are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources

Procedia PDF Downloads 377
16948 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective

Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh

Abstract:

The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper aims to address two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this aim, an agricultural supply chain consisting of two farmers, a conventional farmer and an organic farmer who offers an organic version of the same product, is considered. Both farmers distribute their products through a single retailer, where there exists competition between the organic and the conventional product. The retailer, as the market leader, sets the wholesale price, and afterward, the farmers set their production quantities. This paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions regarding their health and environmental benefits. Then, profit functions are modeled with consideration of some characteristics of organic farming, including the crop yield gap and the organic cost factor. Our research also considers both economies and diseconomies of scale in farming production, as well as the effects of the organic subsidy paid by the government to support organic farming. This paper explores the investigated supply chain in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually. In this case, the interaction between the farmers is modeled as Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure. In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions. Moreover, analytical sensitivity analyses are conducted to explore the effects of main parameters, such as the crop yield gap, the organic cost factor, the organic subsidy, and the percent price premium of the organic product, on the farmers’ and retailer’s optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, our paper demonstrates that the coordinated model increases the production and decreases the price of organic produce, which in turn motivates the consumption of organic products in the market. Furthermore, the proposed coordination model helps the organic farmer better handle the challenges of organic farming, including the additional cost and the crop yield gap. Last but not least, our results highlight the active role of the organic subsidy paid by the government as a means of promoting sustainable organic product supply chains. Our paper shows that although the amount of the organic subsidy plays a significant role in the production and sales price of organic products, the allocation method of the subsidy between the organic farmer and the retailer is not of that importance.
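
A minimal numerical sketch of the decentralized Stackelberg structure described above, under a hypothetical linear demand and illustrative parameter values (not the paper's closed-form model): the retailer leads on the wholesale margin, and each farmer best-responds with a production quantity.

```python
import numpy as np

# Hypothetical linear inverse demand for each product:
# p_i = a_i - b*q_i - g*q_j; organic carries a brand-equity premium a_o > a_c.
a_c, a_o, b, g = 10.0, 14.0, 1.0, 0.5
c_c, c_o = 2.0, 4.0            # production costs (organic cost factor > conventional)

def farmer_profit(q_i, q_j, w, a_i, c_i):
    price = a_i - b * q_i - g * q_j
    return (price - c_i - w) * q_i     # w: wholesale margin kept by the retailer

def best_response(q_j, w, a_i, c_i):
    qs = np.linspace(0, 10, 1001)      # grid argmax; linear case has a closed form
    return qs[np.argmax(farmer_profit(qs, q_j, w, a_i, c_i))]

def equilibrium(w, iters=50):
    q_c = q_o = 1.0
    for _ in range(iters):             # iterate best responses to a fixed point
        q_c = best_response(q_o, w, a_c, c_c)
        q_o = best_response(q_c, w, a_o, c_o)
    return q_c, q_o

# The retailer (leader) picks the margin anticipating the farmers' equilibrium.
ws = np.linspace(0.1, 6.0, 60)
retailer_profit = [w * sum(equilibrium(w)) for w in ws]
w_star = ws[int(np.argmax(retailer_profit))]
print(w_star, equilibrium(w_star))
```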

Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain

Procedia PDF Downloads 106
16947 Temperature Investigations in Two Types of Crimped Connections Using Experimental Determinations

Authors: C. F. Ocoleanu, A. I. Dolan, G. Cividjian, S. Teodorescu

Abstract:

In this paper, we present temperature investigations of two types of superposed crimped connections using experimental determinations. All the samples use eight copper wires of 7.1 x 3 mm² crimped by two methods: the first method uses one crimp indent, and the second is a proposed method with two crimp indents. The ferrule is a parallel one. We study the influence of the number and position of the crimp indents. The samples are heated with A.C. current at different current values until a steady-state heating regime is reached. After obtaining the temperature values, we compare them and present the conclusions.

Keywords: crimped connections, experimental determinations, temperature, heat transfer

Procedia PDF Downloads 262
16946 Simulation-Based Evaluation of Indoor Air Quality and Comfort Control in Non-Residential Buildings

Authors: Torsten Schwan, Rene Unger

Abstract:

Simulation of the thermal and electrical performance of buildings is increasingly becoming part of an integrative planning process. Increasing requirements on energy efficiency, the integration of volatile renewable energy, and smart control and storage management often cause tremendous challenges for building engineers and architects. This mainly affects commercial or non-residential buildings, whose energy consumption characteristics differ significantly from residential ones. This work focuses on the many-objective optimization problem of indoor air quality and comfort, especially in non-residential buildings. Based on a brief description of the intermediate dependencies between different requirements on indoor air treatment, it extends existing Modelica-based building physics models with additional system states to adequately represent indoor air conditions. Interfaces to corresponding HVAC (heating, ventilation, and air conditioning) system and control models enable closed-loop analyses of occupants' requirements as well as of energy efficiency and profitability aspects. A complex application scenario of a nearly-zero-energy school building shows the advantages of the presented evaluation process for engineers and architects. This way, a clear identification of the air quality requirements in individual rooms, together with a realistic model-based description of occupants' behavior, helps to optimize the HVAC system already in early design stages. Building planning processes can be highly improved and accelerated by increasing the integration of advanced simulation methods, which provide suitable answers to engineers' and architects' questions regarding the increasingly complex variety of suitable energy supply solutions.
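
One of the "additional system states" mentioned above can be illustrated with a standard single-zone CO2 mass balance (a generic textbook model, not the paper's Modelica code); a minimal sketch with illustrative classroom parameters:

```python
# Single-zone CO2 mass balance: V*dC/dt = Q*(C_out - C) + G
# V: room volume [m^3], Q: ventilation flow [m^3/s],
# C: indoor CO2 [ppm], G: occupant source term [ppm*m^3/s]
V, Q, C_out = 250.0, 0.12, 420.0
G = 30 * 0.005 * 1e6 / 3600   # 30 pupils, ~5 l/h CO2 each (assumed values)

C, dt = 420.0, 60.0           # start at outdoor level, 1-minute steps
for step in range(120):       # two hours of occupancy, explicit Euler
    dC = (Q * (C_out - C) + G) / V
    C += dC * dt
print(round(C), "ppm after 2 h")  # indicates whether ventilation is adequate
```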

Keywords: indoor air quality, dynamic simulation, energy efficient control, non-residential buildings

Procedia PDF Downloads 224
16945 Cost Valuation Method for Development-Concurrent, Phase-Appropriate Requirement Valuation Using the Example of Load Carrier Development in Lithium-Ion Battery Production

Authors: Achim Kampker, Christoph Deutskens, Heiner Hans Heimes, Mathias Ordung, Felix Optehostert

Abstract:

In the past years, electric mobility has become part of the public discussion, and the trend toward fully electrified vehicles instead of vehicles fueled with fossil energy has notably gained momentum. Today, nearly every big car manufacturer produces and sells fully electrified vehicles, but electrified vehicles are still not as competitive as conventionally powered vehicles. As the traction battery constitutes the largest cost driver, lowering its price is a crucial objective. In addition to improvements in the product and in production processes, a non-negligible but widely underestimated cost driver of production can be found in logistics, since neither the production technology nor the logistics systems are continuous yet. This paper presents an approach to evaluate cost factors for different designs of load carrier systems. Due to numerous interdependencies, the combination of cost factors for a particular scenario is not transparent. This affects actions for cost reduction negatively, even though cost reduction is one of the major goals of simultaneous engineering processes. Therefore, a concurrent and phase-appropriate cost valuation method is necessary to provide cost transparency. In this paper, the four phases of this cost valuation method are defined and explained, based upon a new approach that integrates the logistics development process into the integrated product and process development.

Keywords: research and development, technology and innovation, lithium-ion-battery production, load carrier development process, cost valuation method

Procedia PDF Downloads 583
16944 Requirements Management in Agile

Authors: Ravneet Kaur

Abstract:

The concept of Agile Requirements Engineering and Management is not new. However, the struggle to figure out how a traditional Requirements Management Process fits within an Agile framework remains complex. This paper describes a process that can merge an organization’s traditional Requirements Management Process neatly into the Agile Software Development Process. This process provides traceability of the Product Backlog to the external documents on one hand and to User Stories on the other, and it gives sufficient evidence, in the form of various statistics and reports, that the system will deliver the right functionality with good quality. In a nutshell, by overlaying a process on top of Agile, without disturbing the Agility, we are able to get synergistic benefits in terms of productivity, profitability, reporting, and end-to-end visibility for all stakeholders. The framework can be used for just-in-time requirements definition or to build a repository of requirements for future use. The goal is to make sure that the business (specifically, the product owner) can clearly articulate what needs to be built and define what is of high quality. To accomplish this, the requirements cycle follows a Scrum-like process that mirrors the development cycle but stays two to three steps ahead. The goal is to create a process by which requirements can be thoroughly vetted, organized, and communicated in a manner that is iterative, timely, and quality-focused. Agile is quickly becoming the most popular way of developing software because it fosters continuous improvement, time-boxed development cycles, and more quickly delivering value to the end users. That value will be driven to a large extent by the quality and clarity of the requirements that feed the software development process. An agile, lean, and timely approach to requirements as the starting point will help to ensure that the process is optimized.

Keywords: requirements management, Agile

Procedia PDF Downloads 363
16943 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language, and controls complex systems. In fact, system analysis is structured sequentially by steps: the observation of the system by various observers in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests performed in order to obtain consensus. Thus, the system approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in order to contribute to the system analysis of an Unmanned Aerial Vehicle is proposed in this paper, and it is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls, and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. The SADT model is composed exclusively of actigrams. It starts with the main function, ‘To analyse the UAV components’. Then, this function is broken into sub-functions, and this process is developed until the last decomposition level has been reached (levels A1, A2, A3, and A4). Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the good one or, at least, the best one. In fact, this kind of model allows users sufficient freedom in its construction, so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components based on the SADT method (Structured Analysis and Design Technique). This functional analysis proved the usefulness of the SADT method and its ability to describe complex dynamic systems.
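
A minimal sketch of the hierarchical decomposition idea behind the actigram model described above; the function names are illustrative groupings abridged from the abstract, not the paper's actual A1-A4 labels:

```python
# SADT-style decomposition: each function breaks into sub-functions
# until the last decomposition level is reached.
model = {
    "A0 To analyse the UAV components": {
        "A1 Analyse body, power supply and platform": {},
        "A2 Analyse computing, sensors and actuators": {},
        "A3 Analyse software and loop principles": {},
        "A4 Analyse flight controls and communications": {},
    }
}

def print_actigrams(node, depth=0):
    """Walk the decomposition tree and print one actigram per line."""
    for name, children in node.items():
        print("  " * depth + name)
        print_actigrams(children, depth + 1)

print_actigrams(model)
```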

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 193
16942 A Method To Assess Collaboration Using Perception of Risk from the Architectural Engineering Construction Industry

Authors: Sujesh F. Sujan, Steve W. Jones, Arto Kiviniemi

Abstract:

The use of Building Information Modelling (BIM) in the Architectural-Engineering-Construction (AEC) industry is a form of systemic innovation. Unlike incremental innovation (such as the technological development of CAD from hand-based drawings to 2D electronically printed drawings), any form of systemic innovation in project-based inter-organisational networks requires complete collaboration and results in numerous benefits if adopted and utilised properly. Proper use of BIM involves people collaborating with the use of interoperable BIM-compliant tools. The AEC industry globally has been known for its adversarial and fragmented nature, where firms take advantage of one another to increase their own profitability. Due to the industry’s nature, getting people to collaborate by unifying their goals is critical to successful BIM adoption. However, this form of innovation is often forced artificially into old ways of working which do not suit collaboration. This may be one of the reasons for its low global use, even though the technology was developed more than 20 years ago. Therefore, there is a need to develop a metric/method to support industry players and allow them to gain confidence in their investment in BIM software and workflow methods. This paper starts by defining systemic risk as a risk that affects all the project participants at a given stage of a project, and defines categories of systemic risks. The need to generalise is to allow the method to be applicable to any industry: the categories remain the same, but the example of the risk depends on the industry in which the study is done. The proposed method uses individual perception of an example of systemic risk as a key parameter. The significance of this study lies in relating the variance of individual perceptions of systemic risk to how well the team is collaborating. The method bases its notions on the claim that a more unified range of individual perceptions implies a higher probability that the team is collaborating well. Since contracts and procurement determine how a project team operates, the method could also break the methodological barrier of the highly subjective findings that case studies produce, which has limited the possibility of generalising between global industries. Since human nature applies in all industries, the authors’ intuition is that perception can be a valuable parameter for studying collaboration, which is essential especially in projects that utilise systemic innovation such as BIM.
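
A minimal sketch of the perception-variance idea described above; the survey scale, the index formula, and the example data are illustrative assumptions, not the paper's calibrated metric:

```python
import statistics

# Perception scores for one systemic-risk example, rated 1-10 by each
# project participant (hypothetical survey data).
team_a = [6, 7, 6, 7, 6, 7]    # unified perceptions
team_b = [2, 9, 5, 10, 3, 7]   # scattered perceptions

def collaboration_index(scores):
    """Map perception variance to (0, 1]: lower variance -> value
    closer to 1, read as a higher probability of good collaboration."""
    return 1.0 / (1.0 + statistics.variance(scores))

print(round(collaboration_index(team_a), 3))  # high
print(round(collaboration_index(team_b), 3))  # low
```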

Keywords: building information modelling, perception of risk, systemic innovation, team collaboration

Procedia PDF Downloads 178
16941 Significant Reduction in Specific CO₂ Emission through Process Optimization at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, M. K. G. Choudhury, Santanu Mallick, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

One of the key corporate goals of the Tata Steel company is to demonstrate environmental leadership, and decreasing specific CO₂ emission is one of the key steps toward achieving this goal. At any blast furnace, specific CO₂ emission is directly proportional to the fuel intake. To reduce the fuel intake at G Blast Furnace, an initial benchmarking exercise was carried out against international and domestic blast furnaces to determine the potential for improvement. The gap identified during the exercise revealed that the benchmark blast furnaces operated with superior raw material quality compared to G Blast Furnace. However, since the raw materials for G Blast Furnace are sourced from captive mines, improvement in raw material quality was out of scope. Therefore, trials were carried out with different operating regimes to identify the key process parameters which, on optimization, could significantly reduce the fuel intake of G Blast Furnace. The key process parameters identified from the trials were the stoichiometric oxygen ratio, the melting capacity ratio, and the burden distribution inside the furnace. These process parameters were optimized to bridge the gap in fuel intake at G Blast Furnace, thereby reducing specific CO₂ emission to benchmark levels. This paradigm shift lowered the fuel intake by 70 kg per ton of liquid iron produced, thereby reducing the specific CO₂ emission by 15 percent.

Keywords: benchmark, blast furnace, CO₂ emission, fuel rate

Procedia PDF Downloads 269
16940 Some Trends in Analysis of Two-Way Solid Slabs

Authors: Reem I. Al-Ya' Goub, Nasim Shatarat

Abstract:

This paper presents the results of an analytical and comparative study of software programs' outputs in the analysis of two-way solid slabs: flat plate, flat slab with beams, and flat slab with drop panels problems that have already been analyzed using the Classical Equivalent Frame Method (CEFM) by several reinforced concrete book authors. The primary objective of this research is to determine the moment results using various software programs; then, a summary of the results and the difference percentages is obtained to show how the analysis procedure affects the outputs of calculations, which vary from one software program to another, when compared with the results of the CEFM. Moment values were obtained using either the Equivalent Frame Method (EFM) or the Finite Element Method (FEM), which are used by many software programs. The results of the analyses demonstrate that software programs vary markedly in terms of the information they provide to the structural designer regarding the values of the model insertion, the stiffness, the effective moment of inertia used, and especially the moment values.

Keywords: two-way solid slabs, flat plate, flat slab with beams, flat slab with drop panels, analysis, modeling, EFM, CEFM, FEM

Procedia PDF Downloads 405
16939 Steel Industry Waste as Recyclable Raw Material for the Development of Ferrous-Aluminum Alloys

Authors: Arnold S. Freitas Neto, Rodrigo E. Coelho, Erick S. Mendonça

Abstract:

The study aims to assess whether high-purity iron powder in iron-aluminum alloys can be replaced by SAE 1020 steel chips with an atomic proportion of 50% for each element. Chips of SAE 1020 are rejected in industrial processes; thus, the use of SAE 1020 as a replacement for iron increases the sustainability of ferrous alloys by recycling industrial waste. The alloys were processed by high-energy milling, whose main advantage is the minimal loss of raw material. The raw materials for three of the six samples were high-purity iron powder and recyclable aluminum cans; for the other three samples, the high-purity iron powder was replaced with chips of SAE 1020 steel. The process started with the separate milling of aluminum and SAE 1020 steel chips to obtain the powders. Subsequently, the raw materials were mixed in the pre-defined proportions, milled together for five hours, and then underwent closed-die hot compaction at a temperature of 500 °C. Thereafter, the compacted samples underwent the heat treatments known as sintering and solubilization: all samples were sintered for one hour, and four samples were solubilized for either 4 or 10 hours under well-controlled atmosphere conditions. Lastly, the composition and hardness of the samples were analyzed by optical microscopy, scanning electron microscopy, and hardness testing. The results showed similar chemical compositions and interesting hardness levels with low standard deviations. This verified that SAE 1020 steel chips can be a low-cost alternative to high-purity iron powder and could possibly replace high-purity iron in industrial applications.

Keywords: Fe-Al alloys, high energy milling, iron-aluminum alloys, metallography characterization, powder metallurgy, recycling ferrous alloy, SAE 1020 steel recycling

Procedia PDF Downloads 352
16938 Determination of Thermal Conductivity of Plaster Tow Material and Kapok Plaster by Numerical Method: Influence of the Heat Exchange Coefficient in Transitional Regime

Authors: Traore Papa Touty

Abstract:

This article presents a numerical method for determining the thermal conductivity of two local materials, kapok plaster and tow plaster. It consists of heating the front face of a wall made from these materials while insulating its rear face. We simultaneously study the evolution of the heat flux density as a function of time on the rear face and the evolution of the temperature gradient as a function of time between the heated and the insulated face. The thermal conductivity is obtained at steady state, when the heat flux density and the temperature gradient no longer depend on time. The results showed that the theoretical value of the thermal conductivity is obtained when the material has reached its equilibrium state, and that the values obtained for different values of the convective exchange coefficient are approximately equal to the experimental value.
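
At the steady state invoked above, one-dimensional Fourier conduction across the wall gives the standard identification (a textbook relation, not specific to this paper's setup):

```latex
q = -k\,\frac{\partial T}{\partial x}
\;\;\Longrightarrow\;\;
k = \frac{q\,e}{T_{\text{heated}} - T_{\text{insulated}}}
```

where $q$ is the measured flux density, $e$ the wall thickness, and the denominator the measured temperature difference between the two faces.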

Keywords: thermal conductivity, numerical method, heat exchange coefficient, transitional regime

Procedia PDF Downloads 210
16937 Physical Modeling of Woodwind Ancient Greek Musical Instruments: The Case of Plagiaulos

Authors: Dimitra Marini, Konstantinos Bakogiannis, Spyros Polychronopoulos, Georgios Kouroupetroglou

Abstract:

Archaeomusicology cannot entirely depend on the study of excavated ancient musical instruments, as their condition is usually not ideal (i.e., missing or eroded parts) and because of the concern of damaging the originals during experiments. To overcome the above obstacles, researchers build replicas. This technique is still the most popular one, although it is rather expensive and time-consuming. Throughout the last decades, the development of physical modeling techniques has provided tools that enable the study of musical instruments through their digitally simulated models. This is not only a more cost- and time-efficient technique, but it also provides additional flexibility, as the user can easily modify parameters such as geometrical features and materials. This paper thoroughly describes the steps to create a physical model of a woodwind ancient Greek instrument, Plagiaulos. This instrument could be considered the ancestor of the modern flute due to the common geometry and air-jet excitation mechanism. Plagiaulos comprises a single resonator with an open end and a number of tone holes; the combination of closed and open tone holes produces the pitch variations. In this work, the effects of all the instrument’s components are described by means of physics and then simulated based on digital waveguides. The synthesized sound of the proposed model complies with the theory, highlighting its validity. Further, the synthesized sound of the model simulating the Plagiaulos of Koile (2nd century BCE) was compared with its replica built in our laboratory following the scientific methodologies of archaeomusicology. The aforementioned results verify that robust dynamic digital tools can be introduced into the field of computational, experimental archaeomusicology.
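
A minimal digital waveguide sketch of an open-ended cylindrical resonator (a generic flute-like bore with illustrative dimensions, not the paper's full Plagiaulos model with tone holes):

```python
import numpy as np

fs = 44100                       # sample rate [Hz]
c, bore_len = 343.0, 0.35        # speed of sound [m/s], bore length [m]
delay = int(fs * bore_len / c)   # one-way propagation delay in samples

# Two delay lines model the right- and left-travelling pressure waves.
right = np.zeros(delay)
left = np.zeros(delay)
right[0] = 1.0                   # impulsive air-jet excitation at the mouth

out = []
for n in range(fs // 10):        # 100 ms of sound
    end = right[-1]
    mouth = left[-1]
    right = np.roll(right, 1)
    left = np.roll(left, 1)
    right[0] = -0.95 * mouth     # reflection at the (nearly) open mouth
    left[0] = -0.95 * end        # open-end reflection inverts the wave
    out.append(end)

out = np.array(out)
spec = np.abs(np.fft.rfft(out))
lo = int(100 * len(out) / fs)    # search 100-700 Hz for the fundamental
hi = int(700 * len(out) / fs)
print((lo + np.argmax(spec[lo:hi])) * fs / len(out))  # ~ c/(2*bore_len) = 490 Hz
```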

Keywords: archaeomusicology, digital waveguides, musical acoustics, physical modeling

Procedia PDF Downloads 108
16936 A Key Parameter in Ocean Thermal Energy Conversion Plant Design and Operation

Authors: Yongjian Gu

Abstract:

Ocean thermal energy is one of the ocean energy sources; it is renewable, sustainable, and green. Ocean thermal energy conversion (OTEC) applies the ocean temperature gradient between the warmer surface seawater and the cooler deep seawater to run a heat engine and produce a useful power output. Unfortunately, the ocean temperature gradient is not big: even in tropical and equatorial regions, the surface water temperature only reaches up to 28 °C, while the deep water temperature can be as low as 4 °C. The thermal efficiency of OTEC plants is therefore low. In order to improve the plant thermal efficiency with the limited ocean temperature gradient, some OTEC plants add more equipment for better heat recovery, such as heat exchangers, pumps, etc. Obviously, this method increases the plant's complexity and cost. More importantly, the additional equipment consumes power too, which may adversely affect the plant net power output and, in turn, the plant thermal efficiency. In this paper, the author first describes various OTEC plants and the practice of adding more equipment to improve plant thermal efficiency. The author then proposes a parameter, the plant back work ratio ϕ, for measuring whether the added equipment is appropriate for the thermal efficiency improvement. Finally, the author presents examples to illustrate the application of the back work ratio ϕ as a key parameter in OTEC plant design and operation.
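
A plausible formalization of the parameter discussed above, assuming the back work ratio is defined, as in standard power-cycle analysis, as the fraction of gross power consumed internally (the paper's exact definition may differ):

```latex
\phi = \frac{\dot{W}_{\text{aux}}}{\dot{W}_{\text{gross}}},\qquad
\eta_{\text{net}} = \frac{\dot{W}_{\text{gross}} - \dot{W}_{\text{aux}}}{\dot{Q}_{\text{in}}}
= (1-\phi)\,\eta_{\text{gross}}
```

so added heat-recovery equipment only pays off when its gain in $\eta_{\text{gross}}$ outweighs the accompanying increase in $\phi$.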

Keywords: ocean thermal energy, ocean thermal energy conversion (OTEC), OTEC plant, plant back work ratio ϕ

Procedia PDF Downloads 188
16935 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a manufactured part produced by a CNC milling machine. It presents a case study where the surface roughness of milled aluminum is required to be reduced in order to eliminate defects and to improve the process capability indices Cp and Cpk of a CNC milling process. The Six Sigma methodology, with its DMAIC (define, measure, analyze, improve, and control) approach, was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that led to the targeted surface roughness specified by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct; the new settings also improved the process capability index. The purpose of this study is to show that the Taguchi-based Six Sigma approach can be efficiently used to phase out defects and improve the process capability index of a CNC milling process.
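
A minimal sketch of the Taguchi analysis step implied above: a standard L9 array and the smaller-the-better signal-to-noise ratio commonly used for responses like surface roughness (the factor levels and measured values are hypothetical, not the paper's data):

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])

# Hypothetical roughness Ra measurements, two repeats per run [um].
ra = np.array([[0.82,0.85],[0.74,0.70],[0.91,0.95],
               [0.66,0.69],[0.88,0.84],[0.79,0.81],
               [0.72,0.75],[0.93,0.90],[0.68,0.64]])

# Smaller-the-better S/N ratio: higher is better for minimizing Ra.
sn = -10 * np.log10((ra ** 2).mean(axis=1))

# Main effect of each factor = mean S/N at each of its levels.
for f in range(4):
    effects = [sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"factor {f+1}: best level {int(np.argmax(effects)) + 1}")
```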

Keywords: CNC machining, six sigma, surface roughness, Taguchi methodology

Procedia PDF Downloads 238
16934 An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform

Authors: Mohammed S. Al-Rawi, J. Bastos, J. Rodriguez

Abstract:

Object detection and object recognition are essential components of every computer vision system. Despite the high computational complexity and other problems related to numerical stability and accuracy, Zernike moments (ZMs) of 2D images have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via the Fast Fourier Transform (FFT). Notably, this is the first algorithm that can accurately generate ZMs up to extremely high orders, e.g., orders of 1000 or even higher. Furthermore, the proposed method is simpler and faster than the other methods due to the availability of FFT software and hardware. The accuracy and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalization of ZMs with the Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Strikingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, to the q-recursive method.
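
For reference, the quantity being computed above is the standard 2D Zernike moment of order $n$ and repetition $m$ over the unit disk:

```latex
Z_{nm} = \frac{n+1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{1}
f(\rho,\theta)\,R_{nm}(\rho)\,e^{-jm\theta}\,\rho\,d\rho\,d\theta
```

with $R_{nm}$ the radial Zernike polynomial; the orthogonality of this basis is what enables the reconstruction check mentioned in the abstract.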

Keywords: Chebyshev polynomial, Fourier transform, fast algorithms, image recognition, pseudo-Zernike moments, Zernike moments

Procedia PDF Downloads 257
16933 Comparison of On-Site Stormwater Detention Real Performance and Theoretical Simulations

Authors: Pedro P. Drumond, Priscilla M. Moura, Marcia M. L. P. Coelho

Abstract:

The purpose of an On-site Stormwater Detention (OSD) system is to detain the additional stormwater runoff caused by impervious areas, in order to keep the peak flow at the pre-urbanization level. In recent decades, these systems have been built in many cities around the world. However, their real efficiency remains largely unknown due to the lack of research, especially with regard to monitoring their real performance. Thus, this study aims to compare water level monitoring data from an OSD built in Belo Horizonte, Brazil, with the results of the theoretical methods usually adopted in OSD design. Two theoretical simulations were made: one using the Rational Method with the Modified Puls method, and another using the Soil Conservation Service (SCS) method with the Modified Puls method. The monitoring data were obtained with a water level sensor installed inside the reservoir and connected to a data logger. The comparison of OSD performance was made for 48 rainfall events recorded from April 2015 to March 2017. The comparison of maximum water levels in the OSD showed that the results of the simulations with the Rational/Puls and SCS/Puls methods were, on average, 33% and 73% lower, respectively, than those monitored. The Rational/Puls results were significantly higher than the SCS/Puls results only in the most frequent events; in the events with average recurrence intervals of 5, 10, and 200 years, the maximum water heights were similar in both simulations. The results also showed that the duration of the rainfall events was close to the duration of the monitored hydrographs, and that the rising and recession times of the hydrographs calculated with the Rational Method represented the monitored hydrographs better than the SCS Method did. The comparison indicates that the real discharge coefficient could be higher than the 0.61 adopted in the Puls simulations. New research evaluating the real performance of OSDs should be developed: in order to verify the peak flow damping efficiency and the value of the discharge coefficient, it is necessary to monitor the inflow and outflow of an OSD, in addition to the water level inside it.
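
The routing step both simulations share is the standard Modified Puls (storage indication) equation over each time step $\Delta t$:

```latex
\frac{S_{2}-S_{1}}{\Delta t}=\frac{I_{1}+I_{2}}{2}-\frac{O_{1}+O_{2}}{2}
```

where $I$, $O$, and $S$ are inflow, outflow, and storage; with an orifice outlet the outflow follows $O = C_d A \sqrt{2 g h}$, which is where the discharge coefficient $C_d = 0.61$ cited above enters.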

Keywords: best management practices, on-site stormwater detention, source control, urban drainage

Procedia PDF Downloads 182
16932 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties which use large amounts of digital data that are very difficult, or even impossible, for human beings, and especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are the following.

Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ascertainment of the ejection fraction, and determination of longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinction between athlete's heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; prediction of all-cause mortality.

CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; prediction of major cardiovascular events in patients with suspected CAD.

MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; distinction between patients with myocardial infarction and control subjects; potential cost reduction, since it could preclude the need for gadolinium-enhanced CMR; prediction of 4-year survival in patients with pulmonary hypertension.

Nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; prediction of cardiac death; ML was comparable to or better than two experienced readers in predicting the need for revascularization.

AI emerges as a helpful tool in cardiac imaging, and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 71
16931 Energy Efficiency Assessment of Energy Internet Based on Data-Driven Fuzzy Integrated Cloud Evaluation Algorithm

Authors: Chuanbo Xu, Xinying Li, Gejirifu De, Yunna Wu

Abstract:

Energy Internet (EI) is a new form that deeply integrates the Internet with the entire energy process, from production to consumption. The assessment of energy efficiency performance is of vital importance for the long-term sustainable development of an EI project. Although the newly proposed fuzzy integrated cloud evaluation algorithm considers the randomness of uncertainty, it relies too much on the experience and knowledge of experts. Fortunately, the enrichment of EI data has enabled the utilization of data-driven methods. Therefore, the main purpose of this work is to assess the energy efficiency of a park-level EI by using a combination of a data-driven method and the fuzzy integrated cloud evaluation algorithm. Firstly, the indicators for energy efficiency are identified through a literature review. Secondly, an artificial neural network (ANN)-based data-driven method is employed to cluster the values of the indicators. Thirdly, the energy efficiency of the EI project is calculated through the fuzzy integrated cloud evaluation algorithm. Finally, the applicability of the proposed method is demonstrated by a case study.
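
A minimal sketch of the cloud-model element named above: the forward normal cloud generator that turns the digital characteristics (Ex, En, He) into cloud drops with membership degrees (the parameter values are illustrative, not the paper's calibrated grades):

```python
import numpy as np

def normal_cloud(Ex, En, He, n_drops=1000, seed=0):
    """Forward normal cloud generator: expectation Ex, entropy En,
    hyper-entropy He -> (drop, membership) pairs."""
    rng = np.random.default_rng(seed)
    En_i = rng.normal(En, He, n_drops)             # randomized entropy per drop
    x = rng.normal(Ex, np.abs(En_i))               # cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))  # membership degree
    return x, mu

# Illustrative "good" grade for an energy-efficiency indicator.
drops, membership = normal_cloud(Ex=0.8, En=0.05, He=0.01)
print(drops[:3], membership[:3])
```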

Keywords: energy efficiency, energy internet, data-driven, fuzzy integrated evaluation, cloud model

Procedia PDF Downloads 194
16930 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution

Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud

Abstract:

In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile is validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software package developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann method which uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with a generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, for simulating compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis consists in varying the Mach number starting from M=0.5 and comparing the axial force coefficient, the normal force slope coefficient, and the pitch moment slope coefficient of the finned projectile obtained by XFlow with the experimental data. The slope coefficients are obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find out the limiting Mach number value starting from which the effects of high fluid compressibility (related to the transonic flow regime) cause the XFlow simulations to differ from the experimental results. This allows identifying the critical Mach number which limits the validity of the isothermal formulation of XFlow, beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.
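
The discretization referred to above is, in its common single-relaxation-time (BGK) form (a standard textbook statement, not necessarily XFlow's proprietary scheme):

```latex
f_i(\mathbf{x}+\mathbf{c}_i\,\Delta t,\; t+\Delta t) - f_i(\mathbf{x},t)
= -\frac{\Delta t}{\tau}\left[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\right]
```

where $f_i$ are the distribution functions along the lattice velocities $\mathbf{c}_i$ and $\tau$ is the relaxation time; the low-order polynomial equilibrium $f_i^{\mathrm{eq}}$ is what restricts the standard lattice to weakly compressible (low Mach) flow, as noted in the abstract.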

Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-boltzmann method, LBM, lift, mach, pitch

Procedia PDF Downloads 412
16929 Capability Prediction of Machining Processes Based on Uncertainty Analysis

Authors: Hamed Afrasiab, Saeed Khodaygan

Abstract:

Prediction of machining process capability in the design stage plays a key role in achieving precision design and manufacturing of mechanical products. Inaccuracies in the machining process lead to errors in the position and orientation of machined features on the part, and strongly affect the process capability in the final quality of the product. In this paper, an efficient systematic approach is given to investigate machining errors, predict the manufacturing errors of parts, and predict the capability of the corresponding machining processes. A mathematical formulation of fixture locator modeling is presented to establish the relationship between the part errors and the related sources. Based on this method, the final machining errors of the part can be accurately estimated by relating them to the combined dimensional and geometric tolerances of the workpiece-fixture system. The method is developed for uncertainty analysis based on both worst-case and statistical approaches. The application of the presented method is illustrated through an example, and the computational results are compared with Monte Carlo simulation results.
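
A minimal sketch of the statistical branch mentioned above: a Monte Carlo pass that propagates locator tolerances to a feature position error (the stack-up function and the tolerance values are hypothetical placeholders for the paper's fixture model):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical locator height errors, normal within +/-0.02 mm (3 sigma).
e1, e2 = (rng.normal(0.0, 0.02 / 3, N) for _ in range(2))
span = 200.0                      # locator spacing [mm]
feature_x = 80.0                  # feature position along the span [mm]

# Linearized stack-up: locator errors tilt the datum plane, shifting
# the machined feature's height at feature_x.
feature_err = e1 + (e2 - e1) * feature_x / span

usl, lsl = 0.03, -0.03            # feature tolerance [mm]
sigma = feature_err.std()
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - feature_err.mean(), feature_err.mean() - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```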

Keywords: process capability, machining error, dimensional and geometrical tolerances, uncertainty analysis

Procedia PDF Downloads 302
16928 Physico-Chemical Characteristics of Terminalia arjuna Encapsulated Dairy Drink

Authors: Sawale Pravin Digambar, G. R. Patil, Shaik Abdul Hussain

Abstract:

Terminalia arjuna (TA), an important medicinal plant in the Indian System of Medicine, is specifically recognized for its recuperative effect on heart ailments. Alcoholic extract of TA (both free and encapsulated) was incorporated into milk to obtain functional dairy beverages. The respective beverages were appropriately flavored and optimized using response surface methodology to improve the sensory appeal, and were evaluated for their compositional, anti-oxidative, and various other physico-chemical aspects. Addition of herb extract (0.3%) to the flavoured dairy drink (Drink 1) resulted in a significantly (p<0.05) lowered HMF content and increased antioxidant activity and total phenol content as compared with the control (Control 1). Subsequently, a significant (p<0.05) increase in acidity and sedimentation was also observed. The encapsulated herb (1.8%) drink (Drink 2) had a significantly (p<0.05) higher HMF value and lower antioxidant activity and phenol content compared to the herb-added vanilla chocolate dairy drink (Drink 1). It can be concluded that the addition of encapsulated and non-encapsulated TA extract to the chocolate dairy drink at 0.3% concentration altered the functional properties of the vanilla chocolate dairy drink, which could be related to the interaction of herb components, such as polyphenols, with milk protein or the maltodextrin/gum Arabic matrix.

Keywords: Terminalia arjuna, encapsulate, antioxidant activity, physicochemical study

Procedia PDF Downloads 360
16927 Hot Deformability of Si-Steel Strips Containing Al

Authors: Mohamed Yousef, Magdy Samuel, Maha El-Meligy, Taher El-Bitar

Abstract:

The present work deals with a 2% Si-steel alloy. The alloy contains 0.05% C as well as 0.85% Al and would be used for electrical transformer applications. A heating (expansion) - cooling (contraction) dilation investigation was executed to detect the α, α+γ, and γ transformation temperatures at the inflection points of the dilation curve. On heating, primary α was detected in the temperature range between room temperature and 687 °C. The α+γ domain was detected in the range between 687 °C and 746 °C. The γ phase exists in the closed γ region between 746 °C and 1043 °C. The α-phase domain appears again in the temperature range between 1043 °C and 1105 °C, followed by secondary α at temperatures higher than 1105 °C. A physical simulation of the thermo-mechanical processing of the as-cast alloy was carried out. The simulation took the hot flat rolling pilot plant parameters into consideration and was executed on a thermo-mechanical simulator (Gleeble 3500). The process was designed to include seven consecutive passes: the 1st pass represents the roughing stage, while the remaining six passes represent the finish rolling stage. The whole process was executed in the temperature range from 1100 °C to 900 °C. The amount of strain starts at 23.5% in the roughing pass and decreases continuously to reach 7.5% at the last finishing pass. The flow curve of the alloy can be abstracted from the stress-strain curves representing the simulated passes. It shows hardening of the alloy from one pass to the next up to pass no. 6, as a result of the decreasing deformation temperature and the increasing cumulative strain. After pass no. 6, the deformation process enhances the dynamic recrystallization phenomena, where the Z-parameter would be high.
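
The Z-parameter invoked above is, in standard hot-working practice, the Zener-Hollomon (temperature-compensated strain rate) parameter, assuming the authors use the conventional definition:

```latex
Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right)
```

with strain rate $\dot{\varepsilon}$, deformation activation energy $Q$, gas constant $R$, and absolute temperature $T$.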

Keywords: Si-steel, hot deformability, critical transformation temperature, physical simulation, thermo-mechanical processing, flow curve, dynamic softening

Procedia PDF Downloads 236
16926 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation

Authors: Kiwon Yeom

Abstract:

Change points are abrupt variations in a data sequence. The detection of change points is useful in modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined as a discontinuity in one of the derivatives of the trajectory. This paper presents a reliable method for detecting discontinuities within three-dimensional trajectory data. The problem of determining one or more discontinuities is considered for regular and irregular trajectory data from teleoperation. We examine the geometric detection algorithm and illustrate the use of the method on real data examples.
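
A minimal sketch of derivative-based discontinuity detection on a 3D trajectory; the finite-difference scheme and the threshold are illustrative choices, not the paper's geometric algorithm:

```python
import numpy as np

def find_discontinuities(traj, dt=0.01, thresh=50.0):
    """Flag samples where the second derivative (acceleration) of a
    3D trajectory jumps, marking candidate change points."""
    vel = np.gradient(traj, dt, axis=0)       # first derivative
    acc = np.gradient(vel, dt, axis=0)        # second derivative
    jump = np.linalg.norm(np.diff(acc, axis=0), axis=1) / dt
    return np.where(jump > thresh)[0]

# Synthetic trajectory with a kink (velocity discontinuity) at t = 1 s.
t = np.arange(0.0, 2.0, 0.01)
traj = np.stack([t, np.where(t < 1.0, t, 2.0 - t), np.zeros_like(t)], axis=1)
print(find_discontinuities(traj))             # indices near the kink at 100
```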

Keywords: change point, discontinuity, teleoperation, abrupt variation

Procedia PDF Downloads 161
16925 A Method for the Extraction of the Character's Tendency from Korean Novels

Authors: Min-Ha Hong, Kee-Won Kim, Seung-Hoon Kim

Abstract:

The characters in story-based content, such as novels and movies, are one of the core elements for understanding the story. In particular, a character’s tendency is an important factor when analyzing story-based content, because it has a significant influence on the storyline. If readers have knowledge of the tendencies of characters before reading a novel, it will be helpful for understanding the structure of conflicts, episodes, and relationships between characters in the novel, and it may therefore help readers select the novels they want to read. In this paper, we propose a method for extracting the tendency of characters from a novel written in Korean. In advance, we build a dictionary with pairs of emotional words in Korean and English, since the emotion words in the novel’s sentences express characters’ feelings. We rate the degree of polarity (positive or negative) of the words in our emotional word dictionary based on SenticNet. Then we extract characters and emotion words from the sentences in a novel. Since the polarity of a word grows stronger or weaker due to sentence features such as quotations and modifiers, our proposed method considers them when calculating the polarity of characters. The information on the extracted characters’ polarity can be used in book search or book recommendation services.
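
A minimal sketch of the aggregation step described above; the lexicon entries, modifier weights, and sentences are hypothetical stand-ins for the paper's Korean-English dictionary and sentence analysis:

```python
# Hypothetical polarity lexicon (SenticNet-style scores in [-1, 1]).
lexicon = {"brave": 0.8, "kind": 0.6, "cruel": -0.7, "cold": -0.4}
modifiers = {"very": 1.5, "slightly": 0.5}   # modifiers scale polarity

# (character, [tokens]) pairs, as if extracted from novel sentences.
sentences = [
    ("Minsu", ["very", "brave"]),
    ("Minsu", ["kind"]),
    ("Yuna", ["slightly", "cold"]),
    ("Yuna", ["cruel"]),
]

scores = {}
for character, tokens in sentences:
    weight = 1.0
    for tok in tokens:
        if tok in modifiers:
            weight *= modifiers[tok]   # strengthen or weaken polarity
        elif tok in lexicon:
            scores.setdefault(character, []).append(weight * lexicon[tok])

for character, vals in scores.items():
    print(character, round(sum(vals) / len(vals), 2))  # mean tendency
```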

Keywords: character tendency, data mining, emotion word, Korean novel

Procedia PDF Downloads 333
16924 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated through simulation; furthermore, noise is added to the data to create differently disordered distributions in the time series, in order to evaluate the algorithm's prediction of local nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, when the spread of the data is high or the number of data points is small, is explained, together with the influence of the algorithm's important local-validity parameter under different data distributions.
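
A minimal sketch of the local-linear-model idea underlying LWPR: locally weighted linear regression with a Gaussian kernel whose width plays the role of the local-validity parameter discussed above (this omits LWPR's incremental projection machinery):

```python
import numpy as np

def lwr_predict(xq, X, y, width=0.2):
    """Fit a weighted linear model around the query point xq;
    'width' controls how local the model's region of validity is."""
    w = np.exp(-(X - xq) ** 2 / (2 * width ** 2))    # Gaussian weights
    A = np.stack([np.ones_like(X), X], axis=1)       # bias + slope
    WA = A * w[:, None]
    beta = np.linalg.lstsq(WA.T @ A, WA.T @ y, rcond=None)[0]
    return beta[0] + beta[1] * xq

rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, 200)                   # irregular sampling
y = np.sin(X) + rng.normal(0, 0.1, X.size)           # noisy local nonlinearity
print(lwr_predict(np.pi / 2, X, y))                  # ~1.0
```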

Keywords: local nonlinear estimation, LWPR algorithm, online training method, locally weighted projection regression method

Procedia PDF Downloads 493
16923 The Applications of Wire Print in Composite Material Research and Fabrication Process

Authors: Hsu Yi-Chia, Hoy June-Hao

Abstract:

FDM (Fused Deposition Modeling) is a rapid prototyping method that requires no mold; however, high material and time costs have always been its major disadvantages. Wire printing is a next-generation technology that is more flexible, is easier to apply on 3D printers and robotic-arm printing, and can create its own construction methods. The research is mainly divided into three parts. The first is the method of parameterizing the generated paths and converting them to G-code for wire printing. The second concerns material experiments and the application of effects. The third concerns improvements in the operation of the mechanical equipment and the design of the robotic tool head. The purpose of this study is to develop a new wire-print method that can efficiently generate line segments and paths in three-dimensional space. Parametric modeling software transforms the digital model into G-code for a 3D printer or robotic arm; this article uses thermoplastic, clay, and composite materials for testing. The combination of materials and the wire-print process gives architects and designers the ability to research and develop works and construction methods in the future.
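
A minimal sketch of the path-to-G-code conversion step mentioned above; the coordinates, feed rate, and command set are illustrative, not the authors' toolchain:

```python
# Convert a 3D polyline (as produced by parametric modeling software)
# into simple G-code moves for a printer or robotic arm.
path = [(0.0, 0.0, 0.0), (10.0, 0.0, 5.0), (10.0, 10.0, 10.0)]
feed = 600  # mm/min, illustrative

lines = ["G21 ; units in mm", "G90 ; absolute positioning"]
x0, y0, z0 = path[0]
lines.append(f"G0 X{x0:.2f} Y{y0:.2f} Z{z0:.2f} ; travel to start")
for x, y, z in path[1:]:
    lines.append(f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f} F{feed} ; print segment")
print("\n".join(lines))
```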

Keywords: parametric software, wire print, robotic arms fabrication, composite filament additive manufacturing

Procedia PDF Downloads 126