Search results for: linear parameter varying systems
11045 Analysis of Adaptive Facade Systems and Evaluation of Their Applicability in Turkey
Authors: Selin Öztürk Demirkiran
Abstract:
Approaches towards sustainability and energy efficiency are significant topics of our era. These approaches need to be addressed across various fields and are relevant to multiple disciplines. Building facades, as the first surface encountering external weather conditions, should be considered and analyzed within this context. Seasonal changes driven by global warming and their influence on local climates have highlighted the necessity for building systems to adapt, emphasizing the need for long-lasting solutions. Therefore, this study aims to examine adaptive system applications using examples from similar climatic regions and buildings of different functions, classifying them according to adaptive system criteria. It also aims to explore and evaluate the current stage of such systems in Turkey and the potential for their implementation. Six building examples with different functions, two for each adaptive type, were analyzed from regions with climates similar to those in Turkey, and detailed examination sheets were prepared. The purpose of this study is to contribute to ongoing developments by presenting findings on current concepts and analyses and by proposing a distinct approach for characterizing these elements at the scale of Turkey. There is a considerable body of literature on adaptive facade design, and although application examples exist, adaptive approaches have so far been developed and implemented only partially in Turkey. It is expected that innovative solutions in this field will find a place in Turkey in the near future, following the increasing number of examples globally.
Keywords: adaptive facade, smart building facades, facade innovation, sustainability
Procedia PDF Downloads 281
11044 Role of Cryptocurrency in Portfolio Diversification
Authors: Onur Arugaslan, Ajay Samant, Devrim Yaman
Abstract:
Financial advisors and investors seek new assets which could potentially increase portfolio returns and decrease portfolio risk. Cryptocurrencies represent a relatively new asset class which could serve in both these roles. There has been very little research done on the risk/return tradeoff of a portfolio consisting of fixed income assets, stocks, and cryptocurrency. The objective of this study is a rigorous examination of this issue. The data used in the study are the monthly returns on 4-week US Treasury Bills, the S&P Investment Grade Corporate Bond Index, Bitcoin, and the S&P 500 Stock Index. The methodology used is the application of Modern Portfolio Theory to evaluate the risk-adjusted returns of portfolios with varying combinations of these assets, using the Sharpe, Treynor, and Jensen indexes as well as the Sortino and Modigliani measures. The results of the study would include the ranking of various investment portfolios based on their risk/return characteristics. The conclusions would provide objective empirical inference for investors who are interested in including cryptocurrency in their asset portfolios but are unsure of the risk/return implications.
Keywords: financial economics, portfolio diversification, fixed income securities, cryptocurrency, stock indexes
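A minimal sketch of how the risk-adjusted measures named in the abstract can be computed from monthly return series; the return figures below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Illustrative monthly returns (decimal fractions); not data from the study.
rf = np.array([0.001, 0.001, 0.002, 0.001])           # 4-week US T-bill (risk-free proxy)
market = np.array([0.020, -0.010, 0.015, 0.005])      # S&P 500 index
portfolio = np.array([0.030, -0.020, 0.001, 0.010])   # bonds + stocks + Bitcoin mix

excess = portfolio - rf
cov = np.cov(portfolio, market)
beta = cov[0, 1] / cov[1, 1]

sharpe = excess.mean() / excess.std(ddof=1)            # reward per unit of total risk
treynor = excess.mean() / beta                         # reward per unit of systematic risk
downside_dev = np.sqrt(np.mean(np.minimum(portfolio - rf, 0.0) ** 2))
sortino = excess.mean() / downside_dev                 # penalises downside volatility only
jensen_alpha = portfolio.mean() - (rf.mean() + beta * (market.mean() - rf.mean()))

print(f"Sharpe={sharpe:.3f}  Treynor={treynor:.4f}  "
      f"Sortino={sortino:.3f}  Jensen alpha={jensen_alpha:.4f}")
```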
Procedia PDF Downloads 77
11043 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study
Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros
Abstract:
This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received as the optimization variable. The methodology is evaluated as a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, pollution, and degradation. Within the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems, compared with the values estimated in simulation. The pollution analysis is performed using the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical pollution values. The soiling rate is calculated with data collected from the energy generated over two years at a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the alternative of assessing the temperature degradation of the PV modules is evaluated by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed within the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision making related to the maintenance of photovoltaic systems, which is relevant given the projected increase in solar photovoltaic installations in power systems associated with the commitments made in the Paris Agreement to reduce CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
Keywords: asset management, PV module, optimization, maintenance
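A minimal sketch of the kind of revenue trade-off the methodology evaluates: an exponential soiling model combined with cleaning costs to pick a cleaning period. All parameter values (energy, tariff, cleaning cost, soiling constants) are illustrative assumptions, not values from the Bogotá case study.

```python
import numpy as np

# Illustrative assumptions (not values from the 5.6 kWp case-study plant).
daily_energy_clean = 20.0   # kWh/day produced by a freshly cleaned array
tariff = 0.10               # $/kWh feed-in tariff
cleaning_cost = 15.0        # $ per cleaning event
soiling_loss_max = 0.08     # asymptotic fractional loss from dust build-up
tau = 30.0                  # days, time constant of the exponential soiling model

def soiling_factor(days_since_cleaning):
    """Exponential soiling model: fraction of clean output remaining after t days."""
    return 1.0 - soiling_loss_max * (1.0 - np.exp(-days_since_cleaning / tau))

def yearly_revenue(cleaning_period_days):
    """Net revenue over one year for a fixed cleaning period."""
    days = np.arange(365)
    t_since = days % cleaning_period_days
    energy = daily_energy_clean * soiling_factor(t_since)
    n_cleanings = 365 // cleaning_period_days
    return energy.sum() * tariff - n_cleanings * cleaning_cost

periods = range(10, 181, 10)
best = max(periods, key=yearly_revenue)
print(f"best cleaning period ~ {best} days, net revenue ~ {yearly_revenue(best):.2f} $")
```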
Procedia PDF Downloads 56
11042 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of a response or in the acquisition of excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for this kind of simulation, but they pose limitations such as the lack of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for running the time-critical application at the highest priority in an uninterrupted manner, reducing network latency for distributed architectures, real-time data acquisition, data storage and retrieval, user interactions, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
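One of the techniques mentioned, pinning the time-critical process to a dedicated core and running it under a real-time FIFO scheduling policy, can be sketched as below. This is a generic Linux illustration (it needs root or CAP_SYS_NICE), with assumed core and priority values, not the authors' actual test-bed code.

```python
import os
import time

def make_time_critical(cpu_core=3, priority=80):
    """Pin the current process to one core and switch to SCHED_FIFO.

    Requires root privileges (or CAP_SYS_NICE) on a Linux kernel;
    the core number and priority are illustrative choices.
    """
    os.sched_setaffinity(0, {cpu_core})                   # isolate on one CPU
    os.sched_setscheduler(0, os.SCHED_FIFO,                # real-time FIFO policy
                          os.sched_param(priority))

def control_loop(period_s=0.001, iterations=1000):
    """Periodic controller loop with a deterministic release time."""
    next_release = time.monotonic()
    worst_jitter = 0.0
    for _ in range(iterations):
        next_release += period_s
        # ... read sensors, compute control law, drive actuators ...
        delay = next_release - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        worst_jitter = max(worst_jitter, time.monotonic() - next_release)
    print(f"worst observed release jitter: {worst_jitter * 1e6:.1f} us")

if __name__ == "__main__":
    make_time_critical()
    control_loop()
```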
Procedia PDF Downloads 151
11041 Case Studies of Mitigation Methods against the Impacts of High Water Levels in the Great Lakes
Authors: Jennifer M. Penton
Abstract:
Record high lake levels in 2017 and 2019 (2017 max lake level = 75.81 m; 2018 max lake level = 75.26 m; 2019 max lake level = 75.92 m), combined with a number of severe storms in the Great Lakes region, have resulted in significant wave generation across Lake Ontario. The resulting large wave heights have led to erosion of the natural shoreline, overtopping of existing revetments, backshore erosion, and partial and complete failure of several coastal structures, which in turn have led to further erosion of the shoreline and damaged existing infrastructure. Such impacts can be seen all along the coast of Lake Ontario. Three specific locations have been chosen as case studies for this paper, each addressing erosion and/or flood mitigation methods, such as revetments and sheet piling with increased land levels. Varying site conditions and the resulting shoreline damage are compared herein. The results are reflected in the case-specific design components of the mitigation and adaptation methods and are presented in this paper.
Keywords: erosion mitigation, flood mitigation, great lakes, high water levels
Procedia PDF Downloads 177
11040 Estimation of Population Mean under Random Non-Response in Two-Phase Successive Sampling
Authors: M. Khalid, G. N. Singh
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in the presence of random non-response in two-occasion successive sampling under a two-phase set-up. Modified exponential-type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a probability distribution arising from the random non-response situation. The performances of the proposed estimators are compared with linear combinations of two estimators: (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample under complete response. Results are demonstrated through empirical studies which show the effectiveness of the proposed estimators. Suitable recommendations are made for survey practitioners.
Keywords: successive sampling, random non-response, auxiliary variable, bias, mean square error
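For readers unfamiliar with the set-up, the benchmark that the proposed estimators are compared against is usually a weighted combination of an unmatched and a matched component; the classical form below is only indicative of that structure (with the weight chosen to minimize the mean square error), and the paper's modified exponential-type estimators are not reproduced here.

```latex
% Classical composite estimator on the second occasion (indicative structure only):
%   \bar{y}_u : mean of the fresh (unmatched) sample,
%   \bar{y}_m, \bar{x}_m : second- and first-occasion means of the matched sample,
%   \bar{x}_n : first-occasion mean of the full first-occasion sample.
\hat{\bar{Y}} \;=\; \varphi\, \bar{y}_u \;+\; (1-\varphi)\, \bar{y}_m \,\frac{\bar{x}_n}{\bar{x}_m},
\qquad 0 \le \varphi \le 1
```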
Procedia PDF Downloads 523
11039 Numerical Investigations on the Coanda Effect
Authors: Florin Frunzulica, Alexandru Dumitrache, Octavian Preotu
Abstract:
The Coanda effect is the tendency of a jet to remain attached to a sufficiently long or large convex surface. Flows deflected by a curved surface have attracted great interest during the last fifty years; a major motivation for studying this phenomenon is the possibility of applying the effect to short take-off and landing aircraft and to thrust vectoring. It is also used in applications involving the mixing of two or more fluids, noise attenuation, ventilation, etc. This paper proposes a numerical study of an aerodynamic configuration that can passively amplify the Coanda effect. On a wing flap with a predetermined configuration, a channel is introduced between two particular zones: a low-pressure zone and a high-pressure zone. The secondary flow through this channel acts on the gap between the jet and the convex surface, maintaining jet attachment over a longer distance. Active control of the secondary flow, based on altering the channel section, controls the attachment of the jet to the surface and thereby the deviation angle of the jet. The numerical simulations have been performed in Ansys Fluent for a series of wing flap-channel configurations with varying jet velocity. The numerical results are in good agreement with experimental results.
Keywords: blowing jet, CFD, Coanda effect, circulation control
Procedia PDF Downloads 347
11038 Construction and Analysis of Samurai Sudoku
Authors: A. Danbaba
Abstract:
A Samurai Sudoku consists of five Sudoku square designs, each having nine treatments appearing in each row (column or sub-block) only once, such that the five Sudoku designs overlap. Two or more Samurai designs can be joined together to give an extended Samurai design. In addition, two Samurai designs, each containing five Sudoku square designs, can be mutually orthogonal (Graeco). If we superimpose two Samurai designs and obtain a pair of Latin and Greek letters in each row (column or sub-block) of the five Sudoku designs only once, then we have a Graeco Samurai design. In this paper, simple methods of constructing Samurai designs and mutually orthogonal Samurai designs are proposed. In addition, linear models and methods of data analysis for the designs are proposed.
Keywords: samurai design, graeco samurai design, sudoku design, row or column swap
Procedia PDF Downloads 270
11037 Comparison of Steel and Composite Analysis of a Multi-Storey Building
Authors: Çiğdem Avcı Karataş
Abstract:
Mitigation of structural damage caused by earthquakes and the reduction of fatalities are among the main concerns of engineers in seismically active zones of the world. To achieve this aim, many technologies have been developed in recent decades and applied in the construction and retrofit of structures. On the one hand, Turkey is well known as a country with a high level of seismicity; on the other hand, steel-composite structures appear competitive in this country by comparison with other types of structures, such as steel-only or concrete structures. Composite construction is the dominant form of construction for the multi-storey building sector. The reason composite construction is often so effective can be expressed simply: concrete is good in compression and steel is good in tension. By joining the two materials together structurally, these strengths can be exploited to produce a highly efficient design. The reduced self-weight of composite elements has a knock-on effect by reducing the forces in the elements supporting them, including the foundations. The floor depth reductions that can be achieved using composite construction can also provide significant benefits in terms of the costs of services and the building envelope. The scope of this paper covers analysis, materials take-off, cost analysis, and economic comparison of a multi-storey building with composite and steel frames. The aim of this work is to show that designing load carrying systems as composite is more economical than designing them as steel. The nine-storey building under consideration is designed according to the 2007 Turkish Earthquake Code, using static and dynamic analysis methods. Plastic analysis methods have been used for both systems; the steel system analyses have been checked for compliance with EC3 and the composite system analyses with EC4. The comparisons reveal that the composite load carrying system is more economical than the steel load carrying system, considering both the materials used in the load carrying system and the workmanship required.
Keywords: composite analysis, earthquake, steel, multi-storey building
Procedia PDF Downloads 572
11036 An Investigation on the Internal Quality Assurance System of Higher Education in Indonesia
Authors: Andi Mursidi
Abstract:
This study aims to investigate why the internal quality assurance system, which serves as the basis for the assessment of external quality assurance systems, is not well developed at universities in Indonesia. To answer this question, a single instrumental case study was used, with respondents from ten universities. The findings are that the internal quality assurance system applied so far is (1) used only to gain accreditation and (2) considered a liability rather than a necessity for meeting quality standards. Strong commitment from internal stakeholders at the college/university is needed to establish internal quality assurance systems that exceed the national standards of higher education. A high-quality college/university will have a good accreditation rank.
Keywords: internal stakeholders, internal quality assurance system, commitment, higher education
Procedia PDF Downloads 292
11035 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems
Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu
Abstract:
In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP
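A minimal sketch of the hybrid retrieval step described above, fusing a text-similarity score with a score derived from knowledge-graph neighbourhood embeddings. The toy data, the linear score fusion, and the simple neighbourhood-averaging stand-in for a GNN layer are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Toy corpus and knowledge graph (illustrative only).
passages = {"p1": np.array([0.9, 0.1]), "p2": np.array([0.2, 0.8])}   # text embeddings
entity_emb = {"KG": np.array([0.8, 0.2]), "GNN": np.array([0.7, 0.3]),
              "RAG": np.array([0.3, 0.7])}
edges = {"RAG": ["KG"], "KG": ["GNN"], "GNN": ["KG"]}                  # KG adjacency
passage_entities = {"p1": ["KG", "GNN"], "p2": ["RAG"]}                # entity links

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def graph_embedding(entity):
    """One round of neighbourhood averaging as a stand-in for a GNN layer."""
    neigh = [entity_emb[n] for n in edges.get(entity, [])] + [entity_emb[entity]]
    return np.mean(neigh, axis=0)

def retrieve(query_vec, alpha=0.6):
    """Fuse text similarity and KG/GNN similarity into one retrieval score."""
    scores = {}
    for pid, text_vec in passages.items():
        text_score = cosine(query_vec, text_vec)
        graph_vecs = [graph_embedding(e) for e in passage_entities[pid]]
        graph_score = max(cosine(query_vec, g) for g in graph_vecs)
        scores[pid] = alpha * text_score + (1 - alpha) * graph_score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(retrieve(np.array([0.85, 0.15])))   # ranked passages for a toy query
```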
Procedia PDF Downloads 46
11034 Comparing the Motion of Solar System with Water Droplet Motion to Predict the Future of Solar System
Authors: Areena Bhatti
Abstract:
The geometric arrangement of planets and moons is the result of a self-organizing system. In our solar system, the planets and moons constantly orbit the sun. The aim of this theory is to compare the motion of the solar system with the motion of a water droplet poured into a body of water. The basic methodology is to compare both motions to understand how they are related to each other. The difference between the two systems is that one is extremely fast and the other extremely slow. The role of this theory is that, by looking at the fast system, we can infer how the slow system will approach its end. Just as ripples form around a water droplet and move away from it while the droplet that formed them shrinks in size, the solar system can be expected to behave in the same way. It is therefore concluded that large and small systems can work under the same process but on different time scales, and that the motion of the solar system is the slowest form of water droplet motion.
Keywords: motion, water, sun, time
Procedia PDF Downloads 158
11033 An Approach to Analyze Testing of Nano On-Chip Networks
Authors: Farnaz Fotovvatikhah, Javad Akbari
Abstract:
The test time of a test architecture is an important factor which depends on the architecture's delay and test patterns. Here, a new architecture for storing test results based on a network on chip is presented. In addition, a simple analytical model is proposed to calculate link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors are implemented and simulated, and the test time is calculated. In addition, BIST and Ext are compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated and the test structure optimized for high-speed testing.
Keywords: test, nano on-chip network, JTAG, modelling
Procedia PDF Downloads 492
11032 Examining Customer Acceptance of Chatbots in B2B Customer Service: A Factorial Survey
Authors: Kathrin Endres, Daniela Greven
Abstract:
Although chatbots are a widely known and established communication instrument in B2C customer service, B2B industries still hesitate to implement chatbots due to uncertainty about customer acceptance. While many studies examine chatbot acceptance by B2C consumers, few focus on the B2B sector, where the customer is represented by a buying center consisting of several stakeholders. This study investigates the challenges of chatbot acceptance in B2B industries, compared to the challenges reported in the current B2C literature, by interviewing experts from German chatbot vendors. The results show many similarities between the requirements of B2B customers and B2C consumers. Still, due to the several stakeholders involved in the buying center, the characteristics of the chatbot users are more diverse but at the same time more obscured. Using a factorial survey, this study further examines customer acceptance across varying B2B chatbot designs based on the chatbot variables transparency, fault tolerance, complexity of products, value of products, and transfer to live-chat service employees. The findings show that all variables influence the propensity to use the chatbot. The results contribute to a better understanding of how firms in B2B industries can design chatbots to advance their customer service and enhance customer satisfaction.
Keywords: chatbots, technology acceptance, B2B customer service, customer satisfaction
Procedia PDF Downloads 127
11031 Modeling a Closed Loop Supply Chain with Continuous Price Decrease and Dynamic Deterministic Demand
Authors: H. R. Kamali, A. Sadegheih, M. A. Vahdat-Zad, H. Khademi-Zare
Abstract:
In this paper, a single-product, multi-echelon, multi-period closed loop supply chain is studied, including a variety of costs, time conditions, and capacities, in order to plan and determine the values and timing of component procurement, production, distribution, recycling, and disposal, especially for high-tech products whose production cost and sale price decrease over time. For this purpose, the mathematical model of the problem, which is a mixed integer linear program, is presented, and it is finally proved that the problem belongs to the category of NP-hard problems.
Keywords: closed loop supply chain, continuous price decrease, NP-hard, planning
Procedia PDF Downloads 367
11030 The Food Industry in Nigeria: Development and Quality Assurance
Authors: Agi Sunday, Agih Ukuru Agih
Abstract:
In Nigeria, the food processing sector is dominated by small and medium enterprises as well as multinational food companies. Quality standards are usually related to improving the safety of food products suitable for consumption in accordance with the specifications of food regulatory bodies. These standards are essential elements for local and international businesses and contribute to economic progress through industrial development and trade. This review takes a critical look at the development of the Nigerian food industry in terms of the quality standards that need to be considered in food production, and at ways of improving food production in Nigeria through the use of Total Quality Management (TQM) techniques and computerized systems to produce high-quality, high-value products while reducing production time and cost.
Keywords: food industry, quality assurance, Nigeria, TQM, computerized systems
Procedia PDF Downloads 460
11029 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools
Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang
Abstract:
For the past few years, the Philippines has depended mostly on oil, coal, and other fossil fuels for its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs of the country have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative to prepare for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to translate it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources: biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The method maximizes the use of technical resource-assessment methods while also taking environmental, social, and accessibility aspects into account in identifying potential sites, by utilizing and integrating two different approaches: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, Analytical Hierarchy Processing (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, various experts from different fields were consulted: scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is important to avoid bias in the hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive weights for each level from the experts' feedback, whereas the threshold values, derived from related literature, international studies, and government regulations, were reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into maps. In particular, this study uses Euclidean distance to compute the distance values for each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into five discrete suitability categories: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are grouped so that similar values fall within the same class and class boundaries are set where large differences in values occur. Results show that, over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice as the most suitable at a 75.76% suitability percentage, whereas wind has the lowest suitability percentage with a score of 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS
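A minimal numpy sketch of the normalization-and-aggregation step described above: fuzzy membership applied to distance surfaces, a weighted overlay, and a classification into five suitability classes. The rasters, weights, and thresholds below are placeholders, not the AHP-derived values used in the study, and equal intervals stand in for the Natural Breaks classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder distance/terrain rasters; the study derives these per parameter
# with GIS Euclidean-distance tools (units and extents are illustrative).
dist_road = rng.uniform(0, 20000, size=(100, 100))   # metres to nearest road
dist_grid = rng.uniform(0, 50000, size=(100, 100))   # metres to nearest grid line
slope     = rng.uniform(0, 45,    size=(100, 100))   # degrees

def fuzzy_linear(raster, ideal, worst):
    """Linear fuzzy membership: 1 at 'ideal', 0 at 'worst', clipped in between."""
    return np.clip((raster - worst) / (ideal - worst), 0.0, 1.0)

layers = [fuzzy_linear(dist_road, 0, 20000),
          fuzzy_linear(dist_grid, 0, 50000),
          fuzzy_linear(slope,     0, 45)]
weights = [0.5, 0.3, 0.2]            # placeholder AHP weights (must sum to 1)

suitability = sum(w * layer for w, layer in zip(weights, layers))   # weighted overlay

# Five suitability classes (equal intervals here; the study uses Natural Breaks).
classes = np.digitize(suitability, bins=[0.2, 0.4, 0.6, 0.8]) + 1
for c, label in enumerate(["not", "least", "suitable", "moderately", "highly"], 1):
    print(f"class {c} ({label}): {np.mean(classes == c) * 100:.1f}% of cells")
```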
Procedia PDF Downloads 153
11028 Comprehensive Analysis of Power Allocation Algorithms for OFDM Based Communication Systems
Authors: Rakesh Dubey, Vaishali Bahl, Dalveer Kaur
Abstract:
The growing demand for high-rate data transmission over wireless media requires intelligent use of electromagnetic resources, considering constraints such as power consumption, spectral efficiency, robustness against multipath propagation, and implementation complexity. Orthogonal frequency division multiplexing (OFDM) is a promising technique for next-generation wireless communication systems. Such high-rate data transfer requires proper allocation of resources such as power and capacity among the subchannels. This paper reviews the available methods of allocating power and the resulting capacity under the constraint of the Shannon limit.
Keywords: Additive White Gaussian Noise, Multi-Carrier Modulation, Orthogonal Frequency Division Multiplexing (OFDM), Signal to Noise Ratio (SNR), Water Filling
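Since water filling is one of the allocation methods listed in the keywords, a minimal sketch of the classical algorithm is given below; the subchannel gains and noise powers are illustrative assumptions.

```python
import numpy as np

def water_filling(gains, noise, total_power, tol=1e-9):
    """Classic water-filling: maximise sum log2(1 + p_k g_k / n_k) s.t. sum p_k <= P."""
    inv_cnr = noise / gains                       # 'floor height' of each subchannel
    low, high = inv_cnr.min(), inv_cnr.max() + total_power
    while high - low > tol:                       # bisect on the water level mu
        mu = 0.5 * (low + high)
        power = np.maximum(mu - inv_cnr, 0.0)
        if power.sum() > total_power:
            high = mu
        else:
            low = mu
    return np.maximum(low - inv_cnr, 0.0)

# Illustrative OFDM subchannel gains and AWGN noise powers (assumptions).
gains = np.array([1.0, 0.8, 0.3, 0.05])
noise = np.full(4, 0.1)
p = water_filling(gains, noise, total_power=1.0)
capacity = np.sum(np.log2(1.0 + p * gains / noise))   # Shannon capacity, bit/s/Hz
print(p.round(3), f"capacity ~ {capacity:.2f} bit/s/Hz")
```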
Procedia PDF Downloads 557
11027 Learning to Recommend with Negative Ratings Based on Factorization Machine
Authors: Caihong Sun, Xizi Zhang
Abstract:
Rating prediction is an important problem for recommender systems: the task is to predict the rating a user would give to an item. Most existing algorithms for the task ignore the effect of negative ratings given by users to items, yet negative ratings have a significant impact on users' purchasing decisions in practice. In this paper, we present a rating prediction algorithm based on factorization machines that considers the effect of negative ratings, inspired by loss aversion theory. The aim of this paper is to develop a concave and a convex negative disgust function to evaluate the negative ratings, respectively. Experiments are conducted on the MovieLens dataset. The experimental results demonstrate the effectiveness of the proposed methods by comparison with four other state-of-the-art approaches. Negative ratings proved to be important for the accuracy of rating prediction.
Keywords: factorization machines, feature engineering, negative ratings, recommendation systems
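For reference, the standard second-order factorization machine model that the proposed estimators build on is shown below; the concave and convex negative disgust functions are the paper's own contribution and are not reproduced here.

```latex
% Second-order factorization machine (Rendle, 2010):
\hat{y}(\mathbf{x}) \;=\; w_0 \;+\; \sum_{i=1}^{n} w_i x_i
\;+\; \sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j ,
\qquad
\langle \mathbf{v}_i, \mathbf{v}_j \rangle = \sum_{f=1}^{k} v_{i,f}\, v_{j,f}
```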
Procedia PDF Downloads 245
11026 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014; at the time, it performed better than the best known classical algorithm for a class of Max-Cut graphs. While classical algorithms have improved since then and have returned to being faster and more efficient, this was a major milestone for quantum computing, and the work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. Alongside other famous algorithms such as Grover's or Shor's, it highlights the potential that quantum computing holds. It also points to a real quantum advantage which, if the hardware continues to improve, could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. While the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
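A minimal sketch of the hybrid idea: a (mu+lambda)-style evolutionary loop searching over the QAOA angles (gamma, beta), with a placeholder cost function standing in for the expectation value of the Max-Cut Hamiltonian returned by a simulator or device. The population sizes, mutation scale, and surrogate cost are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
P_LAYERS = 2                        # QAOA depth; 2 * P_LAYERS angles in total

def qaoa_expectation(angles):
    """Placeholder for the expected Max-Cut value returned by a quantum
    simulator or device for the given (gamma, beta) angles. In a real hybrid
    loop this call is the expensive, parallelisable evaluation."""
    gammas, betas = angles[:P_LAYERS], angles[P_LAYERS:]
    return float(np.sum(np.sin(gammas) ** 2 * np.cos(betas) ** 2))

def evolve(pop_size=20, n_children=40, generations=50, sigma=0.3):
    """(mu + lambda) evolution strategy over the angle vector."""
    pop = rng.uniform(0, np.pi, size=(pop_size, 2 * P_LAYERS))
    for _ in range(generations):
        parents = pop[rng.integers(0, pop_size, size=n_children)]
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        union = np.vstack([pop, children])
        fitness = np.array([qaoa_expectation(a) for a in union])  # parallelisable
        pop = union[np.argsort(fitness)[::-1][:pop_size]]         # keep the best mu
    return pop[0], qaoa_expectation(pop[0])

best_angles, best_value = evolve()
print("best angles:", best_angles.round(3), " surrogate cost:", round(best_value, 3))
```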
Procedia PDF Downloads 62
11025 A Review of Sustainable Energy-Saving Solutions in Active and Passive Solar Systems of Zero Energy Buildings Based on the Internet of Things
Authors: Hanieh Sadat Jannesari, Hoori Jannesar, Alireza Hajian HosseinAbadi
Abstract:
In general, buildings are responsible for a considerable share of consumed energy and carbon emissions worldwide and play a significant role in formulating sustainable development strategies. Therefore, a lot of effort is put into the design and construction of zero-energy buildings (ZEBs) to help eliminate the problems associated with the reduction of energy resources and environmental degradation. Two strategies are significant in designing ZEBs: minimizing the need for energy utilization in buildings (particularly for cooling and heating) through highly energy-efficient designs and using renewable energies and other technologies to meet the remaining energy needs. This paper reviews the works related to these two strategies concerning sustainable energy-saving solutions using renewable energy technologies and the Internet of Things in ZEBs. Drawing on the theories and recently implemented projects of energy engineers in ZEBs, we have reported the required technologies within the framework of this paper's objectives. Overall, solutions based on renewable and sustainable technologies such as photovoltaic (PV) modules, thermal collectors, Phase Change Material (PCM) techniques, etc., are used in active and passive systems designed for various applications in such buildings as cooling, heating, lighting, cooking, etc. The results obtained from examining these projects show that it is possible to minimize the amount of energy required to be produced for and consumed by these buildings.
Keywords: active and passive renewable energy systems, internet of things, storage, zero energy buildings
Procedia PDF Downloads 38
11024 Thermal Hydraulic Analysis of the IAEA 10MW Benchmark Reactor under Normal Operating Condition
Authors: Hamed Djalal
Abstract:
The aim of this paper is to perform a thermal-hydraulic analysis of the IAEA 10 MW benchmark reactor, solving analytically and numerically, by means of the finite volume method, the steady-state and transient forced convection in a narrow rectangular channel between two parallel MTR-type fuel plates subjected to a cosine-shaped heat flux. A comparison between both solutions is presented to determine the minimal coolant velocity which can ensure safe reactor core cooling, such that the cladding temperature does not reach the safety limit of 90 °C. For this purpose, a computer program is developed to determine the principal parameters related to nuclear core safety, such as the temperature distributions in the fuel plate and in the coolant (light water) as a function of the inlet coolant velocity. Finally, good agreement is observed between the analytical and numerical solutions, and the obtained results are displayed graphically.
Keywords: forced convection, pressure drop, thermal hydraulic analysis, vertical heated rectangular channel
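A one-line energy balance makes the analytical part of the comparison concrete: for a channel of heated width W and heated length L with a cosine heat-flux profile, the bulk coolant temperature follows from integrating the flux along the flow direction. The symbols below are generic, not the specific values of the IAEA 10 MW benchmark, and the cladding temperature then follows by adding the film temperature rise q''/h.

```latex
% Bulk coolant temperature along the channel (-L/2 <= z <= L/2),
% for q''(z) = q''_{max} \cos(\pi z / L), mass flow rate \dot{m}, specific heat c_p:
T_f(z) \;=\; T_{in} \;+\; \frac{W}{\dot{m}\,c_p}\int_{-L/2}^{z} q''(z')\,dz'
\;=\; T_{in} \;+\; \frac{q''_{max}\, W\, L}{\pi\, \dot{m}\, c_p}
\left[\sin\!\left(\frac{\pi z}{L}\right) + 1\right]
```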
Procedia PDF Downloads 156
11023 The Impact of the Composite Expanded Graphite PCM on the PV Panel Whole Year Electric Output: Case Study Milan
Authors: Hasan A Al-Asadi, Ali Samir, Afrah Turki Awad, Ali Basem
Abstract:
Integrating phase change material (PCM) with photovoltaic (PV) panels is one of the effective techniques to minimize PV panel temperature and increase electric output. In order to investigate the impact of the PCM on the electric output of the PV panels over a whole year, a lumped-distributed parameter model for the PV-PCM module has been developed. This development considers the PCM density variation between the solid phase and the liquid phase, which increases the accuracy of the assessment of the electric output of the PV-PCM module. The second contribution is to assess the impact of the expanded composite graphite-PCM on the PV electric output in Milan for a whole year. The novel one-dimensional model has been solved using MATLAB software. The results of this model have been validated against experimental work from the literature. The weather and solar radiation data have been collected, and the impact of the expanded graphite-PCM on the electric output of the PV panel for a whole year has been investigated. The results indicate an enhancement of 2.39% in the electric output of the PV panel in Milan over a whole year.
Keywords: PV panel efficiency, PCM, numerical model, solar energy
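The electrical benefit of cooling the panel is commonly captured by the linear efficiency-temperature relation below, which is the kind of coupling such a thermal-electrical model resolves; the reference values quoted are typical generic figures, not the Milan-specific parameters of the study.

```latex
% Linear PV efficiency-temperature relation (Evans-Florschuetz form):
\eta_{PV} \;=\; \eta_{ref}\left[\,1 - \beta_{ref}\,(T_{cell} - T_{ref})\,\right],
\qquad \text{typically } \beta_{ref} \approx 0.004\ \mathrm{K^{-1}},\quad T_{ref} = 25\ ^{\circ}\mathrm{C}
```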
Procedia PDF Downloads 177
11022 An Adaptive CFAR Algorithm Based on Automatic Censoring in Heterogeneous Environments
Authors: Naime Boudemagh
Abstract:
In this work, we aim to improve the detection performance of radar systems. To this end, we propose and analyze a novel technique for censoring undesirable samples, of a priori unknown positions, that may be present in the environment under investigation. We therefore consider heterogeneous backgrounds characterized by the presence of irregularities such as clutter edge transitions and/or interfering targets. The proposed detector, termed automatic censoring constant false alarm rate (AC-CFAR), operates exclusively in a Gaussian background. It is built to allow the segmentation of the environment into regions and to switch automatically to the appropriate detector; namely, the cell-averaging CFAR (CA-CFAR), the censored mean level detector CFAR (CMLD-CFAR), or the order statistic CFAR (OS-CFAR). Monte Carlo simulations show that the AC-CFAR detector performs like the CA-CFAR in a homogeneous background. Moreover, the proposed processor exhibits considerable robustness in heterogeneous backgrounds.
Keywords: CFAR, automatic censoring, heterogeneous environments, radar systems
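For reference, a minimal sketch of the cell-averaging CFAR branch that the AC-CFAR falls back to in homogeneous regions; the window sizes, false alarm probability, and synthetic data are illustrative assumptions.

```python
import numpy as np

def ca_cfar(power, n_ref=16, n_guard=2, pfa=1e-4):
    """Cell-averaging CFAR over a 1-D square-law-detected power profile."""
    n_half = n_ref // 2
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)   # threshold factor for the desired Pfa
    detections = np.zeros_like(power, dtype=bool)
    for i in range(n_half + n_guard, len(power) - n_half - n_guard):
        lead = power[i - n_guard - n_half : i - n_guard]        # cells before the CUT
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_half]  # cells after the CUT
        noise_est = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > alpha * noise_est
    return detections

# Synthetic exponential clutter with two embedded targets (illustrative).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200)
x[60] += 30.0
x[140] += 25.0
print(np.flatnonzero(ca_cfar(x)))   # indices declared as targets
```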
Procedia PDF Downloads 603
11021 Numerical Investigation of a Supersonic Ejector for Refrigeration System
Authors: Karima Megdouli, Bourhan Taschtouch
Abstract:
Supersonic ejectors have many applications in refrigeration systems, and improving ejector performance is key to improving the efficiency of these systems. One of the main advantages of the ejector is its geometric simplicity and the absence of moving parts. This paper presents a theoretical model for evaluating the performance of a new supersonic ejector configuration for refrigeration system applications. The relationship between the flow field and the key parameters of the new configuration is illustrated by analyzing the Mach number and flow velocity contours. The method of characteristics (MOC) is used to design the supersonic nozzle of the ejector. The results obtained are compared with those obtained by CFD. The ejector is optimized by minimizing the exergy destruction due to irreversibility and shock waves. The optimization converges to an efficient optimum solution, ensuring improved and stable performance over the whole considered range of uncertain operating conditions.
Keywords: supersonic ejector, theoretical model, CFD, optimization, performance
Procedia PDF Downloads 83
11020 Long-Term Field Performance of Paving Fabric Interlayer Systems to Reduce Reflective Cracking
Authors: Farshad Amini, Kejun Wen
Abstract:
The problem of reflective cracking in pavement overlays has confronted highway engineers for many years. Stress-relieving interlayers, such as paving fabrics, have been used in an attempt to reduce or delay reflective cracking. The effectiveness of paving fabrics in reducing reflective cracking is related to joint or crack movement in the underlying pavement, crack width, overlay thickness, subgrade conditions, climate, and traffic volume. The nonwoven geotextiles are installed between the old and new asphalt layers. Paving fabrics enhance performance through two mechanisms: stress relief and waterproofing. Several factors affect performance, including proper installation, remedial work performed before the overlay, overlay thickness, variability of pavement strength, existing pavement condition, base/subgrade support condition, and traffic volume. The primary objective of this study was to conduct long-term monitoring of paving fabric interlayer systems to evaluate their effectiveness and performance. A comprehensive testing, monitoring, and analysis program was undertaken, in which twelve 500-ft pavement sections of a four-lane highway were rehabilitated and then monitored for seven years. A comparison between the performance of paving fabric treatment systems and control sections is reported. Lessons learned and the various influencing factors are discussed.
Keywords: monitoring, paving fabrics, performance, reflective cracking
Procedia PDF Downloads 335
11019 A Paradigm Shift towards Personalized and Scalable Product Development and Lifecycle Management Systems in the Aerospace Industry
Authors: David E. Culler, Noah D. Anderson
Abstract:
Integrated systems for product design, manufacturing, and lifecycle management are difficult to implement and customize. Commercial software vendors, including CAD/CAM and third-party PDM/PLM developers, create user interfaces and functionality that allow their products to be applied across many industries. The result is that systems become overloaded with functionality, difficult to navigate, and use terminology that is unfamiliar to engineers and production personnel. For example, manufacturers of automotive, aeronautical, electronics, and household products use similar but distinct methods and processes. Furthermore, each company tends to have its own preferred tools and programs for controlling work and information flow and for connecting design, planning, and manufacturing processes to business applications. This paper presents a methodology and a case study that address these issues and suggests that in the future more companies will develop personalized applications that fit the natural way their business operates. A functioning system has been implemented at a highly competitive U.S. aerospace tooling and component supplier that works with many prominent airline manufacturers around the world, including The Boeing Company, Airbus, Embraer, and Bombardier Aerospace. During the last three years, the program has produced significant benefits such as the automatic creation and management of component and assembly designs (parametric models and drawings), the extensive use of lightweight 3D data, and changes to the way projects are executed from beginning to end. CATIA (CAD/CAE/CAM) and a variety of programs developed in C#, VB.Net, HTML, and SQL make up the current system. The web-based platform is facilitating collaborative work across multiple sites around the world and improving communications with customers and suppliers. This work demonstrates that the creative use of Application Programming Interface (API) utilities, libraries, and methods is a key to automating many time-consuming tasks and linking applications together.
Keywords: PDM, PLM, collaboration, CAD/CAM, scalable systems
Procedia PDF Downloads 180
11018 Effectiveness of the Flavonoids Isolated from Thymus inodorus by Different Solvents against Some Pathogenic Microorganisms
Authors: N. Behidj, K. Benyounes, T. Dahmane, A. Allem
Abstract:
The aim of this study was to investigate the antimicrobial activity of flavonoids isolated from the aerial part of the medicinal plant Thymus inodorus, by the agar diffusion method, on the following microorganisms: Staphylococcus aureus, Escherichia coli, Pseudomonas fluorescens, Aspergillus niger, Aspergillus fumigatus, and Candida albicans. During this study, the flavonoids were extracted by stripping with steam. The yields of flavonoids are 7.242% for the aqueous extract, 28.86% for the butanol extract, 29.875% for the ethyl acetate extract, and 22.9% for the di-ethyl extract. The evaluation of the antibacterial effect shows that the diameter of the zone of inhibition varies from one microorganism to another. The values obtained show that the bacterial strain P. fluorescens and the three yeasts and molds, A. niger, A. fumigatus, and C. albicans, are the most resistant, while S. aureus is the most sensitive to the crude extracts, the stock solution, and the various dilutions. Finally, the minimum inhibitory concentration is estimated only for the crude flavonoid extract of Thymus inodorus. Indeed, these extracts inhibit the growth of Gram-positive bacteria at concentrations varying between 0.5% and 1%, while for Gram-negative bacteria it is limited to a concentration of 0.5%.
Keywords: antimicrobial activity, organic extracts, aqueous extracts, Thymus numidicus
Procedia PDF Downloads 189
11017 A Development of a Simulation Tool for Production Planning with Capacity-Booking at Specialty Store Retailer of Private Label Apparel Firms
Authors: Erika Yamaguchi, Sirawadee Arunyanrt, Shunichi Ohmori, Kazuho Yoshimoto
Abstract:
In this paper, we suggest a simulation tool for making monthly production planning decisions that maximize the profit of Specialty store retailer of Private label Apparel (SPA) firms. Most SPA firms are fabless and outsource production to the factories of their subcontractors. Every month, SPA firms book production lines and manpower in the factories. The booking is made a few months in advance, based on a demand prediction and the monthly production plan at that time. However, the demand prediction is updated month by month, and the monthly production plan changes to meet the latest prediction. SPA firms then have to adjust the capacities initially booked, within a certain range, to suit the revised plan. This booking system is called "capacity-booking". Although precise monthly production planning is an important issue for SPA firms, many firms still plan production by empirical rules. In addition, it is also a challenge for SPA firms to match their products and factories while considering their demand predictability and regulation ability. In this paper, we suggest a model that addresses these two issues. The objective is to maximize the total profit over the planning horizon, which is sales minus the costs of production, inventory, and capacity-booking penalties. To make a better monthly production plan at SPA firms, the following points should be considered: demand predictability under random trends, the production plans of the months before and after the target month, and the regulation ability of the capacity-booking. To decide the matching of products and factories for outsourcing, it is important to consider the seasonality, volume, and predictability of each product and the production possibility, size, and regulation ability of each factory. SPA firms have to consider these constraints and place orders with several factories per product. We modeled these issues as a linear program. To validate the model, an example of several computational experiments with an SPA firm is presented. We assume four typical product groups: basic, seasonal (Spring/Summer), seasonal (Fall/Winter), and spot products. As a result of the experiments, a monthly production plan was provided. In the plan, demand uncertainty arising from random trends is reduced by producing products of different types, and priority is given to producing high-margin products. In conclusion, we developed a simulation tool for making monthly production planning decisions, which is useful when the production plan is set every month. We considered the features of capacity-booking and the matching of products and factories which have different features and conditions.
Keywords: capacity-booking, SPA, monthly production planning, linear programming
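A minimal sketch of the type of linear program described above, for one product over two months with a penalty for deviating from the booked capacity. The numbers, the penalty structure, and the use of the PuLP library are assumptions for illustration, not the firm's actual model.

```python
import pulp

months = [1, 2]
demand = {1: 800, 2: 1200}          # forecast units (illustrative)
booked = {1: 900, 2: 1000}          # capacity booked months in advance
price, unit_cost, hold_cost = 30.0, 18.0, 2.0
booking_penalty = 5.0               # cost per unit of deviation from the booking

prob = pulp.LpProblem("monthly_production_planning", pulp.LpMaximize)
produce = pulp.LpVariable.dicts("produce", months, lowBound=0)
inventory = pulp.LpVariable.dicts("inventory", months, lowBound=0)
deviation = pulp.LpVariable.dicts("deviation", months, lowBound=0)

# Objective: sales minus production, inventory, and capacity-booking penalty costs.
prob += pulp.lpSum(price * demand[m] - unit_cost * produce[m]
                   - hold_cost * inventory[m]
                   - booking_penalty * deviation[m] for m in months)

prev_inv = 0
for m in months:
    prob += inventory[m] == prev_inv + produce[m] - demand[m]    # flow balance
    prob += deviation[m] >= produce[m] - booked[m]                # over-booking
    prob += deviation[m] >= booked[m] - produce[m]                # under-booking
    prev_inv = inventory[m]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for m in months:
    print(m, produce[m].value(), inventory[m].value(), deviation[m].value())
```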
Procedia PDF Downloads 521
11016 A Palmprint Identification System Based Multi-Layer Perceptron
Authors: David P. Tantua, Abdulkader Helwan
Abstract:
Biometrics has recently been used in human identification systems based on biological traits such as fingerprints and iris scans. Biometric identification systems show great efficiency and accuracy in such applications. However, these types of systems have so far been based on image processing techniques only, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and removal of unwanted components of the image. The second phase is to feed the processed images into a neural network classifier which adaptively learns and creates a class for each different image. 100 different images are used for training the system. Since this is an identification system, it should be tested with the same images; therefore, the same 100 images are used for testing, and any image outside the training set should be unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real-life applications.
Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator
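A minimal sketch of the described pipeline (median filtering, contrast adjustment, skeletonizing, Canny edge features, then an MLP trained by backpropagation). Synthetic random images replace the CASIA palmprint database so the snippet stays self-contained, and all parameter choices are illustrative assumptions.

```python
import numpy as np
from skimage import exposure, feature, filters, morphology
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def extract_features(image):
    """Preprocessing chain from the abstract, applied to one grayscale image."""
    smoothed = filters.median(image)                         # median filtering
    adjusted = exposure.rescale_intensity(smoothed)          # image adjustment
    binary = adjusted > filters.threshold_otsu(adjusted)     # binarise before skeleton
    skeleton = morphology.skeletonize(binary)                # skeletonizing
    edges = feature.canny(adjusted.astype(float))            # Canny edge detection
    return np.concatenate([skeleton.ravel(), edges.ravel()]).astype(float)

# Synthetic stand-in for the CASIA palmprint images: 20 subjects, 3 samples each.
images, labels = [], []
for subject in range(20):
    base = rng.random((32, 32))
    for _ in range(3):
        images.append(np.clip(base + 0.05 * rng.random((32, 32)), 0, 1))
        labels.append(subject)

X = np.array([extract_features(im) for im in images])
y = np.array(labels)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)                                                 # backpropagation training
print("training accuracy:", clf.score(X, y))                  # identification on seen IDs
```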
Procedia PDF Downloads 376