Search results for: Optimal Computing Budget Allocation
1634 Optimization of Strategies and Models Review for Optimal Technologies - Based On Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, Abdelazim Negm
Abstract:
Recently, green architecture has become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures and keeping the built environment more sustainable. Green buildings should most commonly be designed to minimize the overall impact of the built environment on ecosystems in general, and on human health and the natural environment in particular. This leads to protecting occupant health, improving employee productivity, reducing pollution and sustaining the environment. In green building design, a broad range of parameters is used, which may be interrelated, contradictory, vague and of a qualitative/quantitative nature. This paper presents a comprehensive, critical state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture/building can be improved using the technologies that have been employed for analysis, to seek optimal green solution strategies and models that assist in making the best possible decision out of different alternatives.
Keywords: Green architecture/building, technologies, optimization, strategies, fuzzy techniques and models.
1633 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks
Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev
Abstract:
One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation shifted from the original N-PSK symbols by a certain number of degrees. In this paper, the legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases as the signal-to-interference-plus-noise ratio decreases.
Keywords: Channel estimation, inter-cell interference, pilot contamination attacks, wireless communications.
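To make the pilot construction concrete, the sketch below builds two N-PSK constellations rotated by two offset angles and draws a random legitimate pilot from each, which is the basic ingredient of the Shifted 2-N-PSK idea; N and the offset values are illustrative assumptions, not values recommended by the paper, and the detection statistic itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                                  # assumed N-PSK order
theta1, theta2 = np.deg2rad(13.0), np.deg2rad(47.0)    # assumed shift angles

def shifted_npsk(offset):
    # N-PSK constellation rotated by the given offset (radians)
    return np.exp(1j * (2 * np.pi * np.arange(N) / N + offset))

const1, const2 = shifted_npsk(theta1), shifted_npsk(theta2)

# Two random legitimate pilots, one per training slot.
pilot1 = rng.choice(const1)
pilot2 = rng.choice(const2)
print("pilot 1:", pilot1)
print("pilot 2:", pilot2)

# According to the abstract, it is the relation between the two shift
# angles, not their absolute values, that governs detection capability.
print("relative shift (deg):", np.rad2deg(theta2 - theta1))
```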
1632 Statistical Distributions of the Lapped Transform Coefficients for Images
Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta
Abstract:
Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and the availability of fast implementation algorithms. However, there is no proper study reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ2 test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test, χ2 test.
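As a concrete illustration of the goodness-of-fit procedure described above, the following minimal sketch fits candidate distributions to a set of AC coefficients and runs the KS test with SciPy; the coefficient data here are synthetic stand-ins drawn from a generalized Gaussian, not actual LOT/LBT coefficients.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the significant AC coefficients of one subband;
# real coefficients would come from LOT/LBT-transformed image blocks.
coeffs = stats.gennorm.rvs(beta=0.8, scale=12.0, size=5000, random_state=0)

candidates = {
    "Gaussian": stats.norm,
    "Laplacian": stats.laplace,
    "Generalized Gaussian": stats.gennorm,
}

for name, dist in candidates.items():
    params = dist.fit(coeffs)                        # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(coeffs, dist.cdf, args=params)
    print(f"{name:>20s}: KS statistic = {ks_stat:.4f}, p-value = {p_value:.3g}")
```

The distribution with the smallest KS statistic (largest p-value) would be reported as the best-fitting model for that subband; a χ2 goodness-of-fit test can be run analogously on binned coefficients.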
1631 Modelling an Investment Portfolio with Mandatory and Voluntary Contributions under M-CEV Model
Authors: Amadi Ugwulo Chinyere, Lewis D. Gbarayorks, Emem N. H. Inamete
Abstract:
In this paper, the mandatory contribution, additional voluntary contribution (AVC) and administrative charges are merged together to determine the optimal investment strategy (OIS) for a pension plan member (PPM) in a defined contribution (DC) pension scheme under the modified constant elasticity of variance (M-CEV) model. We assume that the voluntary contribution is a stochastic process, and a portfolio consisting of one risk-free asset and one risky asset modeled by the M-CEV model is considered. Also, a stochastic differential equation consisting of the PPM’s monthly contributions, voluntary contributions and administrative charges is obtained. Furthermore, an optimization problem in the form of a Hamilton-Jacobi-Bellman equation, which is a nonlinear partial differential equation, is obtained. Using a power transformation and the change-of-variables method, explicit solutions for the OIS and the value function are obtained under constant absolute risk aversion (CARA). Furthermore, numerical simulations of the impact of some sensitive parameters on the OIS are discussed extensively. Finally, our result generalizes some existing results in the literature.
Keywords: DC pension fund, modified constant elasticity of variance, optimal investment strategies, voluntary contribution, administrative charges.
1630 Coupled Spacecraft Orbital and Attitude Modeling and Simulation in Multi-Complex Modes
Authors: Amr Abdel Azim Ali, G. A. Elsheikh, Moutaz Hegazy
Abstract:
This paper presents the verification of a modeling and simulation framework for a spacecraft (SC) attitude and orbit control system. A detailed formulation of the coupled SC orbital and attitude equations of motion is performed in order to achieve the accuracy required by the multi-target tracking and orbit correction complex modes. Correction of the target parameters based on the estimated state vector during shooting time, in order to enhance pointing accuracy, is considered. A time-optimal nonlinear feedback control technique was used in order to take full advantage of the maximum torques that the controller can deliver. The simulation provides options for visualizing the SC trajectory and attitude in a 3D environment by including an interface with V-Realm Builder and VR Sink in Simulink/MATLAB. Verification data confirm the simulation results, ensuring that the model and the proposed control law can be used successfully for large and fast tracking and are robust enough to keep the pointing accuracy within the desired limits under considerable uncertainty in inertia and control torque.
Keywords: Attitude and orbit control, time-optimal nonlinear feedback control, modeling and simulation, pointing accuracy, maximum torques.
1629 A Lifetime-Guaranteed Routing Scheme in Wireless Sensor Networks
Authors: Jae Keun Park, Sung Je Hong, Kyong Hoon Kim, Tae Heum Kang, Wan Yeon Lee
Abstract:
In this paper, we propose a routing scheme that guarantees the residual lifetime of wireless sensor networks in which each sensor node operates with a limited budget of battery energy. The scheme maximizes the communication QoS while sustaining the residual battery lifetime of the network for a specified duration. The communication paths of the wireless nodes are translated into a directed acyclic graph (DAG), and the maximum-flow algorithm is applied to the graph. The resulting maximum flows are assigned to sender nodes so as to maximize their communication QoS. Based on the assigned flows, the scheme determines the routing path and the transmission rate of data packets so that no sensor node on the path exhausts its battery energy before the specified duration.
Keywords: Sensor network, battery, residual lifetime, routing scheme, QoS.
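A minimal sketch of the max-flow step on a DAG is shown below using NetworkX; the toy topology, link capacities and node names are illustrative assumptions, not the paper's network or energy model.

```python
import networkx as nx

# Toy sensor network expressed as a DAG; capacities stand in for the data
# volume each link can sustain within the nodes' battery budgets over the
# required lifetime (values are illustrative only).
G = nx.DiGraph()
for u, v, cap in [
    ("s1", "relay_a", 5), ("s1", "relay_b", 3),
    ("s2", "relay_b", 4),
    ("relay_a", "sink", 6), ("relay_b", "sink", 5),
]:
    G.add_edge(u, v, capacity=cap)

# A virtual super-source lets several sender nodes be handled in one pass.
G.add_edge("src", "s1", capacity=5)
G.add_edge("src", "s2", capacity=4)

flow_value, flow_dict = nx.maximum_flow(G, "src", "sink")
print("total sustainable flow:", flow_value)
for sender in ("s1", "s2"):
    # the flow assigned to each sender bounds its transmission rate
    print(sender, "assigned rate:", sum(flow_dict[sender].values()))
```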
1628 Application of Griddization Management to Construction Hazard Management
Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu
Abstract:
Hazard management that can prevent fatal accidents and property losses is a fundamental process during the buildings’ construction stage. However, due to a lack of safety supervision resources and operational pressures, the conduct of hazard management is poor and ineffective in China. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to ensure that the process of hazard management is efficient and effective. After exploring the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, a griddization computing infrastructure for construction hazard management is designed, which includes five layers: a resource entity layer, an information management layer, a task management layer, a knowledge transformation layer and an application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city, district and construction-site levels according to grid principles. Last, a griddization management process including hazard identification, assessment and control is developed. Meanwhile, all stakeholders of construction safety management, such as owners, contractors, supervision organizations and government departments, should take the corresponding responsibilities in this process. Finally, a case study based on actual construction hazard identification, assessment and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the designed model is that it realizes information sharing and cooperative management between the various safety management departments.
Keywords: Construction hazard, grid management, griddization computing, process.
1627 Optimization of Microwave-Assisted Extraction of Cherry Laurel (Prunus laurocerasus L.) Fruit Using Response Surface Methodology
Authors: Ivana T. Karabegović, Saša S. Stojičević, Dragan T. Veličković, Nada Č. Nikolić, Miodrag L. Lazić
Abstract:
The optimization of a microwave-assisted extraction of cherry laurel (Prunus laurocerasus) fruit using methanol was studied. The influence of the process parameters (microwave power, plant material-to-solvent ratio and extraction time) on the extraction efficiency was optimized by using response surface methodology. The predicted maximum yield of extractive substances (41.85 g/100 g fresh plant material) was obtained at a microwave power of 600 W and a plant material-to-solvent ratio of 0.2 g/cm3 after 26 minutes of extraction, while a mean value of 40.80±0.41 g/100 g fresh plant material was obtained from laboratory experiments. This proves the applicability of the model in predicting optimal extraction conditions with minimal labor and time consumption. The results indicated that all process parameters affected the extraction efficiency, with extraction time being the most important factor. In order to rationalize production, the optimal economic conditions, which give a large total extract yield with minimal energy and solvent consumption, were found.
Keywords: Cherry laurel, Extraction, Multiple regression modeling, Microwave.
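To make the response-surface step concrete, the sketch below fits a second-order polynomial model to a small set of design points and searches it for the settings that maximize the predicted yield; the design points, yields and bounds are invented placeholders, not the paper's experimental data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design points: microwave power (W), material-to-solvent
# ratio (g/cm^3), extraction time (min), and yield (g/100 g fresh material).
X = np.array([
    [300, 0.10, 10], [300, 0.10, 30], [300, 0.20, 10], [300, 0.20, 30],
    [600, 0.10, 10], [600, 0.10, 30], [600, 0.20, 10], [600, 0.20, 30],
    [450, 0.15, 20], [450, 0.15, 20], [600, 0.20, 26], [450, 0.20, 26],
    [600, 0.15, 20],
])
y = np.array([27.0, 30.5, 30.0, 33.8, 33.0, 36.5, 36.2, 40.4,
              38.0, 37.6, 41.2, 39.5, 39.0])

def quad_features(points):
    # full second-order model: intercept, linear, interaction, quadratic terms
    p, r, t = np.atleast_2d(points).T
    return np.column_stack([np.ones_like(p), p, r, t,
                            p * r, p * t, r * t, p**2, r**2, t**2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def predicted_yield(x):
    return (quad_features(x) @ beta)[0]

# Search the factor space for the settings that maximize the predicted yield.
bounds = [(300, 600), (0.10, 0.20), (10, 30)]
res = minimize(lambda x: -predicted_yield(x), x0=[450, 0.15, 20], bounds=bounds)
print("optimal (power W, ratio g/cm^3, time min):", np.round(res.x, 3))
print("predicted yield (g/100 g):", round(-res.fun, 2))
```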
1626 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as selecting the members who play in a game and the game strategy, based on the analysis of accumulated sports data has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze the data using various statistical techniques in order to win games. However, it is difficult to analyze the game data for each play, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is proposed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, which is considered to be difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions considered for the replacement of players are whether or not the lineup should be changed and whether or not a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected the accumulated NBA data from the 2019-2020 season. We then apply the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: Recurrent Neural Network, players lineup, basketball data, decision making model.
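A minimal sketch of the combined RNN + NN architecture described above is given below in PyTorch; the feature dimensions, sequence length and dummy tensors are assumptions for illustration, and the training loop, data pipeline and actual NBA play-by-play features are not reproduced.

```python
import torch
import torch.nn as nn

class LineupScoreModel(nn.Module):
    """Sketch: an LSTM branch for play-by-play scoring sequences combined
    with a feed-forward branch for on-court situation/lineup features."""
    def __init__(self, seq_features=4, situation_features=8, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(seq_features, hidden, batch_first=True)
        self.situation_net = nn.Sequential(
            nn.Linear(situation_features, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)     # predicted score contribution

    def forward(self, seq, situation):
        _, (h_n, _) = self.rnn(seq)              # last hidden state of the LSTM
        merged = torch.cat([h_n[-1], self.situation_net(situation)], dim=1)
        return self.head(merged)

# Dummy batch: 32 samples, 16-step scoring sequences and situation vectors
# (all values are placeholders, not NBA data).
model = LineupScoreModel()
seq = torch.randn(32, 16, 4)
situation = torch.randn(32, 8)
pred = model(seq, situation)

loss = nn.MSELoss()(pred, torch.randn(32, 1))
loss.backward()                                  # the model is trainable end to end
print(pred.shape)                                # torch.Size([32, 1])
```

Comparing the predicted scores of candidate lineups (for example, the current lineup versus a Small Ball lineup) in the current situation would then drive the substitution decision.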
1625 Decision Location and Resource Requirement for Relief Goods Assembly
Authors: Glenda Minguito, Jenith Banluta
Abstract:
One of the critical aspects of humanitarian operations is the distribution of relief goods to an affected community. The common assumption is that relief goods are prepositioned during disasters, which is not applicable in developing countries like the Philippines. During disasters, the on-the-ground government agencies and responders have to procure, sort, weigh and pack the relief goods. There is a need to review relief goods preparation, as it seriously affects the delivery of aid necessary for human survival. This study also identifies the ideal location of the assembly hub to minimize the distance to the affected community. This paper reveals that location and resources depend on the type of disasters encountered at the local level. The Center-of-Gravity method and the Multiple Activity Chart were applied in the analysis.
Keywords: Humanitarian supply chain, location decision, resource allocation, local level.
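The Center-of-Gravity step mentioned above reduces to a demand-weighted average of the affected communities' coordinates; a minimal sketch follows, with coordinates and demand figures that are purely illustrative (the Multiple Activity Chart analysis is not covered here).

```python
import numpy as np

# Hypothetical affected communities: (x, y) map coordinates in km and the
# number of relief packs each needs; figures are illustrative only.
coords = np.array([[2.0, 5.0], [6.5, 1.5], [4.0, 8.0], [9.0, 6.0]])
demand = np.array([120, 300, 80, 150])

# Center-of-gravity method: the demand-weighted mean of the coordinates is
# the standard approximation for a hub site that keeps weighted distance low.
hub = (coords * demand[:, None]).sum(axis=0) / demand.sum()
print("suggested assembly hub location:", hub)

# Demand-weighted straight-line distance for the suggested hub, useful for
# comparing against candidate sites such as an existing gym or warehouse.
dist = np.linalg.norm(coords - hub, axis=1)
print("demand-weighted distance:", float((dist * demand).sum()))
```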
1624 Collaborative Environmental Management: A Case Study Research of Stakeholders’ Collaboration in the Nigerian Oil-producing Region
Authors: Favour Makuochukwu Orji, Yingkui Zhao
Abstract:
A myriad of environmental issues face the Nigerian industrial region, resulting from oil and gas production, mining, manufacturing and domestic wastes. Amidst these, much effort has been directed by stakeholders at the Nigerian oil-producing regions because of the impact of these regions on the wider Nigerian economy. Although collaborative environmental management has been noted as an effective approach to managing environmental issues, little attention has been given to the roles and practices of stakeholders in effecting a collaborative environmental management framework for the Nigerian oil-producing region. This paper produces a framework to expand and deepen knowledge relating to stakeholders' collaborative roles in managing environmental issues in the Nigerian oil-producing region. The knowledge is derived from an analysis of stakeholders' practices, studied through multiple case studies using document analysis. Selected documents of key stakeholders (Nigerian government agencies, multinational oil companies and host communities) were analyzed. Open and selective coding was employed manually during document analysis of the data collected from the offices and websites of the stakeholders. The findings showed that the stakeholders have a range of roles, practices, interests, drivers and barriers regarding their collaborative roles in managing environmental issues. While they have interests in efficient resource use, compliance with standards, sharing of responsibilities, generation of new solutions and shared objectives, there is evidence of major barriers, including resource allocation, disjointed policy, ineffective monitoring, diverse socio-economic interests, lack of stakeholder commitment and limited knowledge sharing. However, host communities hold deep concerns over the collaborative roles of stakeholders with economic interests, particularly where government agencies and multinational oil companies are involved. With these barriers and concerns, genuine stakeholder collaboration is found to be limited, and as a result, optimal environmental management practices and policies have not been successfully implemented in the Nigerian oil-producing region. A framework is produced that describes practices characterizing collaborative environmental management which might be employed to satisfy the stakeholders' interests. The framework recommends critical factors, based on the findings, which may guide collaborative environmental management in the oil-producing regions. The recommendations are designed to re-define the practices of stakeholders in managing environmental issues in the oil-producing regions, not as something wholly new, but as an approach essential for implementing a sustainable environmental policy. This research outcome may clarify areas for future research as well as contribute to industry guidance in the area of collaborative environmental management.
Keywords: Collaborative environmental management framework, document analysis, case studies, multinational oil companies, Nigerian oil-producing region, stakeholders analysis.
1623 Influence of Taguchi Selected Parameters on Properties of CuO-ZrO2 Nanoparticles Produced via Sol-gel Method
Authors: H. Abdizadeh, Y. Vahidshad
Abstract:
The present paper discusses the selection of process parameters for obtaining the optimal nanocrystallite size in a CuO-ZrO2 catalyst. Several parameters change the inorganic structure and influence the hydrolysis and condensation reactions. A statistical design-of-experiments method is implemented in order to optimize the experimental conditions of CuO-ZrO2 nanoparticle preparation. This method is applied to the experiments using a standard L16 orthogonal array. The crystallite size is considered as the index and is used for the analysis under conditions where the parameters vary. The effects of pH, H2O/precursor molar ratio (R), time and temperature of calcination, chelating agent and alcohol volume are particularly investigated among all other parameters. In accordance with the Taguchi results, it is found that temperature has the greatest impact on the particle size. The pH and the H2O/precursor molar ratio have low influence compared with temperature. The alcohol volume as well as the time has almost no effect compared with all other parameters. Temperature also has an influence on the morphology and the amorphous structure of zirconia. The optimal conditions are determined by using the Taguchi method. The nanocatalyst is studied by DTA-TG, XRD, EDS, SEM and TEM. The results of this research indicate that it is possible to vary the structure, morphology and properties of the sol-gel product by controlling the above-mentioned parameters.
Keywords: CuO-ZrO2 Nanoparticles, Sol-gel, Taguchi method.
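A minimal sketch of a Taguchi-style main-effects analysis follows, using a smaller-is-better signal-to-noise ratio to rank factor influence; the factor levels and crystallite sizes in the table are placeholders (and only three factors over eight runs are shown), not the paper's L16 measurements.

```python
import numpy as np
import pandas as pd

# Illustrative Taguchi-style results: factor levels per run and the measured
# crystallite size (nm). A real L16 array has 16 runs and more factors.
runs = pd.DataFrame({
    "temperature": [500, 500, 650, 650, 750, 750, 900, 900],
    "pH":          [3,   7,   3,   7,   3,   7,   3,   7],
    "R":           [10,  20,  20,  10,  10,  20,  20,  10],
    "size_nm":     [38,  35,  27,  25,  18,  17,  24,  26],
})

# Smaller-is-better signal-to-noise ratio: S/N = -10*log10(mean(y^2));
# larger S/N corresponds to smaller crystallite size.
runs["sn"] = -10 * np.log10(runs["size_nm"] ** 2)

for factor in ["temperature", "pH", "R"]:
    effect = runs.groupby(factor)["sn"].mean()
    print(f"\nMain effect of {factor} (mean S/N per level):")
    print(effect)
    # the range of the mean S/N across levels is used to rank factor influence
    print(f"S/N range: {effect.max() - effect.min():.2f}")
```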
1622 Selection the Optimum Cooling Scheme for Generators based on the Electro-Thermal Analysis
Authors: Diako Azizi, Ahmad Gholami, Vahid Abbasi
Abstract:
The optimal selection of electrical insulation in electrical machinery ensures reliability during operation. From the insulation point of view, the stator is the most important part of an electrical machine. This fact reveals the requirement to inspect the electrical machine insulation together with the electro-thermal stresses. In the first step of the study, a part of the whole machine structure that covers the general characteristics of the machine is chosen; then, based on electromagnetic analysis (the finite element method), the machine operation is simulated. In the simulation results, the temperature distribution of the total structure is presented simultaneously by using electro-thermal analysis. The results of the electro-thermal analysis can be used for designing an optimal cooling system. In order to design, review and compare cooling systems, four wiring structures in the stator slots are presented. The structures are compared to each other in terms of electric and thermal distribution and the remaining life of the insulation by using finite element analysis. Following the steps of the study, an optimization algorithm is presented for the selection of the appropriate structure.
Keywords: Electrical field, field distribution, insulation, winding, finite element method, electro-thermal.
1621 The Enhancement of Training of Military Pilots Using Psychophysiological Methods
Authors: G. Kloudova, M. Stehlik
Abstract:
Optimal human performance is a key goal in the professional setting of military pilots, which is a highly challenging atmosphere. The aviation environment requires substantial cognitive effort and is rich in potential stressors. Therefore, it is important to analyze variables such as mental workload to ensure safe conditions. Pilot mental workload can be measured using several tools, but most of them are very subjective. This paper details research conducted with military pilots using psychophysiological methods such as electroencephalography (EEG) and heart rate (HR) monitoring. The data were measured in a simulator as well as under real flight conditions. All of the pilots were exposed to highly demanding flight tasks and showed large individual differences in response. On that basis, an individual pattern for each pilot was created, taking into account different EEG features and heart rate variations. Later on, it was possible to distinguish the most difficult flight tasks for each pilot, which should be trained more extensively. For training purposes, an application was developed for the instructors to decide which of the specific tasks to focus on during follow-up training. This complex system can help instructors detect the mentally demanding parts of the flight and enhance the training of military pilots to achieve optimal performance.
Keywords: Cognitive effort, human performance, military pilots, psychophysiological methods.
1620 An Efficient Technique for EMI Mitigation in Fluorescent Lamps using Frequency Modulation and Evolutionary Programming
Authors: V.Sekar, T.G.Palanivelu, B.Revathi
Abstract:
Electromagnetic interference (EMI) is one of the serious problems in most electrical and electronic appliances, including fluorescent lamps. The electronic ballast used to regulate the power flow through the lamp is the major cause of EMI. The interference arises from the high-frequency switching operation of the ballast. Formerly, some EMI mitigation techniques were in practice, but they were not satisfactory because of the hardware complexity of the circuit design, increased parasitic components, power consumption and so on. The majority of researchers focus only on EMI mitigation without considering other constraints such as cost and the effective operation of the equipment. In this paper, we propose a technique for EMI mitigation in fluorescent lamps by integrating frequency modulation and evolutionary programming. With the frequency modulation technique, switching at a single central frequency is extended to a range of frequencies, so the power is distributed throughout this range of frequencies, leading to EMI mitigation. However, in order to meet the operating frequency of the ballast and the operating power of the fluorescent lamp, an optimal modulation index is necessary for the frequency modulation. The optimal modulation index is determined using evolutionary programming. Thereby, the proposed technique mitigates the EMI to a satisfactory level without disturbing the operation of the fluorescent lamp.
Keywords: Ballast, Electromagnetic interference (EMI), EMI mitigation, Evolutionary programming (EP), Fluorescent lamp, Frequency Modulation (FM), Modulation index.
1619 Load Forecasting Using Neural Network Integrated with Economic Dispatch Problem
Authors: Mariyam Arif, Ye Liu, Israr Ul Haq, Ahsan Ashfaq
Abstract:
The high cost of fossil fuels and the intensifying installation of alternative energy generation sources are major challenges in power systems. This makes accurate load forecasting an important and challenging task for optimal energy planning and management at both the distribution and generation sides. There are many techniques to forecast load, but each technique comes with its own limitations and requires data to accurately predict the load. The Artificial Neural Network (ANN) is one such technique that can efficiently forecast the load. A comparison between two different ranges of input datasets has been applied to a dynamic ANN technique using the MATLAB Neural Network Toolbox. It has been observed that the selection of input data for training a network has significant effects on the forecasted results. Day-wise input data forecasted the load more accurately than year-wise input data. The forecasted load is then distributed among six generators by using linear programming to obtain the optimal point of generation. The algorithm is then verified by comparing the results of each generator with their respective generation limits.
Keywords: Artificial neural networks, demand-side management, economic dispatch, linear programming, power generation dispatch.
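The economic dispatch step described above can be posed as a small linear program; a minimal sketch with scipy.optimize.linprog follows, in which the incremental costs, generator limits and forecast demand are invented for illustration and the dispatch model is deliberately simplified (linear costs, no losses).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linearized dispatch: six generators with incremental costs
# ($/MWh) and output limits (MW); the demand would come from the ANN stage.
cost = np.array([18.0, 21.5, 23.0, 25.5, 27.0, 30.0])
p_min = np.array([50, 40, 30, 30, 20, 20])
p_max = np.array([300, 250, 200, 180, 150, 120])
forecast_demand = 780.0  # MW, output of the load-forecasting model

# Minimize total cost subject to sum(P_i) == demand and P_min <= P_i <= P_max.
result = linprog(
    c=cost,
    A_eq=np.ones((1, cost.size)),
    b_eq=[forecast_demand],
    bounds=list(zip(p_min, p_max)),
)

for i, p in enumerate(result.x, start=1):
    print(f"G{i}: {p:6.1f} MW (limits {p_min[i-1]}-{p_max[i-1]} MW)")
print("total cost ($/h):", round(result.fun, 2))
```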
1618 The Importance of Project Post-Implementation Reviews
Authors: Catalin-Teodor Dogaru, Ana-Maria Dogaru
Abstract:
Success means different things for different people. For us, project managers, it becomes even harder to find a definition. Many factors have to be included in the evaluation. Moreover, the literature is not very helpful, lacking consensus and neutrality. Post-implementation reviews (PIR) can be an efficient tool for evaluating how things worked on a certain project. Despite the visible progress, PIR is not a very detailed subject yet, and there is no common understanding of this matter. This may be the reason why some organizations include it in the project lifecycle and some do not. Through this paper, we point out the reasons why all project managers should pay proper attention to this important step and to the elements which can be assessed, besides the already famous triple constraints: cost, budget and time. It is essential to note that a PIR is not a checklist. It brings the edge in eliminating subjectivity and judging projects based on actual proof. Based on our experience, our success indicator model, presented in this paper, contributes to the success of the project. At the same time, it increases trust among customers, who will perceive success more objectively.
Keywords: Project, post-implementation, success, model.
1617 A New Heuristic Approach for the Large-Scale Generalized Assignment Problem
Authors: S. Raja Balachandar, K.Kannan
Abstract:
This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. It is worth mentioning that many researchers have developed algorithms for identifying the redundant constraints and variables in linear programming models. Some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here, a new heuristic approach based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The heuristic approach is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with the optimum solutions. The computational complexity of solving the GAP using this approach is proved to be O(mn²). The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used to find good solutions for highly constrained NP-hard problems.
Keywords: Combinatorial Optimization Problem, Generalized Assignment Problem, Intercept Matrix, Heuristic, Computational Complexity, NP-Hard Problems.
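For readers unfamiliar with the problem structure, the sketch below sets up a small GAP instance (agents, jobs, costs, resource requirements, capacities) and solves it with a plain greedy pass; this is only an illustrative baseline, not the intercept-matrix dominance heuristic proposed in the paper.

```python
import numpy as np

# Small GAP instance: cost[i, j] and resource[i, j] for assigning job j to
# agent i, with per-agent capacity. The instance is randomly generated.
rng = np.random.default_rng(1)
m, n = 3, 8                                    # agents, jobs
cost = rng.integers(5, 30, size=(m, n))
resource = rng.integers(1, 10, size=(m, n))
capacity = np.full(m, 30.0)

assignment = [-1] * n
remaining = capacity.copy()

# Assign jobs in order of their cheapest possible cost, each to the cheapest
# agent that still has capacity; a real heuristic would also repair or improve.
for j in sorted(range(n), key=lambda j: cost[:, j].min()):
    feasible = [i for i in range(m) if resource[i, j] <= remaining[i]]
    if not feasible:
        raise RuntimeError(f"greedy pass found no feasible agent for job {j}")
    best = min(feasible, key=lambda i: cost[i, j])
    assignment[j] = best
    remaining[best] -= resource[best, j]

total_cost = sum(cost[assignment[j], j] for j in range(n))
print("assignment (job -> agent):", assignment)
print("total cost:", int(total_cost), "remaining capacity:", remaining)
```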
1616 Optimal Location of Multi Type Facts Devices for Multiple Contingencies Using Particle Swarm Optimization
Authors: S. Sutha, N. Kamaraj
Abstract:
In the deregulated operating regime, power system security is an issue that needs due attention from researchers, given the unbundling of generation and transmission. Electric power systems are exposed to various contingencies. Network contingencies often contribute to the overloading of branches and violation of voltages, and can also lead to problems of security/stability. To maintain the security of the system, it is desirable to estimate the effect of contingencies so that pertinent control measures can be taken to improve system security. This paper presents the application of a particle swarm optimization algorithm to find the optimal locations of multi-type FACTS devices in a power system in order to eliminate or alleviate line overloads. The optimization is performed over the locations of the devices, their types, their settings and the installation cost of the FACTS devices, for single and multiple contingencies. TCSC, SVC and UPFC devices are considered and modeled for steady-state analysis. Suitable locations for the UPFC and TCSC are selected using criteria based on improved system security. The effectiveness of the proposed method is tested on the IEEE 6-bus and IEEE 30-bus test systems.
Keywords: Contingency Severity Index, Particle Swarm Optimization, Performance Index, Static Security Assessment.
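A minimal particle swarm optimization loop is sketched below; the objective is a stand-in quadratic severity index over two continuous device settings, whereas in the paper the fitness is evaluated through power-flow-based performance/severity indices and also covers device type, location and installation cost.

```python
import numpy as np

def severity_index(x):
    # placeholder objective standing in for an overload/severity index
    return (x[..., 0] - 0.4) ** 2 + 2.0 * (x[..., 1] + 0.1) ** 2

rng = np.random.default_rng(0)
n_particles, n_dims, iters = 30, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration terms

pos = rng.uniform(-1, 1, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = severity_index(pbest)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -1, 1)            # keep settings inside their limits
    val = severity_index(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best settings:", gbest, "severity:", severity_index(gbest))
```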
1615 Virtual Environments...Vehicle for Pedagogical Advancement
Authors: Elizabeth M. Hodge, Sharon K. Collins, Eric Kisling
Abstract:
Virtual environments are a hot topic in academia and, more importantly, in courses offered via distance education. Today's gaming generation views virtual worlds as strong social and interactive mediums for communicating and socializing. While institutions of higher education are challenged with increasing enrollment while balancing budget cuts, offering effective courses via distance education becomes a valid option. Educators can utilize virtual worlds to offer students an enhanced learning environment, which has the power to alleviate feelings of isolation through the promotion of communication, interaction, collaboration, teamwork, feedback, engagement and constructivist learning activities. This paper focuses on the use of virtual environments to facilitate interaction in distance education courses so as to produce positive learning outcomes for students. Furthermore, instructional strategies are reviewed and discussed for use in virtual worlds to enhance learning within a social context.
Keywords: Virtual Environments, Second Life, Instructional Strategies and Technology
1614 Formulation of Mortars with Marine Sediments
Authors: Nor-Edine Abriak, Mouhamadou Amar, Mahfoud Benzerzour
Abstract:
The transition to a more sustainable economy is driven by a reduction in the consumption of raw materials for equivalent production. The recovery of byproducts, and especially of dredged sediment, as a mineral addition in cement matrices represents an alternative that reduces raw material consumption and the construction sector's carbon footprint. However, the efficient use of sediment requires adequate and optimal treatment. Several processing techniques have so far been applied in order to improve some physicochemical properties. Heat treatment by calcination was effective in removing the organic fraction and activating the pozzolanic properties. In this article, the effects of an optimized heat treatment of marine sediments on the physico-mechanical and environmental properties of mortars are shown. A key finding is that the optimal substitution of a portion of cement by sediments calcined at 750 °C helps to maintain or improve the mechanical properties of the cement matrix in comparison with a standard reference mortar. The use of calcined sediment enhances mortar behavior in terms of mechanical strength and durability. From an environmental and life-cycle point of view, mortars formulated with treated sediments are considered inert with respect to the inert waste storage facility reference (ISDI-France).
Keywords: Sediment, calcination, cement, reuse.
1613 The Future of Electronic Money
Authors: Maria E. de Boyrie, Darlene Nelson, James A. Nelson
Abstract:
The history of money is described in relationship to the history of computing. With the transformation and acceptance of money as information, major challenges to the security of money have involved engineering, computer science, and management. Research opportunities and challenges are described as money continues its transformation into information.
Keywords: Electronic, information, money, risk.
1612 Parametric Analysis in the Electronic Sensor Frequency Adjustment Process
Authors: Rungchat Chompu-Inwai, Akararit Charoenkasemsuk
Abstract:
The use of electronic sensors in the electronics industry has become increasingly popular over the past few years, and they have become highly competitive products. The frequency adjustment process is regarded as one of the most important processes in electronic sensor manufacturing. Due to inaccuracies in the frequency adjustment process, up to 80% waste can be caused by rework; therefore, this study aims to provide a preliminary understanding of the role of the parameters used in the frequency adjustment process and to make suggestions for further improving performance. Four parameters are considered in this study: air pressure, dispensing time, vacuum force, and the distance between the needle tip and the product. A full factorial 2^k design of experiments was used to determine the parameters that significantly affect the accuracy of the frequency adjustment process, where the deviation between the frequency after adjustment and the target frequency is expected to be 0 kHz. The experiment was conducted at two levels, using two replications and with five center points added. In total, 37 experiments were carried out. The results reveal that air pressure and dispensing time significantly affect the frequency adjustment process. The mathematical relationship between these two parameters was formulated, and the optimal parameters for air pressure and dispensing time were found to be 0.45 MPa and 458 ms, respectively. The optimal parameters were examined by carrying out a confirmation experiment, in which an average deviation of 0.082 kHz was achieved.
Keywords: Design of Experiment, Electronic Sensor, Frequency Adjustment, Parametric Analysis.
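The factorial analysis described above boils down to estimating main and interaction effects from coded factor levels; a minimal sketch over a 2^2 slice (air pressure and dispensing time only, with one center-point run) follows, using placeholder responses rather than the study's measurements.

```python
import pandas as pd

# Illustrative coded design: A = air pressure, B = dispensing time, two
# replications of the four corner runs plus one center point; the response
# is the absolute frequency deviation (kHz). Values are placeholders.
df = pd.DataFrame({
    "A_pressure": [-1, -1, 1, 1, -1, -1, 1, 1, 0],
    "B_time":     [-1, 1, -1, 1, -1, 1, -1, 1, 0],
    "deviation":  [0.42, 0.25, 0.31, 0.11, 0.40, 0.27, 0.29, 0.13, 0.09],
})

corners = df[df["A_pressure"] != 0]

def effect(coded):
    # effect = mean response at the +1 level minus mean response at -1
    return corners.loc[coded > 0, "deviation"].mean() - \
           corners.loc[coded < 0, "deviation"].mean()

print("A (air pressure) effect:", round(effect(corners["A_pressure"]), 3))
print("B (dispensing time) effect:", round(effect(corners["B_time"]), 3))
print("AB interaction effect:",
      round(effect(corners["A_pressure"] * corners["B_time"]), 3))

# The center point checks for curvature relative to the corner-run average.
print("curvature check:",
      round(df.loc[df["A_pressure"] == 0, "deviation"].mean()
            - corners["deviation"].mean(), 3))
```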
1611 Optimal Green Facility Planning - Implementation of Organic Rankine Cycle System for Factory Waste Heat Recovery
Authors: Chun-Wei Lin, Yu-Lin Chen
Abstract:
As global industry develops rapidly, energy demand rises with it. A great deal of energy is consumed in the production process, much of it used to generate heat. Of the total energy consumption, 40% of the heat was used for process heat, mechanical work, chemical energy and electricity, while the remaining 50% was released into the environment. This causes energy waste and environmental pollution. There are many ways of recovering waste heat in a factory. An Organic Rankine Cycle (ORC) system can produce electricity and reduce energy costs by recovering low-temperature waste heat in the factory. In addition, the ORC is the technology with the highest power-generation efficiency for low-temperature heat recycling. However, most factory executives still hesitate because of the high implementation cost of the ORC system, even though a lot of heat is wasted. Therefore, this study constructs a nonlinear mathematical model of waste heat recovery equipment configuration to maximize profits. A particle swarm optimization algorithm is developed to generate the optimal facility installation plan for the ORC system.
Keywords: Green facility planning, organic rankine cycle, particle swarm optimization, waste heat recovery.
1610 STLF Based on Optimized Neural Network Using PSO
Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi
Abstract:
The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to a trial-and-error approach. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for performing an important task of this process, i.e., optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connecting weights for the one-day-ahead electric load forecasting problem. PSO is a random optimization method based on swarm intelligence with a powerful global optimization ability. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to the STLF of a local utility. Data are clustered according to the differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method optimized by PSO can quicken the learning speed of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to calculate but also practical and effective. It also provides a greater degree of accuracy in many cases and gives consistently lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Keywords: Large Neural Network, Short-Term Load Forecasting, Particle Swarm Optimization.
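To ground the forecasting setup, the sketch below trains a small feed-forward network for one-day-ahead forecasting on a synthetic load series with a weekend flag; it uses scikit-learn's gradient-based training as a stand-in, so the PSO-based structure and weight optimization that is the paper's contribution is not reproduced here, and all data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic daily "load" (MW): a weekly cycle plus noise; the weekend flag
# stands in for the day-type/special-day handling described in the abstract.
rng = np.random.default_rng(0)
days = 365
load = 800 + 120 * np.sin(2 * np.pi * np.arange(days) / 7) \
           + rng.normal(0, 25, size=days)
is_weekend = (np.arange(days) % 7 >= 5).astype(float)

# Features: yesterday's load, same weekday last week, and the weekend flag.
X = np.column_stack([load[6:-1], load[:-7], is_weekend[7:]])
y = load[7:]

split = 300
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAPE on held-out days: {mape:.2f}%")
```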
1609 Simulation Method for Determining the Thermally Induced Displacement of Machine Tools – Experimental Validation and Utilization in the Design Process
Abstract:
A novel simulation method to determine the displacements of machine tools due to thermal factors is presented. The specific characteristic of this method is the employment of original CAD data from the design process chain, which is interpreted by an algorithm in terms of a geometry-based allocation of convection and radiation parameters. Furthermore, analogous models describing the thermal behaviour of machine elements, which were obtained by extensive experimental testing with thermography imaging, are implemented automatically. With this, a transient simulation of the thermal field and, in turn, of the displacement of the machine tool is possible during the design phase. This method has been implemented and is already used industrially in the design of machining centres in order to improve the quality of the workpieces manufactured with them.
Keywords: Accuracy, design process, finite element analysis, machine tools, thermal simulation.
1608 Fluid Flow and Heat Transfer Structures of Oscillating Pipe Flows
Authors: Yan Su, Jane H. Davidson, F. A. Kulacki
Abstract:
The RANS method with Saffman's turbulence model was employed to solve the time-dependent turbulent Navier-Stokes and energy equations for oscillating pipe flows. The method of partial sums of the Fourier series is used to analyze the harmonic velocity and temperature results. The complete structures of the oscillating pipe flows and the averaged Nusselt numbers on the tube wall are provided by numerical simulation over wide ranges of ReA and ReR. The present numerical code is validated by comparing the laminar flow results to analytic solutions and the turbulent flow results to published experimental data, at lower and higher Reynolds numbers respectively. The effects of ReA and ReR on the velocity, temperature and Nusselt number distributions have been discussed. The enhancement of the heat transfer due to oscillating flows has also been presented. By analyzing the overall Nusselt number over wide ranges of the Reynolds number Re and the Keulegan-Carpenter number KC, the optimal ratio of the tube diameter to the oscillation amplitude is obtained, based on the existence of a nearly constant optimal KC number. The potential application of the present results to sea water cooling has also been discussed.
Keywords: Keulegan-Carpenter number, Nusselt number, Oscillating pipe flows, Reynolds number.
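The partial-sum-of-Fourier-series analysis mentioned above amounts to decomposing the periodic velocity (or temperature) signal into a few harmonics; a minimal sketch on a synthetic oscillating signal follows, purely to illustrate the harmonic decomposition, not to reproduce any simulation output.

```python
import numpy as np

# Synthetic oscillating velocity signal over one period, made of a
# fundamental plus two weaker harmonics with arbitrary phases.
n_samples = 256
t = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
u = 1.0 * np.cos(t) + 0.25 * np.cos(2 * t - 0.6) + 0.05 * np.cos(3 * t + 1.1)

coeffs = np.fft.rfft(u) / n_samples          # normalized Fourier coefficients
harmonics = 3
partial_sum = np.full_like(u, coeffs[0].real)  # mean (zeroth term)
for k in range(1, harmonics + 1):
    amp, phase = 2 * np.abs(coeffs[k]), np.angle(coeffs[k])
    print(f"harmonic {k}: amplitude {amp:.3f}, phase {phase:+.3f} rad")
    partial_sum += amp * np.cos(k * t + phase)

# The truncated series should reproduce the signal almost exactly here.
print("max reconstruction error:", np.max(np.abs(u - partial_sum)))
```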
1607 Intelligent Neural Network Based STLF
Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi
Abstract:
Short-Term Load Forecasting (STLF) plays an important role in the economic and secure operation of power systems. In this paper, a Continuous Genetic Algorithm (CGA) is employed to evolve the optimum large neural network structure and connecting weights for the one-day-ahead electric load forecasting problem. This study describes the process of developing three-layer feed-forward large neural networks for load forecasting and then presents a heuristic search algorithm for performing an important task of this process, i.e., optimal network structure design. The proposed method is applied to the STLF of a local utility. Data are clustered according to the differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. We find good performance for the large neural networks. The proposed methodology consistently gives lower percentage errors. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Keywords: Feed-forward Large Neural Network, Short-Term Load Forecasting, Continuous Genetic Algorithm.
1606 An Identification Method of Geological Boundary Using Elastic Waves
Authors: Masamitsu Chikaraishi, Mutsuto Kawahara
Abstract:
This paper focuses on a technique for identifying the geological boundary of the ground strata in front of a tunnel excavation site using the first-order adjoint method based on optimal control theory. The geological boundary is defined as the boundary between layers of different elastic modulus. In tunnel excavation, it is important to estimate the ground conditions ahead of the cutting face beforehand. Excavating into weak strata or fault fracture zones may cause extension of the construction work and human suffering. A theory for determining the geological boundary of the ground in a numerical manner is investigated, employing excavation blasts and their vibration waves as the observation references. According to optimal control theory, the performance function, described by the square sum of the residuals between computed and observed velocities, is minimized. The boundary layer is determined by minimizing this performance function. The elastic analysis governed by the Navier equation is carried out, assuming the ground to be an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary can be calculated using the adjoint equation. The weighted gradient method is effectively applied in the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. Based on the method presented in this paper, the boundaries of three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is employed. First, the blasting force is identified in order to improve the accuracy of the analysis. The geological boundary is then identified after the estimation of the blasting force. With this identification procedure, numerical analysis results that correspond closely with the observation data were obtained.
Keywords: Parameter identification, finite element method, average acceleration method, first order adjoint equation method, weighted gradient method, geological boundary, Navier equation, optimal control theory.
1605 Synthesis and Physicochemical Characterization of Biomimetic Scaffold of Gelatin/Zn-Incorporated 58S Bioactive Glass
Authors: Seyed Mohammad Hosseini, Amirhossein Moghanian
Abstract:
The main purpose of this research was to design a biomimetic system by the freeze-drying method for evaluating the effect of adding 5 and 10 mol.% of zinc (Zn) to 58S bioactive glass and gelatin (5ZnBG/G and 10ZnBG/G) in terms of structural and biological changes. The structural analyses of the samples were performed by X-ray diffraction (XRD), scanning electron microscopy (SEM) and Fourier-transform infrared (FTIR) spectroscopy. Also, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and alkaline phosphatase (ALP) activity tests were carried out to investigate MC3T3-E1 cell behavior. The SEM results demonstrated the spherical shape of the formed hydroxyapatite (HA) phases, and HA characteristic peaks were also detected by XRD after 3 days of immersion in simulated body fluid (SBF) solution. Meanwhile, the FTIR spectra showed that the intensity of the P–O peaks for 5ZnBG/G was higher than for the 10ZnBG/G and control samples. Moreover, the results of the ALP activity test illustrated that the optimal amount of Zn (5ZnBG/G) caused a considerable enhancement in bone cell growth. Taken together, the scaffold with 5 mol.% Zn was identified as the optimal sample because of its higher biocompatibility, in vitro bioactivity and MC3T3-E1 cell growth in comparison with the other samples for bone tissue engineering.
Keywords: Scaffold, gelatin, modified bioactive glass, ALP, bone tissue engineering.