Search results for: optimization process
3559 A Life Cycle Assessment (LCA) of Aluminum Production Process
Authors: Alaa Al Hawari, Mohammad Khader, Wael El Hasan, Mahmoud Alijla, Ammar Manawi, Abdelbaki Benamour
Abstract:
The production of aluminum alloys and ingots – starting from the processing of alumina to aluminum, and the final cast product – was studied using a Life Cycle Assessment (LCA) approach. The studied aluminum supply chain consisted of a carbon plant, a reduction plant, a casting plant, and a power plant. In the LCA model, the environmental loads of the different plants for the production of 1 ton of aluminum metal were investigated. The impact of the aluminum production was assessed in eight impact categories. The results showed that the power plant had the highest impact in all impact categories except Human Toxicity Potential (HTP), where the reduction plant had the highest impact, and Marine Aquatic Eco-Toxicity Potential (MAETP), where the carbon plant had the highest impact. Furthermore, the combined impact of the carbon plant and the reduction plant was almost the same as the impact of the power plant in the case of the Acidification Potential (AP). The carbon plant had a positive impact on the environment with respect to the Eutrophication Potential (EP) due to the production of clean water in the process. The natural gas based power plant used in the case study had 8.4 times less negative impact on the environment compared to the heavy fuel based power plant and 10.7 times less negative impact compared to the hard coal based power plant.
Keywords: Life cycle assessment, aluminum production, supply chain.
3558 Effect of Flowrate and Coolant Temperature on the Efficiency of Progressive Freeze Concentration on Simulated Wastewater
Authors: M. Jusoh, R. Mohd Yunus, M. A. Abu Hassan
Abstract:
Freeze concentration freezes or crystallises the water molecules out as ice crystals and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where ice crystals form as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is also very limited, which requires the use of scraped surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research was conducted using a newer method, progressive freeze concentration, in which ice crystals form as a layer on a designed heat exchanger surface. In this particular research, a helical structured copper crystallisation chamber was designed and fabricated. The effect of two operating conditions on the performance of the newly designed crystallisation chamber was investigated: circulation flowrate and coolant temperature. The performance of the design was evaluated by the effective partition constant, K, calculated from the volume and concentration of the solid and liquid phases. The system was also monitored by a data acquisition tool in order to follow the temperature profile throughout the process. On completing the experimental work, it was found that a higher flowrate resulted in a lower K, which translates into higher efficiency; the efficiency was highest at a flowrate of 1000 ml/min. It was also found that the process gives the highest efficiency at a coolant temperature of -6 °C.
Keywords: Freeze concentration, progressive freeze concentration, freeze wastewater treatment, ice crystals.
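The abstract evaluates performance through the effective partition constant K but does not state its formula; the relation below is a commonly used formulation from the progressive freeze concentration literature, supplied here only for illustration. K is the ratio of solute concentration in the ice phase to that in the concentrated liquid, and it can be estimated from the volume reduction of the liquid:

    K = \frac{C_S}{C_L}, \qquad \frac{C_L}{C_0} = \left(\frac{V_L}{V_0}\right)^{K-1}

where C_S and C_L are the solute concentrations in the solid (ice) and liquid phases, C_0 and V_0 are the initial concentration and volume, and V_L is the remaining liquid volume. A K close to 0 indicates near-complete rejection of solute from the ice layer, which is why a lower K corresponds to higher efficiency in the abstract.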
3557 A New Fast Intra Prediction Mode Decision Algorithm for H.264/AVC Encoders
Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf
Abstract:
The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding in H.264/AVC offers considerably higher coding efficiency compared to other compression standards, but the computational complexity increases significantly when the brute-force rate distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, we perform RDO for four modes of the intra 4x4 block; based on the distribution of the RDO costs of those modes and on the strong correlation with adjacent modes, we select the best mode of the intra 4x4 block. In the second step, based on the fact that the dominating direction of a smaller block is similar to that of a bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, since the variance of the chroma pixel values is much smaller than that of the luma ones, the proposed method uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm increases the speed of intra coding significantly with negligible loss of PSNR.
Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.
3556 Grading Fourteen Zones of Isfahan in Terms of the Impact of Globalization on the Urban Fabric of the City, Using the TOPSIS Model
Authors: A. Zahedi Yeganeh, A. Khademolhosseini, R. Mokhtari Malekabadi
Abstract:
Undoubtedly one of the most far-reaching and controversial topics considered in the past few decades has been globalization. Globalization lies at the essence of modern culture. It is a complex and rapidly expanding network of links and mutual interdependence that is an aspect of modern life, though some argue that this link has existed since the beginning of human history. If we consider globalization as a dynamic social process in which the geographical constraints governing political, economic, social and cultural relationships have been undermined, it might not be possible to simply describe its impact on the urban fabric. But since this phenomenon involves increased communication among societies (while preserving the main cultural-regional characteristics) and an increased possibility of influencing other societies, the need for more studies is felt. The main objective of this study is to grade the city zones based on a set of globalization factors affecting the urban fabric, applying the TOPSIS model. The research method is descriptive-analytical and survey based. For data analysis, the TOPSIS model and SPSS software were used, and the results were mapped for the fourteen zones using GIS software. The results show that the process of being influenced by globalization was not similar across the urban fabric of the fourteen zones of Isfahan, and there are large differences in this respect between city zones; the most affected areas are zones 5, 6 and 9 of the municipality, and the least affected are zones 4, 3 and 2.
Keywords: Grading, Globalization, Urban fabric, 14 zones of Isfahan, TOPSIS model.
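TOPSIS itself is only named in the abstract above; the sketch below is a minimal, generic implementation of the method (vector-normalised decision matrix, weighted distances to the ideal and anti-ideal solutions, closeness coefficient). The 14 x 5 indicator matrix, the equal weights and the benefit-type criteria are illustrative placeholders, not the study's actual data.

    import numpy as np

    def topsis(decision_matrix, weights, benefit_criteria):
        """Rank alternatives (rows) against criteria (columns) with TOPSIS."""
        X = np.asarray(decision_matrix, dtype=float)
        w = np.asarray(weights, dtype=float)
        # Vector normalisation, then apply the criterion weights
        V = w * X / np.linalg.norm(X, axis=0)
        # Ideal (best) and anti-ideal (worst) points per criterion
        ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        # Closeness coefficient in [0, 1]; higher means closer to the ideal solution
        return d_minus / (d_plus + d_minus)

    # Hypothetical globalization indicators for the 14 municipal zones
    scores = topsis(np.random.rand(14, 5), weights=np.full(5, 0.2),
                    benefit_criteria=np.array([True] * 5))
    print(np.argsort(scores)[::-1] + 1)  # zones ranked from most to least affected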
3555 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observable Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and achieve a 2.68 Sharpe ratio on the test dataset. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of DRL over other types of machine learning in financial markets and shows its credibility and advantages for strategic decision-making.
Keywords: Autonomous agent, deep reinforcement learning, MDP, sentiment analysis, stock market, technical indicators, twin delayed deep deterministic policy gradient.
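The 2.68 Sharpe ratio quoted above is a standard risk-adjusted return metric; the snippet below shows how such a figure is typically computed from a series of periodic portfolio returns. The annualisation factor, the zero risk-free rate and the random return series are illustrative assumptions, not details given in the abstract.

    import numpy as np

    def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
        """Annualised Sharpe ratio of a series of periodic returns."""
        excess = np.asarray(returns) - risk_free
        return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

    daily_returns = np.random.normal(0.001, 0.01, size=252)  # placeholder agent returns
    print(round(sharpe_ratio(daily_returns), 2))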
3554 Asynchronous Parallel Distributed Genetic Algorithm with Elite Migration
Authors: Kazunori Kojima, Masaaki Ishigame, Goutam Chakraborty, Hiroshi Hatsuo, Shozo Makino
Abstract:
In most popular implementations of parallel GAs, the whole population is divided into a set of subpopulations, each subpopulation executes a GA independently, and some individuals are migrated at fixed intervals on a ring topology. In these studies, the migrations usually occur 'synchronously' among subpopulations. Therefore, CPUs are not used efficiently and the communication does not occur efficiently either. A few studies have tried asynchronous migration, but it is hard to implement and setting proper parameter values is difficult. The aim of our research is to develop a migration method which is easy to implement, for which parameter values are easy to set, and which reduces communication traffic. In this paper, we propose a traffic reduction method for the asynchronous parallel distributed GA based on migration of elites only. This is a server-client model: every client executes a GA on a subpopulation and sends its elite information to the server. The server manages the elite information of each client, and migrations occur according to the evolution of the subpopulation in a client. This facilitates the reduction in communication traffic. To evaluate our proposed model, we apply it to many function optimization problems. We confirm that our proposed method performs as well as current methods, that the communication traffic is lower, and that setting of the parameters is much easier.
Keywords: Parallel Distributed Genetic Algorithm (PDGA), asynchronous PDGA, Server-Client configuration, Elite Migration.
3553 Numerical Simulation on Deformation Behaviour of Additively Manufactured AlSi10Mg Alloy
Authors: Racholsan Raj Nirmal, B. S. V. Patnaik, R. Jayaganthan
Abstract:
The deformation behaviour of additively manufactured AlSi10Mg alloy under low strains, high strain rates and elevated temperature conditions is essential to analyse and predict its response to dynamic loading such as impact and thermomechanical fatigue. The Johnson-Cook constitutive relation is used to capture the strain rate sensitivity and thermal softening effect in AlSi10Mg alloy. The Johnson-Cook failure model is widely used for exploring damage mechanics and predicting fracture in many materials. In the present work, Johnson-Cook material and damage model parameters for additively manufactured AlSi10Mg alloy have been determined numerically from four types of uniaxial tensile test. Uniaxial tensile tests with dynamic strain rates (0.1, 1, 10, 50, and 100 s-1) and elevated temperature tensile tests with three different temperature conditions (450 K, 500 K and 550 K) were performed on 3D printed AlSi10Mg alloy in ABAQUS/Explicit. Hexahedral elements were used to discretize the tensile specimens, and a fracture energy value of 43.6 kN/m was used for damage initiation. The Levenberg-Marquardt optimization method was used for the evaluation of the Johnson-Cook model parameters. It was observed that additively manufactured AlSi10Mg alloy shows relatively higher strain rate sensitivity and lower thermal stability compared to other Al alloys.
Keywords: ABAQUS, additive manufacturing, AlSi10Mg, Johnson-Cook model.
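The Johnson-Cook flow stress and failure models referred to above have standard closed forms, which the abstract does not restate; in the usual notation they read

    \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C \ln \dot{\varepsilon}^{*}\right)\left(1 - T^{*m}\right),
    \qquad
    \varepsilon_f = \left[D_1 + D_2 \exp\!\left(D_3\,\sigma^{*}\right)\right]\left[1 + D_4 \ln \dot{\varepsilon}^{*}\right]\left[1 + D_5\,T^{*}\right],

where \varepsilon_p is the equivalent plastic strain, \dot{\varepsilon}^{*} the strain rate normalised by a reference rate, T^{*} the homologous temperature, \sigma^{*} the stress triaxiality, and A, B, n, C, m and D_1 to D_5 are the material and damage parameters that the study fits (here via Levenberg-Marquardt optimization).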
3552 A Case Study of the Digital Translation of the Lucy Lloyd and Wilhelm Bleek |Xam and !Kun Notebooks into The Digital Bleek and Lloyd
Authors: F. Saptouw
Abstract:
This paper will examine the digitization process of the |Xam and !Kun notebooks, authored by Lucy Lloyd, Dorothea Bleek and Wilhelm Bleek, and their collaborators |a!kunta, ||kabbo, ≠kasin, Dia!kwain, !kweiten ta ||ken, |han≠kass'o, !nanni, Tamme, |uma, and Da during the 19th century. Detail will be provided about the status of the archive, the creation of the digital archive and selected research projects linked to the archive. The Digital Bleek and Lloyd project is an example of institutional collaboration by the University of Cape Town, University of South Africa, Iziko South African Museum, the National Library of South Africa and the Western Cape Provincial Archives and Records Service. The contemporary value of the archive will be discussed in relation to its current manifestation as a collection of archival and digital objects, each with its own set of properties and archival risk factors. This tension between the two ways to access the archive will be interrogated to shed light on the slippages between the digital object and the archival object. The primary argument is that the process of digitization generates an ontological shift in the status of the archival object. The secondary argument is an engagement with practices to curate the encounters with these ontologically shifted objects and how to relate to each as a contemporary viewer. In conclusion, this paper will argue for regarding these archival objects according to the interpretive framework utilized to engage secular relics.
Keywords: Archive, curatorship, digitization, The Digital Bleek and Lloyd.
3551 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry
Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri
Abstract:
This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance between consecutive stations, with the aim of minimizing machine halts at each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, the machine sequences, the acceptable machine inventories at the end of each period, and the necessity of fulfilling the customers' orders. The model is formulated based upon real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil, gas and automotive industries in Iran. By solving the model in GAMS software, the optimal number of runs, the end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model and solution in decreasing the lead time of end-product delivery to the customers by 20%. Accordingly, the developed model could easily be applied in wire and cable companies for optimal production planning to reduce machine halts in the manufacturing stages.
Keywords: Serial manufacturing process, production planning, wire and cable industry, goal programming approach.
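The abstract describes a linear goal programming model without giving its algebraic form; a generic GP formulation of the kind implied (minimising weighted deviations from production goals subject to the four constraint categories) can be sketched as

    \min \; \sum_{g} \left(w_g^{+} d_g^{+} + w_g^{-} d_g^{-}\right)
    \quad \text{s.t.} \quad f_g(x) + d_g^{-} - d_g^{+} = t_g, \qquad d_g^{+},\, d_g^{-} \ge 0,

where x collects decision variables such as the number of runs and end-of-period inventories per machine and period, t_g are the goal targets (machine sequences, acceptable inventories, order fulfilment), and d_g^{+}, d_g^{-} are the over- and under-achievement deviations. All symbols here are illustrative; the paper's actual variables and goals are only summarised in the abstract.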
3550 A Combined Conventional and Differential Evolution Method for Model Order Reduction
Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil
Abstract:
In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are combined: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. In the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM whose numerator and denominator polynomials are both obtained by the conventional method, to show its superiority.
Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.
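The ISE objective minimised by Differential Evolution can be made concrete with a small sketch: fix the reduced numerator, let DE search the reduced denominator coefficients, and score each candidate by the squared error between the step responses. The third-order original model, the fixed numerator and the bounds below are placeholders, not the paper's numerical example.

    import numpy as np
    from scipy import signal
    from scipy.optimize import differential_evolution

    t = np.linspace(0, 10, 500)
    # Placeholder higher-order original model (not the paper's example system)
    _, y_orig = signal.step(signal.lti([10], [1, 6, 11, 6]), T=t)

    num_r = [1.667]  # reduced-model numerator kept fixed (e.g. from the CFE step)

    def ise(den_tail):
        """Integral squared error between original and reduced step responses."""
        _, y_red = signal.step(signal.lti(num_r, [1, *den_tail]), T=t)
        return np.sum((y_orig - y_red) ** 2) * (t[1] - t[0])

    result = differential_evolution(ise, bounds=[(0.1, 10.0), (0.1, 10.0)], seed=0)
    print(result.x, result.fun)  # optimised denominator coefficients and final ISE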
3549 A TIPSO-SVM Expert System for Efficient Classification of TSTO Surrogates
Authors: Ali Sarosh, Dong Yun-Feng, Muhammad Umer
Abstract:
Fully reusable spaceplanes do not exist as yet. This implies that design qualification for an optimized, highly integrated forebody-inlet configuration of a booster-stage vehicle cannot be based on archival data of other spaceplanes. Therefore, this paper proposes a novel TIPSO-SVM expert system methodology. A non-trivial problem related to the optimization and classification of the hypersonic forebody-inlet configuration in conjunction with the mass model of the two-stage-to-orbit (TSTO) vehicle is solved. The hybrid-heuristic machine learning methodology is based on the two-step improved particle swarm optimizer (TIPSO) algorithm and a two-step support vector machine (SVM) data classification method. The efficacy of the method is tested by first evolving an optimal configuration for the hypersonic compression system using the TIPSO algorithm, and thereafter classifying the results using the two-step SVM method. In the first step, extensive but non-classified mass-model training data for multiple optimized configurations are segregated and pre-classified for learning of the SVM algorithm. In the second step, the TIPSO-optimized mass-model data are classified using the SVM classification. Results showed remarkable improvement in the configuration and mass model along with the sizing parameters.
Keywords: TIPSO-SVM expert system, TIPSO algorithm, two-step SVM method, aerothermodynamics, mass-modeling, TSTO vehicle.
3548 The Thermal Properties of Nano Magnesium Hydroxide Blended with LDPE/EVA/Irganox1010 for Insulator Application
Authors: Ahmad Aroziki Abdul Aziz, Sakinah Mohd Alauddin, Ruzitah Mohd Salleh, Mohammed Iqbal Shueb
Abstract:
This paper illustrates the effect of nano Magnesium Hydroxide (MH) loading on the thermal properties of Low Density Polyethylene (LDPE)/Poly(ethylene-co-vinyl acetate) (EVA) nanocomposite. Thermal studies were conducted, as their understanding is vital for the preliminary development of new polymeric systems. Thermal analysis of the nanocomposite was conducted using thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The major finding of the TGA was two main stages of the degradation process, found at 350 ± 25 °C and 480 ± 25 °C respectively. The nano metal filler showed better fire resistance, as it withstands higher temperatures. Furthermore, DSC analysis showed a stable glass transition temperature around 51 ± 1 °C and captured a double melting point at 84 ± 2 °C and 108 ± 2 °C. This binary melting point reflects the modification of the polymer matrix by the nanofiller, forming melting crystals of folded and extended chains. The percent crystallinity of the samples grew markedly with increasing filler content. Overall, increasing the filler loading evidently improved the degradation temperature and weight loss, and better process and phase stability were captured in the DSC.
Keywords: Cable and Wire, LDPE/EVA, Nano MH, Nano Particles, Thermal properties.
3547 An Agent Oriented Approach to Operational Profile Management
Authors: Sunitha Ramanujam, Hany El Yamany, Miriam A. M. Capretz
Abstract:
Software reliability, defined as the probability of a software system or application functioning without failure or errors over a defined period of time, has been an important area of research for over three decades. Several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by some of these research efforts involves the use of operational profiles to predict how software applications will be used. Operational profiles are a quantification of usage patterns for a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release into the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a newly composed metric in order to organize the testing process as well as to decrease the time and cost involved in this process. A prototype implementation of the proposed MAS is included as proof-of-concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.
Keywords: Software reliability, Software testing, Metrics, Distributed systems, Multi-agent systems.
3546 The Automated Soil Erosion Monitoring System (ASEMS)
Authors: George N. Zaimes, Valasia Iakovoglou, Paschalis Koutalakis, Konstantinos Ioannou, Ioannis Kosmadakis, Panagiotis Tsardaklis, Theodoros Laopoulos
Abstract:
Advancements in technology allow the development of a new system that can continuously measure surface soil erosion. Continuous soil erosion measurements are required in order to comprehend the erosional processes and propose effective and efficient conservation measures to mitigate surface erosion. Mitigating soil erosion, especially in Mediterranean countries such as Greece, is essential in order to maintain environmental and agricultural sustainability. In this paper, we present the Automated Soil Erosion Monitoring System (ASEMS), which measures surface soil erosion along with other factors that impact the erosional process. Specifically, this system measures ground level changes (surface soil erosion), rainfall, air temperature, soil temperature, and soil moisture. Another important innovation is that the data are collected by remote communication. In addition, stakeholders' awareness is a key factor in helping to reduce any environmental problem, and the different dissemination activities that were utilized are described. The overall outcome was the development of a new innovative system that can measure erosion very accurately. The data from the system help study the process of erosion and find the best possible methods to reduce erosion. The dissemination activities enhance the stakeholders' and the public's awareness of surface soil erosion problems and will lead to the adoption of more effective soil erosion conservation practices in Greece.
Keywords: Soil management, climate change, new technologies, conservation practices.
3545 Separation of Composites for Recycling: Measurement of Electrostatic Charge of Carbon and Glass Fiber Particles
Authors: J. Thirunavukkarasu, M. Poulet, T. Turner, S. Pickering
Abstract:
Composite waste from manufacturing can consist of different fiber materials, including blends of different fibers. Commercially, the recycling of composite waste is currently limited to carbon fiber waste; recycling glass fiber waste is currently not economically viable due to the low cost of virgin glass fiber and the reduced mechanical properties of the recovered fibers. For this reason, hybrid fiber materials, where carbon fiber is blended with glass fibers, cannot be recycled economically. Therefore, a separation method is required to remove the glass fiber material during the recycling process. An electrostatic separation method is chosen for this work because of the significant difference between the electrical properties of carbon and glass fibers. In this study, an experimental rig has been developed to measure the electrostatic charge achievable as the materials are passed through a tube. A range of particle lengths (80-100 µm, 6 mm and 12 mm), surface state conditions (0%SA, 2%SA and 6%SA), and several tube wall materials have been studied. A polytetrafluoroethylene (PTFE) tube and recycled fiber without sizing agent were identified as the most suitable parameters for the electrostatic separation method. It was also found that shorter fiber lengths helped to encourage particle flow and attain higher charge values. These findings can be used to develop a separation process to enable the cost-effective recycling of hybrid fiber composite waste.
Keywords: Electrostatic charging, hybrid fiber composite, recycling, short fiber composites.
3544 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion
Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto
Abstract:
In digital signal processing it is important to approximate multi-dimensional data by the method called rank reduction, in which we reduce the rank of multi-dimensional data from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, the outer product expansion, extended from SVD, was proposed and implemented for multi-dimensional data and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. Therefore, we have proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion (3-OTPE). 3-OTPE uses the power method instead of a nonlinear optimization method to decrease computing time. At the same time, the group of De Lathauwer proposed Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar in the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results for each; some results are the same, while some are slightly different. In this paper, we compare 3-OTPE to HOSVD in calculation accuracy and computing time, and clarify the difference between these two methods.
Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.
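As a point of reference for the rank-reduction idea compared above, the snippet below shows the basic two-dimensional case, truncating an SVD to its leading terms, which both HOSVD and 3-OTPE generalise to third-order tensors; the data are random placeholders.

    import numpy as np

    def truncated_svd(A, r):
        """Best rank-r approximation of a 2-D array via the SVD (Eckart-Young)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]

    A = np.random.rand(64, 64)
    A_r = truncated_svd(A, r=8)
    print(np.linalg.norm(A - A_r) / np.linalg.norm(A))  # relative approximation error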
3543 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting
Authors: Kemal Polat
Abstract:
In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for the classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of the classifier algorithms by transforming non-linearly separable datasets into linearly separable ones. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most used clustering methods in data mining and machine learning applications. In this study, as the first stage, a fuzzy C-means clustering process was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the means of the attributes to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-NN (k-nearest neighbor) classifiers were used for classifying the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
Keywords: Fuzzy C-means clustering, Fuzzy C-means clustering based attribute weighting, Pima Indians diabetes dataset, SVM.
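The weighting rule described above (scale each attribute by the ratio of its mean to its cluster centre) can be sketched as follows. Since the exact fuzzy C-means implementation is not specified in the abstract, k-means centres stand in for the FCM centres here, and random arrays replace the Pima attributes and labels; this is one interpretation of the procedure, not a verified reproduction of FCMAW.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def fcm_like_attribute_weighting(X, n_clusters=2):
        """Weight each attribute by the ratio of its mean to its cluster centre."""
        Xw = np.empty_like(X, dtype=float)
        for j in range(X.shape[1]):
            col = X[:, j].reshape(-1, 1)
            centers = KMeans(n_clusters=n_clusters, n_init=10).fit(col).cluster_centers_
            Xw[:, j] = X[:, j] * (col.mean() / centers.mean())
        return Xw

    X = np.random.rand(768, 8)        # placeholder for the 8 Pima attributes
    y = np.random.randint(0, 2, 768)  # placeholder class labels
    print(cross_val_score(SVC(), fcm_like_attribute_weighting(X), y, cv=10).mean())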
3542 A Study on the Interlaminar Shear Strength of Carbon Fiber Reinforced Plastics Depending on the Lamination Methods
Authors: Min Sang Lee, Hee Jae Shin, In Pyo Cha, Sun Ho Ko, Hyun Kyung Yoon, Hong Gun Kim, Lee Ku Kwac
Abstract:
The prepreg process, one of the CFRP (Carbon Fiber Reinforced Plastic) forming methods, takes its name from 'pre-impregnation' and is widely used for aerospace composites that require high-quality properties; it uses a fiber-reinforced woven fabric into which an epoxy hardening resin is impregnated in advance. However, this process requires continuous research and development for its commercialization, because delamination characteristically develops between the layers when a great load is applied from outside. To address this drawback, three lamination methods among the prepreg lamination methods of CFRP were designed to minimize the delamination between the layers due to external impacts. Further, the newly designed methods and the existing lamination method were analyzed through a mechanical characteristic test, the Interlaminar Shear Strength (ILSS) test. The ILSS test results confirmed that the three newly proposed lamination methods, i.e. the Roll, Half and Zigzag laminations, presented higher strengths compared to the conventional Ply lamination. The interlaminar shear strength in the Roll method, with its relatively dense fiber distribution, was approximately 1.75% higher than that in the existing Ply lamination method, and in the Half method it was approximately 0.78% higher.
Keywords: Carbon Fiber Reinforced Plastic (CFRP), Pre-Impregnation, Laminating Method, Interlaminar Shear Strength (ILSS).
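For context, the interlaminar shear strength values compared above are normally obtained from a short-beam shear test, where the commonly used relation (of the ASTM D2344 type; the abstract does not restate it) is

    \mathrm{ILSS} = \frac{3}{4} \, \frac{P_{\max}}{b \, h},

with P_max the maximum load at failure and b and h the specimen width and thickness.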
3541 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks
Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian
Abstract:
Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms such as colony algorithms, genetic algorithms and imperialist competitive algorithms have been used to minimize artificial neural network errors. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils with parameters such as pile diameter, pile buried length, eccentricity of load and undrained shear strength of the soil were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks compared to back-propagation artificial neural networks as well as the Broms and Hansen methods.
Keywords: Lateral bearing capacity, short pile, clayey soil, artificial neural network, Imperialist competition algorithm.
3540 Chose the Right Mutation Rate for Better Evolve Combinational Logic Circuits
Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert
Abstract:
Evolvable hardware (EHW) is a developing field that applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area and several different EAs have been introduced to tackle numerous problems, such as scalability, evolvability, etc. However, every time a specific EA is chosen for solving a particular task, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades, the selection of the right parameters for the EA's components for solving different 'test problems' has been investigated. In this paper the behaviour of the mutation rate for designing logic circuits, which has not been studied before, is deeply analyzed. The mutation rate in an EHW system modifies the number of inputs of each logic gate, the functionality (for example from AND to NOR) and the connectivity between logic gates. The behaviour of the mutation has been analyzed based on the number of generations, genotype redundancy and number of logic gates for the evolved circuits. The experimental results show the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits, and suggest the best mutation rate to be used for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on the mutation rate during the last 40 years is also summarized.
Keywords: Design of logic circuit, evolutionary computation, evolvable hardware, mutation rate.
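As an illustration of what the analysed mutation operator acts on, the sketch below mutates a simple gate-array genotype: each gene holds a gate function and its input connections, and the mutation rate decides, per gene, whether the functionality or the connectivity is altered. This is a generic Cartesian-style encoding assumed for illustration, not the authors' exact representation.

    import random

    GATE_TYPES = ["AND", "OR", "NAND", "NOR", "XOR"]

    def mutate(genotype, rate, n_inputs):
        """With probability `rate`, change a gate's function or rewire one of its
        inputs to an earlier gate or a primary circuit input."""
        for i, gene in enumerate(genotype):
            if random.random() < rate:
                if random.random() < 0.5:
                    gene["func"] = random.choice(GATE_TYPES)        # change functionality
                else:
                    k = random.randrange(len(gene["in"]))
                    gene["in"][k] = random.randrange(n_inputs + i)  # change connectivity
        return genotype

    # Tiny 4-gate genotype over a 3-input circuit (illustrative only)
    geno = [{"func": random.choice(GATE_TYPES),
             "in": [random.randrange(3 + i) for _ in range(2)]} for i in range(4)]
    print(mutate(geno, rate=0.1, n_inputs=3))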
3539 Manipulation of Ideological Items in the Audiovisual Translation of Voiced-Over Documentaries in the Arab World
Authors: S. Chabbak
Abstract:
In a widely globalized world, the influence of audiovisual translation on the culture and identity of audiences is unmistakable. However, in the Arab World, there is a noticeable disproportion between this growing influence and the research carried out in the field. As a matter of fact, the voiced-over documentary is one of the most abundantly translated genres in the Arab World that carries lots of ideological elements which are in many cases rendered by manipulation. However, voiced-over documentaries have hardly received any focused attention from researchers in the Arab World. This paper attempts to scrutinize the process of translation of voiced-over documentaries in the Arab World, from French into Arabic in the present case study, by sub-categorizing the ideological items subject to manipulation, identifying the techniques utilized in their translation and exploring the potential extra-linguistic factors that prompt translation agents to opt for manipulative translation. The investigation is based on a corpus of 94 episodes taken from a series entitled 360° GEO Reports, produced by the French German network ARTE in French, and acquired, translated and aired by Al Jazeera Documentary Channel for Arab audiences. The results yielded 124 cases of manipulation in four sub-categories of ideological items, and the use of 10 different oblique procedures in the process of manipulative translation. The study also revealed that manipulation is in most of the instances dictated by the editorial line of the broadcasting channel, in addition to the religious, geopolitical and socio-cultural peculiarities of the target culture.
Keywords: Audiovisual translation, ideological items, manipulation, voiced-over documentaries.
3538 An Optimal Load Shedding Approach for Distribution Networks with DGs considering Capacity Deficiency Modelling of Bulked Power Supply
Authors: A. R. Malekpour, A.R. Seifi
Abstract:
This paper discusses a genetic algorithm (GA) based optimal load shedding that can be applied to electrical distribution networks with and without dispersed generators (DG). The proposed method is also able to consider constant and variable capacity deficiency caused by unscheduled outages in the bulked generation and transmission system of the bulked power supply. The GA is employed to search for the optimal load shedding strategy in distribution networks considering DGs, in the two cases of constant and variable modelling of the bulked power supply of distribution networks. Electrical power distribution systems have a radial network and unidirectional power flows. With the advent of dispersed generation, the electrical distribution system has a locally looped network and bidirectional power flows. Therefore, DGs installed in electrical distribution systems can cause operational problems and impact existing operational schemes. The introduction of DGs in electrical distribution systems has raised many new issues at the operational and planning levels, and load shedding as an operational issue is no exception. The objective is to minimize the sum of the curtailed load and the system losses within the framework of system operational and security constraints. The proposed method is tested on a radial distribution system with 33 load points for more practical applications.
Keywords: DG, Load shedding, Optimization, Capacity Deficiency Modelling.
3537 A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impacts on human engagement, 'immersion' and related emotional or 'affective' states is both academically and technologically challenging. By exposing participants to an effective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows, with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined based on a 3-dimensional space – valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
Keywords: Virtual Reality, effective computing, effective VR, emotion-based effective physiological database.
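The classification stage described above (feature selection down to 30 features, then KNN and SVM with cross-validation) maps directly onto a standard scikit-learn pipeline; the sketch below uses random placeholder data in place of the EEG/GSR/heart-rate feature database and default hyper-parameters, which the study does not specify.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X = np.random.rand(300, 174)      # placeholder for the 174 extracted features
    y = np.random.randint(0, 4, 300)  # placeholder for the four affective clusters

    for name, clf in [("KNN", KNeighborsClassifier()), ("SVM", SVC())]:
        pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=30), clf)
        scores = cross_val_score(pipe, X, y, cv=10)
        print(name, scores.mean(), scores.std())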
3536 Vehicle Routing Problem with Mixed Fleet of Conventional and Heterogenous Electric Vehicles and Time Dependent Charging Costs
Authors: Ons Sassi, Wahiba Ramdane Cherif-Khettaf, Ammar Oulamara
Abstract:
In this paper, we consider the vehicle routing problem with a mixed fleet of conventional and heterogenous electric vehicles and time dependent charging costs, denoted VRP-HFCC, in which a set of geographically scattered customers have to be served by a mixed fleet of vehicles composed of a heterogenous fleet of Electric Vehicles (EVs), having different battery capacities and operating costs, and Conventional Vehicles (CVs). We include the possibility of charging EVs at the available charging stations during the routes in order to serve all customers. Each charging station offers a charging service with a known charger technology and time dependent charging costs. Charging stations are also subject to operating time window constraints. EVs are not necessarily compatible with all available charging technologies, and partial charging is allowed. Intermittent charging at the depot is also allowed, provided that constraints related to the electricity grid are satisfied. The objective is first to minimize the number of employed vehicles and then to minimize the total travel and charging costs. In this study, we present a Mixed Integer Programming model and develop a Charging Routing Heuristic and a Local Search Heuristic based on the Inject-Eject routine with different insertion methods. All heuristics are tested on real data instances.
Keywords: charging problem, electric vehicle, heuristics, local search, optimization, routing problem.
3535 Valuation of Green Commercial Office Building: A Preliminary Study of Malaysian Valuers’ Insight
Authors: Tuti Haryati Jasimin, Hishamuddin Mohd Ali
Abstract:
Malaysia’s green building development is gaining momentum and green buildings have become a key focus area, especially within the commercial sector, with the encouragement of government legislation and policy. Due to the emerging awareness among market players of the benefits associated with the ownership of green buildings in Malaysia, there is a need for valuers to incorporate consideration of sustainability into their assessments of property market value to ensure that green buildings continue to increase in the market. This paper analyses valuers’ current perceptions of valuation practices with regard to green issues in Malaysia. The study was based on a survey of registered real estate valuers and experts whose work is related to valuation in the Klang Valley area, who rated their views regarding the valuation of green buildings. The findings present evidence that even though Malaysian valuers have limited knowledge of green buildings, they recognise the importance of incorporating green features in the valuation process. The incorporation of green features in valuations in practice was hindered by the inadequacy of transaction data in the market. Furthermore, valuers experienced difficulty in identifying the various input parameters of green buildings and how to adjust them in order to reflect the benefit of sustainability features correctly in the valuation process. This paper focuses on the present challenges confronted by Malaysian valuers with regard to incorporating green features in their valuations.
Keywords: Green commercial office building, Malaysia, valuers’ perception, valuation.
3534 Comparative Performance of Artificial Bee Colony Based Algorithms for Wind-Thermal Unit Commitment
Authors: P. K. Singhal, R. Naresh, V. Sharma
Abstract:
This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of the wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with the wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions in the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the problem of local trapping encountered with the NBABC algorithm. These models are then used to decide the units' on/off status, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over the planning period of 24 hours.
Keywords: Artificial bee colony algorithm, economic dispatch, unit commitment, wind power.
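The Weibull probability density function used above to model wind-speed uncertainty has the standard two-parameter form (shape k, scale c), from which the overestimation and underestimation cost terms are obtained by integrating over the wind-speed, and hence wind-power, distribution:

    f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1} \exp\!\left[-\left(\frac{v}{c}\right)^{k}\right], \qquad v \ge 0.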
3532 Increasing the Resilience of Cyber Physical Systems in Smart Grid Environments using Dynamic Cells
Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser
Abstract:
Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state so as to continue providing its services. Resilient systems have the means of detecting faults and failures with the added capability of automatically restoring their normal operations. Mastering resilience in the domain of Cyber-Physical Systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations and standards, among others. In order to overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of Smart Grids. Additionally, a heuristic algorithm that works on top of the proposed modeling approach, to find resilient configurations, has been defined and implemented. More specifically, the model supports a flexible representation of Smart Grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario where the effectiveness of such an approach, when dealing with complex scenarios where adequate solutions are difficult to find, is shown.
Keywords: Cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security.
3531 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks
Authors: T. Sattarpour, D. Nazarpour
Abstract:
This paper investigates the effect of the simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Substantial attention has recently been focused on responsive loads in power system studies, alongside distributed generations (DGs). The existence of responsive loads in active distribution networks (ADNs) would have an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Looking for voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios with variations in the number of DG units, individual or simultaneous placing of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power have been established. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected in the ADN, resulting in the improvement of the voltage profile as well.
Keywords: Active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF).
3530 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators
Authors: Radwa Mabrook
Abstract:
Virtual Reality (VR) content creation is a complex and expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations to explore the potential of VR in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of the VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Therefore, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours, and they were a mix of Skype calls and in-person interviews. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in NVivo software, looking for key themes that correspond with the research questions. The study revealed that VR content creators must be adaptive to change, open to learning and comfortable with mistakes. The VR content creation process is very iterative because VR has no established workflow or visual grammar. Multi-disciplinary VR team members often speak different languages, making it hard to communicate. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn. The traditional sense of competition and the striving for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that harnesses VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.
Keywords: Collaborative culture, content creation, experimental culture, virtual reality.