Search results for: Time varying frequency.
4140 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups
Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley
Abstract:
Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Because the projects interact with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan. This spatial-related context brings complexity to the group decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in complex spatial-related long-term decision and sensemaking processes. The paper aims to identify and present issues experienced in practical settings of these types of decisions. A series of exploratory semi-structured interviews with members of the two projects elicited an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes are identified around the group structure, the data usage, and the decision making within these groups. When data were made available to the group, there were commonly issues with the perceived veracity and validity of the data presented; this impacted the ability of the group to reach consensus and therefore for decisions to be made. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of the further support required for team sensemaking and decision making in complex capital projects. The paper also discusses the barriers found to effective decision making in this setting and suggests opportunities to develop decision support systems for this team strategic decision-making process. Recommendations are made for further research into the sensemaking and decision-making processes of this complex spatial-related setting.
Keywords: decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making
4139 An Agent Oriented Approach to Operational Profile Management
Authors: Sunitha Ramanujam, Hany El Yamany, Miriam A. M. Capretz
Abstract:
Software reliability, defined as the probability of a software system or application functioning without failure or errors over a defined period of time, has been an important area of research for over three decades. Several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by some of these research efforts involves the use of operational profiles to predict how software applications will be used. Operational profiles are a quantification of usage patterns for a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release into the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented, along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a new composite metric in order to organize the testing process as well as to decrease the time and cost involved in this process. A prototype implementation of the proposed MAS is included as proof-of-concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.
Keywords: Software reliability, software testing, metrics, distributed systems, multi-agent systems.
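As a quick illustration of what "a quantification of usage patterns" means in practice, here is a minimal sketch that estimates an operational profile from a logged sequence of operation calls; the log format and operation names are hypothetical, not part of the paper's MAS:

```python
from collections import Counter

def estimate_operational_profile(operation_log):
    """Estimate an operational profile: the relative usage
    frequency of each operation observed in a log."""
    counts = Counter(operation_log)
    total = sum(counts.values())
    return {op: n / total for op, n in counts.items()}

# Hypothetical log of operations invoked by a deployed system.
log = ["login", "search", "search", "checkout", "search", "login"]
profile = estimate_operational_profile(log)
print(profile)  # e.g. {'login': 0.33..., 'search': 0.5, 'checkout': 0.16...}
```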
4138 General Regression Neural Network and Back Propagation Neural Network Modeling for Predicting Radial Overcut in EDM: A Comparative Study
Authors: Raja Das, M. K. Pradhan
Abstract:
This paper presents a comparative study between two neural network models, namely the General Regression Neural Network (GRNN) and the Back Propagation Neural Network (BPNN), used to estimate the radial overcut produced during Electrical Discharge Machining (EDM). Four input parameters have been employed: discharge current (Ip), pulse on time (Ton), duty fraction (Tau) and discharge voltage (V). Recently, artificial intelligence has emerged as an effective tool that can replace time-consuming procedures in various scientific and engineering applications, particularly in the prediction and estimation of complex and nonlinear processes. Both networks are trained, and the prediction results are tested with the unseen validation set of the experiment and analysed. The performance of both networks is found to be in good agreement, with an average percentage error of less than 11%, and the correlation coefficient obtained for the validation data set for both GRNN and BPNN is more than 91%. However, GRNN is much faster to train than BPNN and is often more accurate. GRNN requires more memory space to store the model, but it features fast learning that does not require an iterative procedure, and a highly parallel structure. GRNN networks are, however, slower than multilayer perceptron networks at classifying new cases.
Keywords: Electrical-discharge machining, General Regression Neural Network, Back-propagation Neural Network, Radial Overcut.
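For readers unfamiliar with GRNN, here is a minimal sketch of its one-pass, kernel-weighted prediction (the reason it trains fast but is memory-hungry, as noted above); the EDM samples are hypothetical stand-ins:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network (Nadaraya-Watson form):
    a non-iterative, kernel-weighted average of the training
    targets. No iterative training is required, which is why GRNN
    learns fast but must keep all samples in memory."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Hypothetical EDM data: [Ip, Ton, Tau, V] -> radial overcut (mm).
X = np.array([[8, 100, 0.5, 40], [12, 200, 0.6, 45], [16, 300, 0.7, 50]], float)
y = np.array([0.08, 0.12, 0.17])
Xs = (X - X.mean(0)) / X.std(0)                   # standardize inputs
print(grnn_predict(Xs, y, Xs, sigma=0.8))
```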
4137 EEG Correlates of Trait and Mathematical Anxiety during Lexical and Numerical Error-Recognition Tasks
Authors: Alexander N. Savostyanov, Tatiana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Tatiana A. Golovko, Yulia V. Kovas
Abstract:
EEG correlates of mathematical and trait anxiety level were studied in 52 healthy Russian speakers during the execution of error-recognition tasks with lexical, arithmetic and algebraic conditions. Event-related spectral perturbations (ERSP) were used as a measure of brain activity. The ERSP plots revealed alpha/beta desynchronizations within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three conditions. The correlates of anxiety were found in the theta (4-8 Hz) and beta2 (16-20 Hz) frequency bands. In the theta band, the effects of mathematical anxiety were more strongly expressed in the lexical than in the arithmetic and algebraic conditions. The mathematical anxiety effects in the theta band were associated with differences between anterior and posterior cortical areas, whereas the effects of trait anxiety were associated with inter-hemispherical differences. In the beta1 and beta2 bands, the effects of trait and mathematical anxiety were oppositely directed: trait anxiety was associated with an increase in the amplitude of desynchronization, whereas mathematical anxiety was associated with a decrease in this amplitude. The effect of mathematical anxiety in the beta2 band was insignificant for the lexical condition but was strongest in the algebraic condition. The EEG correlates of anxiety in the theta band can be interpreted as indexes of task emotionality, whereas the reaction in the beta2 band is related to the tension of intellectual resources.
Keywords: EEG, brain activity, lexical and numerical error-recognition tasks, mathematical and trait anxiety.
4136 Reduction Conditions of Briquetted Solid Wastes Generated by the Integrated Iron and Steel Plant
Authors: Gökhan Polat, Dicle Kocaoğlu Yılmazer, Muhlis Nezihi Sarıdede
Abstract:
Iron oxides are the main input for producing iron in integrated iron and steel plants. During the production of iron from iron oxides, some wastes with high iron content occur. These main wastes can be classified as basic oxygen furnace (BOF) sludge, flue dust and rolling scale. Recycling of these wastes is of great importance both for its environmental effects and for the reduction of production costs. In this study, recycling experiments were performed on basic oxygen furnace sludge, flue dust and rolling scale, which contain 53.8%, 54.3% and 70.2% iron, respectively. These wastes were mixed together with coke as reducer, and the mixtures were pressed into cylindrical briquettes under various compacting forces from 1 ton to 6 tons. Both the stoichiometric amount of coke and twice that amount were added to investigate the effect of the coke amount on the reduction properties of the waste mixtures. The briquettes were then reduced at 1000°C and 1100°C for 30, 60, 90, 120 and 150 min in a muffle furnace. From the results of the reduction experiments, the effects of compacting force, temperature and time on the reduction ratio of the wastes were determined. It is found that a compacting force of 1 ton, a reduction time of 150 min and a temperature of 1100°C are the optimum conditions to obtain a reduction ratio higher than 75%.
Keywords: Iron oxide wastes, reduction, coke, recycling.
4135 When Psychology Meets Ecology: Cognitive Flexibility for Quarry Rehabilitation
Authors: J. Fenianos, C. Khater, D. Brouillet
Abstract:
Ecological projects are often faced with reluctance from the local communities hosting the project, especially when the project involves deviation from preset ideas or classical practices. This paper aims at appreciating the contribution of environmental psychology, through cognitive flexibility exercises, to improving the acceptability of local communities in adopting more ecological rehabilitation scenarios. The study is based on a quarry site located in Bekaa, Lebanon. Four groups were considered, with different levels of involvement, as follows: Group 1 is Training (T) – 50 hours of on-site training over 8 months; Group 2 is Awareness (A) – a 2-hour awareness-raising session; Group 3 is Flexibility (F) – 2 hours of flexibility exercises; and Group 4 is the Control (C). The results show that individuals in Group 3 (F), who followed the flexibility sessions, comparably accept the ecological rehabilitation option over the more classical one. This is also the case for the people in Group 1 (T), who followed the more time-demanding on-site training. Another experiment was conducted on a second quarry site, combining flexibility with awareness raising. This research confirms that it is possible to reduce resistance to change thanks to a limited-in-time intervention using cognitive flexibility. This methodological approach could be transferable to other environmental problems involving local communities and changes in preset perceptions.
Keywords: Acceptability, ecological restoration, environmental psychology, Lebanon, local communities, resistance to change.
4134 Experimental Study on a Solar Heat Concentrating Steam Generator
Authors: Qiangqiang Xu, Xu Ji, Jingyang Han, Changchun Yang, Ming Li
Abstract:
To replace a complex solar concentrating unit, this paper designs a solar heat-concentrating medium-temperature steam-generating system. Solar radiation is collected using a large solar collecting and heat concentrating plate and is converged to the metal evaporating pipe with highly efficient heat transfer. In the meantime, the heat loss is reduced by employing a double-glazed cover and other heat-insulating structures. Thus, a high temperature is reached in the metal evaporating pipe. The influences of the system's structure parameters on system performance are analyzed. The steam production rate and the steam production under different solar irradiance, solar collecting and heat concentrating plate area, solar collecting and heat concentrating plate temperature, and heat loss are obtained. The results show that when the solar irradiance is higher than 600 W/m2, the effective heat collecting area is 7.6 m2 and the double-glazed cover is adopted, the system heat loss amount is lower than the solar irradiance value. Stable steam is produced in the metal evaporating pipe at 100 ℃, 110 ℃, and 120 ℃, respectively. When the average solar irradiance is about 896 W/m2 and the cumulative steaming time is about 5 hours, the daily steam production of the system is about 6.174 kg. Within a single day, the solar irradiance is largest at noon, and thus the steam production rate is also largest then. Before 9:00 and after 16:00, the solar irradiance is smaller, and the steam production rate is almost 0.
Keywords: Heat concentrating, heat loss, medium temperature, solar steam production.
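As a rough plausibility check on the reported figures, a back-of-envelope energy balance reproduces the stated daily steam production; the overall efficiency used here is an assumed free parameter, not a value from the paper:

```python
# Back-of-envelope check of the reported daily steam production.
G = 896.0        # average irradiance (W/m^2), from the abstract
A = 7.6          # effective collecting area (m^2), from the abstract
hours = 5.0      # cumulative steaming time (h), from the abstract
eta = 0.13       # assumed overall collection-to-steam efficiency
h = 2.57e6       # J/kg: heating 25 C water to saturated steam at 100 C
steam = G * A * hours * 3600 * eta / h
print(steam)     # ~6.2 kg, consistent with the reported 6.174 kg
```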
4133 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home
Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As candidate models we learn a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transition and the resulting appliance status are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
Keywords: Situation awareness, smart home, IoT, machine learning, classifier.
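A minimal sketch of the classification task described above, using a Bernoulli naïve Bayes from scikit-learn; the appliance set, the snapshots and the Yes/No labels are invented stand-ins for the paper's simulated data:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical appliance on/off snapshots: [TV, stove, dishwasher, room light].
X = np.array([
    [0, 1, 0, 1],   # cooking
    [1, 0, 0, 1],   # watching TV
    [0, 0, 1, 0],   # dish washing, nobody in the room
    [0, 0, 0, 0],   # house empty
    [1, 1, 0, 1],   # cooking while the TV is on
])
# 1 = vacuum cleaning allowed (Yes), 0 = not allowed (No).
y = np.array([0, 0, 1, 1, 0])

clf = BernoulliNB().fit(X, y)
print(clf.predict([[0, 0, 0, 1]]))        # new appliance snapshot
print(clf.predict_proba([[0, 0, 0, 1]]))  # class probabilities
```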
4132 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery
Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene
Abstract:
Data flows and the purposes of reporting the data differ, depending on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrating methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and the identification of the statistical significance of data-reporting event cases. The Grubbs test is often used as it measures one extreme value at a time exceeding the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of construction of genetic algorithms which have more possibilities of extracting the best solution. For freight delivery management, the schemas of genetic algorithms' structure are used as a more effective technique. Accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
Keywords: Multi-objective decision support, analysis, data validation, freight delivery, multi-modal transportation, genetic programming methods.
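A minimal sketch of the two-sided Grubbs test at the paper's 99% confidence level, applied to hypothetical fuel-consumption reports (the data are invented):

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.01):
    """Two-sided Grubbs test for a single outlier (one extreme
    value at a time), as used for data cleaning in the paper.
    Returns the suspect value and whether it is an outlier at the
    given significance level (alpha=0.01 ~ 99% confidence)."""
    x = np.asarray(x, float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))
    g = abs(x[idx] - mean) / sd                   # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)   # t critical value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[idx], g > g_crit

# Hypothetical fuel-consumption reports (L/100 km) with one suspect value.
fuel = [28.1, 27.6, 28.4, 27.9, 41.0, 28.2, 27.7]
print(grubbs_test(fuel))  # (41.0, True) -> flagged for cleaning
```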
4131 Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay
Authors: Sharifullah Ahmed, Sarwar Jahan Md. Yasin
Abstract:
Shallow foundations on unimproved soft natural soils can undergo high consolidation and secondary settlement. For low- and medium-rise building projects on such soil conditions, a pile foundation may not be cost effective. In such cases an alternative to pile foundations may be shallow strip footings placed on a double-layered improved soil system. The upper layer of this system is untreated or cement-treated compacted sand, and the underlying layer is natural soft clay. This system reduces the settlement to an allowable limit. The current research concerns the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay. The settlement of the mentioned shallow foundation has been studied for sand layer thicknesses of 0.3 to 0.9 times the width of the footing. The response of the clay layer is assumed to be undrained for plastic loading stages and drained during consolidation stages. The response of the sand layer is drained during all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width was modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer was modeled using only the Hardening Soil Model. The groundwater level is at the top of the clay deposit, which makes the system fully saturated. A parametric study was conducted to determine the effect of the thickness, density and cementation of the sand mat, and of the density and shear strength of the soft clay layer, on the settlement of the strip foundation under uniformly distributed vertical loads of varying value. A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit, by obtaining the particular thickness of the sand mat for particular subsoil parameters that ensures no punching shear failure and no settlement beyond the allowable level. A design guideline in the form of non-dimensional charts has been developed for footing pressures equivalent to medium-rise residential or commercial building foundations with strip footings on the soft inorganic Normally Consolidated (NC) soil of Bangladesh, having void ratios from 1.0 to 1.45.
Keywords: Design charts, ground improvement, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay.
4130 Compressive Strength and Workability Characteristics of Low-Calcium Fly ash-based Self-Compacting Geopolymer Concrete
Authors: M. Fareed Ahmed, M. Fadhil Nuruddin, Nasir Shafiq
Abstract:
Due to growing environmental concerns in the cement industry, alternative cement technologies have become an area of increasing interest. It is now believed that new binders are indispensable for enhanced environmental and durability performance. Self-compacting geopolymer concrete is an innovative method and improved way of concreting operation that does not require vibration for placing and is produced by the complete elimination of ordinary Portland cement. This paper documents the assessment of the compressive strength and workability characteristics of low-calcium fly ash-based self-compacting geopolymer concrete. The essential workability properties of the freshly prepared self-compacting geopolymer concrete, such as filling ability, passing ability and segregation resistance, were evaluated using the slump flow, V-funnel, L-box and J-ring test methods. The fundamental requirements of high flowability and segregation resistance, as specified by the EFNARC guidelines on self-compacting concrete, were satisfied. In addition, the compressive strength was determined and the test results are included here. This paper also reports the effect of extra water, curing time and curing temperature on the compressive strength of self-compacting geopolymer concrete. The test results show that extra water in the concrete mix plays a significant role. Also, longer curing times and curing the concrete specimens at higher temperatures result in higher compressive strength.
Keywords: Fly ash, geopolymer concrete, self-compacting concrete, self-compacting geopolymer concrete.
4129 Measurement of Real Time Drive Cycle for Indian Roads and Estimation of Component Sizing for HEV using LABVIEW
Authors: Varsha Shah, Patel Pritesh, Patel Sagar, Prasanta Kundu, Ranjan Maheshwari
Abstract:
The performance of a vehicle depends on the driving patterns and the vehicle drive train configuration. Driving patterns depend on traffic conditions, road conditions and driver behavior. HEV design is carried out under certain constraints, like vehicle operating range, acceleration, deceleration, maximum speed and road grades, which are directly related to the driving patterns. Therefore, a detailed study of HEV performance over different drive cycles is required for the selection and sizing of HEV components. Simple hardware was designed to measure the velocity vs. time profile of the vehicle by operating the vehicle on Indian roads under real traffic conditions. To size the HEV components, a detailed dynamic model of the vehicle was developed, considering the effect of the inertia of rotating components like wheels, drive chain, engine and electric motor. Using the vehicle model and data from different Indian drive cycles, the total tractive power demanded by the vehicle and the power supplied by individual components have been calculated. Using this information, the selection and sizing of the HEV components are carried out so that the HEV performs efficiently under hostile driving conditions. The complete analysis is carried out in LabVIEW.
Keywords: BLDC motor, driving cycle, LabVIEW, ultracapacitors, vehicle dynamics.
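A minimal sketch of how total tractive power can be computed from a measured velocity-time profile using standard vehicle dynamics; all vehicle parameters and the ten-second profile are illustrative assumptions:

```python
import numpy as np

def tractive_power(v, t, mass=1200.0, cd=0.35, area=2.2,
                   cr=0.013, rho=1.2, grade=0.0, g=9.81):
    """Tractive power demand (W) along a measured drive cycle:
    inertial + rolling + aerodynamic + grade terms. All vehicle
    parameters here are illustrative assumptions."""
    a = np.gradient(v, t)                             # acceleration (m/s^2)
    f = (mass * a
         + mass * g * cr * np.cos(np.arctan(grade))   # rolling resistance
         + 0.5 * rho * cd * area * v**2               # aerodynamic drag
         + mass * g * np.sin(np.arctan(grade)))       # road grade
    return f * v

# Hypothetical 10 s velocity/time profile logged on an Indian road.
t = np.arange(0, 10, 1.0)
v = np.array([0, 2, 4, 7, 9, 10, 10, 8, 5, 2], float)
p = tractive_power(v, t)
print(p.max() / 1e3, "kW peak demand, for motor/engine sizing")
```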
4128 A Study on Bilingual Semantic Processing: Category Effects and Age Effects
Authors: Lai Yi-Hsiu
Abstract:
The present study addressed the nature of bilingual semantic processing in Mandarin Chinese and Southern Min and examined category effects and age effects. Nineteen bilingual adults of Mandarin Chinese and Southern Min, nine monolingual seniors of Mandarin Chinese, and ten monolingual seniors of Southern Min in Taiwan individually completed two semantic tasks: picture naming and category fluency. The instruments for the naming task were sixty black-and-white pictures, comprising thirty-five object pictures and twenty-five action pictures. The category fluency task also consisted of two semantic categories – objects (or nouns) and actions (or verbs). The reaction time for each picture/question was additionally calculated and analyzed. Oral productions in Mandarin Chinese and in Southern Min were compared and discussed to examine the category effects and age effects. The results of the category fluency task indicated that the informational content produced by these seniors was comparatively deteriorated, and thus they produced a smaller number of semantic-lexical items. Significant group differences were also found in the reaction time results. Category effects were significant for both adults and seniors in the semantic fluency task. The findings of the present study help characterize the nature of bilingual semantic processing in adults and seniors, and contribute to the fields of contrastive and corpus linguistics.
Keywords: Bilingual semantic processing, aging, Mandarin Chinese, Southern Min.
4127 Biochemical Characteristics of Sorghum Flour Fermented and/or Supplemented with Chickpea Flour
Authors: Omima E. Fadlallah, Abdullahi H. El Tinay, Elfadil E. Babiker
Abstract:
Sorghum flour was supplemented with 15 and 30% chickpea flour. The sorghum flour and the supplements were fermented at 35 °C for 0, 8, 16, and 24 h. Changes in pH, titrable acidity, total soluble solids, protein content, in vitro protein digestibility and amino acid composition were investigated during fermentation and/or after supplementation of the sorghum flour with chickpea. The pH of the fermenting material decreased sharply with a concomitant increase in the titrable acidity. The total soluble solids remained unchanged with progressive fermentation time. The protein content of the sorghum cultivar was found to be 9.27% and that of chickpea 22.47%. The protein content of the sorghum cultivar after supplementation with 15 and 30% chickpea was significantly (P ≤ 0.05) increased to 11.78 and 14.55%, respectively. The protein digestibility also increased after fermentation from 13.35 to 30.59 and 40.56% for the supplements, respectively. A further increment in protein content and digestibility was observed when supplemented and unsupplemented samples were fermented for different periods of time. Cooking of the fermented samples was found to increase the protein content slightly and to decrease digestibility for both supplements. The amino acid content of fermented, and fermented and cooked, supplements was determined. Supplementation was found to increase the lysine and threonine content. Cooking following fermentation decreased the lysine, isoleucine, valine and sulfur-containing amino acids.
Keywords: Amino acid, chickpea, cooking, fermentation, protein, sorghum.
4126 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology
Authors: Joseph C. Chen, Venkata Karthik Jakka
Abstract:
The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi parameter design. An L16 orthogonal array (OA) is used in the Taguchi experimental design, with five control factors at four levels each and with a non-controllable factor, vibration. A total of 32 experiments were designed to obtain the optimal parameter settings for the process. The optimal parameters identified for the shrinkage are shot volume, 1.7 cubic inch (A4); mold temperature, 130 ºF (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 inch³/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for the tensile strength are shot volume, 1.7 cubic inch (A4); mold temperature, 160 ºF (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 inch³/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength value was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding processes.
Keywords: Injection molding processes, Taguchi Parameter Design, tensile strength, shrinkage test, high-density polyethylene, HDPE.
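For reference, Taguchi parameter design ranks factor levels by signal-to-noise ratios; here is a minimal sketch of the two S/N formulas relevant to this study (smaller-the-better for shrinkage, larger-the-better for tensile strength), with hypothetical replicate data:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a 'smaller the better' response
    such as shrinkage."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    """Taguchi S/N ratio for a 'larger the better' response
    such as tensile strength."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicated measurements from one L16 run
# (two vibration noise conditions).
shrinkage = [0.0034, 0.0029]          # percent
tensile = [3080.0, 3145.0]            # psi
print(sn_smaller_the_better(shrinkage))
print(sn_larger_the_better(tensile))
```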
4125 Design and Analysis of a Piezoelectric Linear Motor Based on Rigid Clamping
Authors: Chao Yi, Cunyue Lu, Lingwei Quan
Abstract:
Piezoelectric linear motors have the characteristics of great electromagnetic compatibility, high positioning accuracy, compact structure and no deceleration mechanism, which make them promising for application in micro-miniature precision drive systems. However, most piezoelectric motors employ flexible clamping, which has insufficient rigidity and is difficult to use for rapid positioning. Another problem is that this clamping method seriously affects the vibration efficiency of the vibrating unit. In order to solve these problems, this paper proposes a piezoelectric stack linear motor based on double-end rigid clamping. First, a piezoelectric linear motor with a length of only 35.5 mm is designed. This motor is mainly composed of a motor stator, a driving foot, a ceramic friction strip, a linear guide, a pre-tightening mechanism and a base. This structure is much simpler and smaller than most similar motors, and it is easy to assemble as well as to control precisely. In addition, the properties of the piezoelectric stack are reviewed and, in order to obtain an elliptical motion trajectory of the driving head, a driving scheme based on a longitudinal-shear composite stack is innovatively proposed. Finally, impedance analysis and speed performance testing were performed on the piezoelectric linear motor prototype. The motor can reach a speed of up to 25.5 mm/s under the excitation of a signal voltage of 120 V and a frequency of 390 Hz. The result shows that the proposed piezoelectric stack linear motor achieves great performance. It can run smoothly over a large speed range, which makes it suitable for various precision control applications in medical imaging, aerospace, precision machinery and many other fields.
Keywords: Elliptical trajectory, linear motor, piezoelectric stack, rigid clamping.
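The longitudinal-shear composite drive produces an elliptical trajectory because the two stacks are excited out of phase; a minimal sketch with the abstract's 390 Hz drive frequency and assumed micrometre amplitudes:

```python
import numpy as np

# Elliptical motion of the driving foot from two stacks driven 90
# degrees out of phase: shear stack -> x, longitudinal stack -> y.
# The 390 Hz drive follows the abstract; the amplitudes are
# illustrative assumptions.
f = 390.0                      # drive frequency (Hz)
ax, ay = 1.5e-6, 1.0e-6        # shear / longitudinal amplitudes (m)
t = np.linspace(0, 1.0 / f, 200)
x = ax * np.sin(2 * np.pi * f * t)               # shear displacement
y = ay * np.sin(2 * np.pi * f * t + np.pi / 2)   # longitudinal, +90 deg
print(x[:3], y[:3])  # points tracing one elliptical cycle
```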
4124 Construction of Attitude Reference Benchmark for Test of Star Sensor Based on Precise Timing
Authors: Tingting Lu, Yonghai Wang, Haiyong Wang, Jiaqi Liu
Abstract:
To satisfy the need for outfield tests of star sensors, a method is put forward to construct a reference attitude benchmark. First, its basic principle is introduced. Then, all the separate conversion matrixes are deduced, which include: the conversion matrix responsible for the transformation from the Earth-Centered Inertial frame i to the Earth-centered Earth-fixed frame w according to the time of an atomic clock; the conversion matrix from frame w to the geographic frame t; and the matrix from frame t to the platform frame p. The attitude matrix of the benchmark platform relative to frame i can then be obtained using all three matrixes as multiplicative factors. Next, the attitude matrix of the star sensor relative to frame i is obtained once the mounting matrix from frame p to the star sensor frame s is calibrated, and the reference attitude angles for star sensor outfield tests can be calculated from the transformation from frame i to frame s. Finally, a computer program was written to solve the reference attitudes, and error curves were drawn for the three-axis attitude angles, whose absolute maximum error is just 0.25″. The analysis of each loop and the final simulation results show that the method of acquiring the absolute reference attitude by precise timing is feasible for star sensor outfield tests.
Keywords: Atomic time, attitude determination, coordinate conversion, inertial coordinate system, star sensor.
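A minimal sketch of the matrix chain i → w → t → p → s described above, ignoring precession, nutation and polar motion, with an assumed sidereal angle and identity leveling/mounting matrices; the frame conventions here are one common choice, not necessarily the paper's:

```python
import numpy as np

def rot_z(a):
    """Frame rotation by angle a about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def ecef_to_enu(lon, lat):
    """Rotation from the Earth-fixed frame w to a local
    East-North-Up geographic frame t at (lon, lat)."""
    sl, cl = np.sin(lon), np.cos(lon)
    sp, cp = np.sin(lat), np.cos(lat)
    return np.array([[-sl, cl, 0.0],
                     [-sp * cl, -sp * sl, cp],
                     [cp * cl, cp * sl, sp]])

gmst = np.deg2rad(100.0)                 # sidereal angle from atomic time (assumed)
lon, lat = np.deg2rad(116.3), np.deg2rad(39.9)   # assumed site location
R_i2w = rot_z(gmst)                      # inertial -> Earth-fixed (GMST only)
R_w2t = ecef_to_enu(lon, lat)            # Earth-fixed -> geographic
R_t2p = np.eye(3)                        # platform leveling matrix (assumed)
M_p2s = np.eye(3)                        # calibrated mounting matrix (assumed)
R_i2s = M_p2s @ R_t2p @ R_w2t @ R_i2w    # reference attitude of the sensor
print(R_i2s)
```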
4123 Applied Actuator Fault Accommodation in Flight Control Systems Using Fault Reconstruction Based FDD and SMC Reconfiguration
Authors: A. Ghodbane, M. Saad, J.-F. Boland, C. Thibeault
Abstract:
Historically, actuator redundancy was used to deal with faults occurring suddenly in flight systems. This technique was generally expensive and time consuming, and it involved increased weight and space in the system. Therefore, nowadays, the on-line fault diagnosis of actuators and their accommodation play a major role in the design of avionic systems. These approaches, known as Fault Tolerant Flight Control systems (FTFCs), are able to adapt to such sudden faults while keeping avionics systems lighter and less expensive. In this paper, an FTFC system based on the geometric approach and a Reconfigurable Flight Control (RFC) are presented. The geometric approach is used for cosmic-ray fault reconstruction, while Sliding Mode Control (SMC) based on Lyapunov stability theory is designed for the reconfiguration of the controller in order to compensate for the fault effect. Matlab®/Simulink® simulations are performed to illustrate the effectiveness and robustness of the proposed flight control system against actuators' faulty signals caused by cosmic rays. The results demonstrate the successful real-time implementation of the proposed FTFC system on a non-linear 6-DOF aircraft model.
Keywords: Actuators’ faults, Fault detection and diagnosis, Fault tolerant flight control, Sliding mode control, Geometric approach for fault reconstruction, Lyapunov stability.
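To give a flavor of the SMC reconfiguration idea, here is a minimal single-axis sketch: a sliding surface plus a smoothed switching term whose gain dominates a bounded additive actuator fault. The plant, gains and fault model are illustrative assumptions, not the paper's 6-DOF aircraft:

```python
import numpy as np

# Sliding mode control on a double integrator with a bounded
# additive actuator fault: s = de + lam*e, u = -lam*de - k*tanh(s/eps).
# Since k exceeds the fault bound, the state is driven onto s ~ 0
# and the tracking error decays despite the fault.
lam, k, dt = 2.0, 5.0, 0.001
x, dx = 0.0, 0.0                        # plant state
for step in range(5000):                # 5 s of simulated time
    r = 1.0                             # reference
    e, de = x - r, dx
    s = de + lam * e                    # sliding surface
    u = -lam * de - k * np.tanh(s / 0.05)   # smoothed switching control
    fault = 0.3 * np.sin(0.01 * step)   # bounded additive actuator fault
    ddx = u + fault                     # double-integrator dynamics
    dx += ddx * dt
    x += dx * dt
print(x)  # ~1.0: the fault effect is compensated
```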
4122 Libretto Thematology in Rossini's Operas and Its Formation by the Composer
Authors: Areti Tziboula, Anna-Maria Rentzeperi-Tsonou
Abstract:
The present study examines the way Gioachino Rossini's librettos were selected and formed, demonstrating the evolutionary trajectory of the composer during his operatic career. Rossini, a dominant figure in early 19th century Italian opera, was demanding in his choice of librettos and had a preference for subjects inspired by European literature, of his time or earlier. He began his operatic career with farse and opere buffe, continued mainly with opere serie, and ended it with a grand opera that conforms to the spirit of romanticism as manifested in the Paris of his time. His farse, opere buffe and comic operas in general are representative of the trends of the time: in some the irrational and exaggeration prevail, in others upheavals; others are semi-serious and emotional with a happy ending, and others are comedies with more realistic characters, but usually the styles are mixed and complement each other. The stories that refer to his own era unfold by mocking human characters, beliefs, attitudes and their expression in everyday habits, satirizing current affairs, presenting innovative elements of dramatic intervention and dealing with a variety of social and national issues. Count Ory, his final comic work, is a complex, witty, urban comic opera entwined with romantic sensitivity. The themes he chose for his opere serie are characterized by tragic passion; they take place in the era of the Trojan War, the Roman Empire, the Middle Ages, and the Age of the Crusades, and are set in Italy, England, Poland, Greece, Switzerland, Israel and Egypt. In his early works he sketches the characters remotely, objectively and with static, reflexive emotional expression and a happy ending. He then continued with operas for the San Carlo Theater, which are characterized by experimentation and innovation, and ended his Italian operatic career with the ostensibly backward but in fact tragic Semiramide, followed in Paris by William Tell, his ultimate dramatic achievement. There are indirect references to burning issues of his era, but the censorship of the time did not allow direct reference to topics that would upset the status quo. In addition, Rossini lived in a period of peace after the Napoleonic Wars, and by temperament he resisted openly engaging in political strife. Furthermore, the need for survival necessitated the search for the most profitable contracts. In conclusion, Rossini, a liberal personality, shaped his librettos without interruptions or setbacks, with ideas that came out after much thought and with a strong sense of purpose. He moved from the moral and aesthetic clarity of the classical tradition of his early works to a more elaborate and morally ambiguous romantic style, in a moderate and hesitant way.
Keywords: Gioachino Rossini, libretto, nineteenth century music, opera.
4121 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study
Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari
Abstract:
The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aimed at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions, and the subsequent selection of measures aimed at improving the energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building is made up of a basement and three floors, with a total floor area of about 3,000 square meters. The first step has been the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which allows simulating the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With this model it is possible to obtain quite accurate and reliable results that allow identifying effective building-HVAC system combinations. The second step consists of using the output data obtained as input to RETScreen, which enables comparing different system configurations from the energy, environmental and financial points of view, with an analysis of investment and of operation and maintenance costs, thus allowing determination of the economic benefit of possible interventions. The classical methodology often leads to the choice of conventional plant systems, while our calculation model provides a financial-economic assessment for innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model for different design options.
Keywords: Energy, Buildings, Systems, Evaluation.
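The economic comparison step can be pictured with a minimal net-present-value sketch; all investment and cost figures here are invented placeholders, not values from the case study:

```python
# Net present value of total costs for two building/plant
# configurations, from assumed investment, yearly energy + O&M
# costs, discount rate and lifetime. The lower NPV of costs wins.
def npv_cost(investment, yearly_cost, rate=0.05, years=20):
    return investment + sum(yearly_cost / (1 + rate) ** t
                            for t in range(1, years + 1))

conventional = npv_cost(investment=80_000, yearly_cost=12_000)
innovative = npv_cost(investment=140_000, yearly_cost=6_500)
print(round(conventional), round(innovative))
```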
4120 Media Facades Utilization for Sustainable Tourism Promotion in Historic Places: Case Study of the Walled City of Famagusta, North Cyprus
Authors: Nikou Javadi, Uğur Dağlı
Abstract:
The importance of culture and tourism in the attractiveness and competitiveness of countries is central, and many regions are showcasing their cultural assets, tangible and intangible, as a means to create comparative advantages in tourism and to produce a distinctive place in response to the pressures of globalization. Culture and tourism are interlinked because of their obvious combination and growth potential. Cultural tourism is a crucial, fast-growing global tourism market. Regions can develop significant relations between culture and tourism to increase their attractiveness as places to visit, live and invest in, increasing their competitiveness. Accordingly, a new and creative approach to historical areas as cultural value-based destinations can improve their potential for tourism promotion. Furthermore, in the 21st century, media have become the most important factor affecting the development of urban cities, including public places. As a result of the digital revolution, the re-imaging and re-linking of public places by media are essential to create more interaction between public spaces and users; interactive media displays and urban screens are among the most important of these media. This interaction can transform an urban space from being neglected to being a more interactive space for users, especially pedestrians. The paper focuses on the Walled City of Famagusta, which, like many other historic quarters elsewhere in the world, is in a process of decay and deterioration, and whose functionally distinctive areas are severely threatened by physical, functional, locational and image obsolescence to varying degrees. The focus on the future development of this area through tourism promotion can therefore be an appropriate decision for enhancing the monuments and the spatial quality of the Walled City of Famagusta. This paper aims to identify the effects of these new digital factors in transforming public spaces, especially in historic urban areas, to promote creative tourism. Accordingly, two different analysis methods are used, as well as a theoretical review. The first is a case study on site and the second is a close-ended questionnaire, testing many of the concepts raised in this paper. The physical analysis on site was carried out in order to evaluate the walled city's restoration for touristic purposes, while the theoretical review provides background to the subject and clarifies the factors that attract tourists.
Keywords: Historical areas, Media Facade, Sustainable tourism, Walled city of Famagusta.
4119 Non-Methane Hydrocarbons Emission during the Photocopying Process
Authors: Kiurski S. Jelena, Aksentijević M. Snežana, Kecić S. Vesna, Oros B. Ivana
Abstract:
The proliferation of electronic equipment in the photocopying environment has not only improved work efficiency, but has also changed indoor air quality. Considering the amount of photocopying employed, indoor air quality might be worse than in general office environments. Determining the contribution of any type of equipment to indoor air pollution is a complex matter. Non-methane hydrocarbons are known to play an important role in air quality due to their high reactivity. The presence of hazardous pollutants in indoor air was detected in a photocopying shop in Novi Sad, Serbia. Air samples were collected and analyzed for five days, during the 8-hour working time, in three time intervals, and three different sampling points were determined. Using a multiple linear regression model and the software package STATISTICA 10, the concentrations of occupational hazards and microclimate parameters were mutually correlated. Based on the obtained multiple coefficients of determination (0.3751, 0.2389 and 0.1975), a weak positive correlation between the observed variables was determined. Small values of the parameter F indicated that there was no statistically significant difference between the concentration levels of non-methane hydrocarbons and the microclimate parameters. The results showed that the variable could be represented by the general regression model: y = b0 + b1xi1 + b2xi2. The obtained regression equations allow measuring the quantitative agreement between the variables and thus obtaining more accurate knowledge of their mutual relations.
Keywords: Indoor air quality, multiple regression analysis, non-methane hydrocarbons, photocopying process.
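A minimal sketch of fitting the general regression model y = b0 + b1xi1 + b2xi2 by least squares and computing the multiple coefficient of determination; the numbers are hypothetical stand-ins for NMHC concentrations and two microclimate parameters:

```python
import numpy as np

# Hypothetical data: NMHC concentration (y) vs. two microclimate
# parameters (x1 temperature, x2 relative humidity).
x1 = np.array([22.1, 23.4, 24.0, 22.8, 25.1, 24.6])
x2 = np.array([41.0, 44.0, 47.0, 43.0, 50.0, 48.0])
y = np.array([0.31, 0.35, 0.38, 0.33, 0.41, 0.40])

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept
b, *_ = np.linalg.lstsq(X, y, rcond=None)         # b0, b1, b2
y_hat = X @ b
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print("b0, b1, b2 =", b)
print("R^2 =", r2)   # the multiple coefficient of determination
```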
4118 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing
Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima
Abstract:
Reverse osmosis (RO) membranes have been widely used in desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed. Therefore, direct chlorine treatment or chlorine washing will be an option for preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation can be controlled by chlorine washing, expensive pretreatment for particle removal can be removed or simplified. The objective of this study was to determine the effective hypochlorite washing conditions required for controlling biofilm formation and inorganic particle accumulation on an RO membrane in a continuous flow channel with RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution, and fluorescence intensity was used to quantify biofilm on the membrane surface. After 48 h of soaking the membranes in high fouling potential waters, the fluorescence intensity decreased from 470 to 0 under the following washing conditions: 10 mg/L chlorine concentration, 2 times/d washing interval, and 30 min washing time. The chlorine concentration required to control biofilm formation decreased as the chlorine concentration (0.5–10 mg/L), the washing interval (1–4 times/d), or the washing time (1–30 min) increased. For the sample solutions used in the study, a 10 mg/L chlorine concentration with a washing interval of 2 times/d and a washing time of 5 min was required for biofilm control. The optimum chlorine washing conditions obtained from the soaking experiments proved to be applicable also to controlling biofilm formation in continuous flow experiments. Moreover, chlorine washing employed in controlling biofilm with suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was reduced by 79% by continuous washing with a 10 mg/L free chlorine concentration, and the inorganic accumulation decreased by 58%, to levels similar to those for pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that the inhibition of biofilm growth can almost completely prevent further particle accumulation. In addition, effective hypochlorite washing conditions that can control both biofilm formation and particle accumulation could be achieved.
Keywords: Biofouling control, hypochlorite, reverse osmosis, washing condition optimization.
4117 BeamGA Median: A Hybrid Heuristic Search Approach
Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte
Abstract:
The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but different gene orders can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before for solving the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and enhance the time convergence of the genetic approach. We were able to handle permutations with a large number of genes, within acceptable time performance and with the same or better accuracy compared to existing algorithms.
Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.
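Exact reversal distances require a Hannenhalli-Pevzner-style computation, so the toy sketch below substitutes the simpler breakpoint distance as the objective; it illustrates only the GA search structure (seeded, like BeamGA, with the input genomes), not the paper's actual algorithm:

```python
import random

def adjacencies(p):
    """Signed adjacencies of a permutation, with caps 0 and n+1;
    (a, b) and (-b, -a) denote the same adjacency."""
    ext = [0] + list(p) + [len(p) + 1]
    return {min((a, b), (-b, -a)) for a, b in zip(ext, ext[1:])}

def breakpoints(p, q):
    """Breakpoint distance: a cheap stand-in for reversal distance."""
    return len(p) + 1 - len(adjacencies(p) & adjacencies(q))

def median_score(cand, genomes):
    # Sum of distances from the candidate median to the three genomes.
    return sum(breakpoints(cand, g) for g in genomes)

def ga_median(genomes, pop_size=40, gens=200, seed=1):
    """Tiny GA refining candidate medians by random signed reversals."""
    rng = random.Random(seed)
    n = len(genomes[0])
    pop = [list(g) for g in genomes] + [
        rng.sample(range(1, n + 1), n) for _ in range(pop_size - 3)]
    for _ in range(gens):
        pop.sort(key=lambda c: median_score(c, genomes))
        pop = pop[:pop_size // 2]                   # selection
        children = []
        for c in pop:
            child = list(c)
            i, j = sorted(rng.sample(range(n), 2))
            child[i:j + 1] = [-x for x in reversed(child[i:j + 1])]  # signed reversal
            children.append(child)
        pop += children
    best = min(pop, key=lambda c: median_score(c, genomes))
    return best, median_score(best, genomes)

g1, g2, g3 = [1, 2, 3, 4, 5], [1, -3, -2, 4, 5], [2, 1, 3, -5, -4]
print(ga_median((g1, g2, g3)))
```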
4116 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air
Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli
Abstract:
Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over recent years, its atmosphere has experienced an increasing anthropogenic load. A numerical modeling method is used for the study of Tbilisi's atmospheric air pollution. By means of a 3D non-linear, non-steady numerical model, the peculiarities of city atmosphere pollution are investigated during background western light air. Spatial and temporal changes of dust concentration are determined. The zones of high, average and low pollution, dust accumulation areas, transfer directions, etc. are identified. The numerical modeling shows that the process of air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the city mains. In the interval 06:00-09:00 there is intensive growth, during 09:00-15:00 constancy or a weak decrease, during 18:00-21:00 an increase, and from 21:00 to 06:00 a reduction of the dust concentrations. The highly polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM is equal to twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.
Keywords: Numerical modelling, source of pollution, dust propagation, western light air.
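A toy two-dimensional analogue of the transport model sketched above (one explicit upwind advection-diffusion step with a point dust source and a light westerly wind); the grid, wind and diffusivity values are illustrative assumptions:

```python
import numpy as np

def advect_diffuse_step(c, u, v, k, q, dx, dt):
    """One explicit time step of the 2D advection-diffusion equation
    dc/dt + u dc/dx + v dc/dy = k laplacian(c) + q, with upwind
    advection on a uniform periodic grid. A toy analogue of the 3D
    non-steady transport model used in the paper."""
    cx = np.where(u > 0, c - np.roll(c, 1, axis=1),
                  np.roll(c, -1, axis=1) - c) / dx
    cy = np.where(v > 0, c - np.roll(c, 1, axis=0),
                  np.roll(c, -1, axis=0) - c) / dx
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) + np.roll(c, 1, 1)
           + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (-u * cx - v * cy + k * lap + q)

n = 64
c = np.zeros((n, n))
q = np.zeros((n, n)); q[32, 20] = 1.0   # hypothetical dust source (road)
u, v = 1.0, 0.0                         # light westerly background wind (m/s)
for _ in range(200):                    # stability: dt <= dx/u and dx^2/(4k)
    c = advect_diffuse_step(c, u, v, k=0.5, q=q, dx=10.0, dt=2.0)
print(c.max())                          # peak concentration downwind of the source
```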
4115 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration
Authors: Elżbieta Antczak
Abstract:
Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas, as an integral part of sustainable urban development, present a major opportunity for improving urban environments, quality of life and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, the regional diversification of greenery in the cities of Poland. The analysis includes location quotients, the Lorenz curve, the locational Gini index, and a synthetic index of greenery, together with spatial statistics tools: (1) to verify the occurrence of strong concentration or dispersion of the phenomenon in time and space depending on the variable category, and (2) to study whether the level of greenery depends on spatial autocorrelation. The data cover the greatest Polish cities, categories of urban greenery (parks, lawns, street greenery, green areas on housing estates, cemeteries, and forests) and the time span 2004-2015. According to the obtained estimations, most cities in Poland are already taking measures to become greener. However, there are still many barriers to well-balanced urban greenery development in the country (e.g. uncontrolled urban sprawl, poor management, and the lack of spatial urban planning systems).
Keywords: Greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure.
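Two of the listed measures are easy to make concrete; here is a minimal sketch of the location quotient and a Lorenz-curve-based locational Gini index, with invented city data:

```python
import numpy as np

def location_quotient(green_i, pop_i):
    """LQ_i = (city i's share of national greenery) /
    (city i's share of national population)."""
    g = np.asarray(green_i, float); p = np.asarray(pop_i, float)
    return (g / g.sum()) / (p / p.sum())

def locational_gini(green_i, pop_i):
    """Locational Gini index: 0 = greenery distributed exactly like
    population; values near 1 = strong spatial concentration."""
    g = np.asarray(green_i, float) / np.sum(green_i)
    p = np.asarray(pop_i, float) / np.sum(pop_i)
    order = np.argsort(g / p)            # sort by LQ for the Lorenz curve
    g, p = g[order], p[order]
    cg = np.concatenate([[0], np.cumsum(g)])
    cp = np.concatenate([[0], np.cumsum(p)])
    area = np.sum((cp[1:] - cp[:-1]) * (cg[1:] + cg[:-1]) / 2)  # under Lorenz curve
    return 1 - 2 * area

# Hypothetical park area (ha) and population (thousands) for five cities.
green = [520, 310, 450, 120, 90]
pop = [1700, 770, 640, 460, 400]
print(location_quotient(green, pop))
print(locational_gini(green, pop))
```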
4114 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems
Authors: Kyoung-jae Kim
Abstract:
Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. The information includes user context, such as location, time and interest, for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected because user interest cannot be captured automatically without the user's approval process. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommendation systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models in classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences in classification results. The results of the McNemar test also show that CHAID performs better than the other models, with statistical significance.
Keywords: Customer need type, data mining techniques, recommender system, personalization, mobile user.
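A minimal sketch of the McNemar test used above, computed from the 2x2 disagreement counts of two classifiers; the counts are hypothetical:

```python
from math import erf, sqrt

def mcnemar(b, c):
    """McNemar test from a 2x2 disagreement table: b = cases where
    classifier A is right and B wrong, c = the opposite. Uses the
    continuity-corrected chi-square statistic (1 df)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # p-value for chi-square with 1 df via the standard normal CDF:
    z = sqrt(chi2)
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return chi2, p

# Hypothetical disagreements between CHAID and a neural network.
chi2, p = mcnemar(b=31, c=14)
print(chi2, p)   # p < 0.05 -> the accuracy difference is significant
```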
4113 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine the important factors that will improve the process capability index of the injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
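For reference, the process capability indices mentioned above are simple functions of the sample mean, sample standard deviation and the specification limits; a minimal sketch with invented shrinkage data and assumed limits:

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Cp compares the spec width to the 6-sigma process spread;
    Cpk also penalizes an off-center process mean."""
    x = np.asarray(samples, float)
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical shrinkage measurements (%) against assumed spec limits.
shrink = [0.0030, 0.0032, 0.0031, 0.0029, 0.0033, 0.0031]
print(process_capability(shrink, lsl=0.0020, usl=0.0040))
```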
4112 Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States
Authors: Rita U. Onolemhemhen, Saheed L. Bello, Akin P. Iwayemi
Abstract:
The effectiveness of energy demand policy depends on identifying the key drivers of energy demand both in the short run and in the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China and the USA using the Vector Error Correction Model (VECM) approach. This study employed annual time series data on energy consumption (ED), real gross domestic product (GDP) per capita (RGDP), real energy prices (P) and urbanization (N) for a thirty-six-year sample period. The time-series data used are sourced from the World Bank's World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results from the study show that all the independent variables (income, urbanization, and price) substantially affect long-run energy consumption in Nigeria, the USA and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Urbanization is a key factor in energy demand; it is therefore recommended that more attention be given to the development of rural communities to reduce the inflow of migrants into urban communities, which causes the increase in energy demand, and that energy excesses be penalized while energy management is incentivized.
Keywords: Economic growth, energy demand, income, real GDP, urbanization, VECM.
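A minimal sketch of the VECM estimation step using statsmodels; the four series are simulated stand-ins for the 36 annual observations, so the printed coefficients illustrate only the mechanics (long-run cointegrating vector and short-run adjustment speeds), not the paper's results:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Simulated stand-ins for 36 annual observations of log energy
# demand (ED), log real GDP per capita (RGDP), log real energy
# price (P) and urbanization (N), sharing one stochastic trend.
rng = np.random.default_rng(0)
n = 36
trend = np.cumsum(rng.normal(0.02, 0.01, n))
data = pd.DataFrame({
    "ED": trend + rng.normal(0, 0.02, n),
    "RGDP": trend + rng.normal(0, 0.02, n),
    "P": np.cumsum(rng.normal(0, 0.03, n)),
    "N": np.linspace(0.30, 0.55, n) + rng.normal(0, 0.005, n),
})

model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.beta)    # long-run cointegrating vector (demand drivers)
print(res.alpha)   # short-run adjustment speeds toward equilibrium
```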
4111 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Studies of road traffic accidents in urban zones by traditional means represent a very high investment of time and money; additionally, the results are not current. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article we identify the zones, roads and specific times in Mexico City (CDMX) in which the largest numbers of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, using data mining techniques in Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
Keywords: Data mining, k-means, road traffic accidents, Waze, Weka.
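A minimal sketch of the clustering pipeline described above: EM (as a Gaussian mixture scored by BIC) selects the number of clusters, then k-means groups the reports; the coordinates are simulated stand-ins for Waze accident reports:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Hypothetical Waze-style accident reports: (longitude, latitude).
rng = np.random.default_rng(42)
pts = np.vstack([rng.normal([-99.17, 19.42], 0.01, (100, 2)),
                 rng.normal([-99.10, 19.36], 0.01, (80, 2)),
                 rng.normal([-99.20, 19.30], 0.01, (60, 2))])

# EM (Gaussian mixture) scored by BIC to pick the number of clusters...
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(pts).bic(pts)
        for k in range(1, 8)}
best_k = min(bics, key=bics.get)

# ...then k-means groups the reports into accident concentration zones.
labels = KMeans(n_clusters=best_k, n_init=10,
                random_state=0).fit_predict(pts)
print(best_k, np.bincount(labels))
```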