Search results for: time multiplier setting.
4405 Compressive Strength and Workability Characteristics of Low-Calcium Fly ash-based Self-Compacting Geopolymer Concrete
Authors: M. Fareed Ahmed, M. Fadhil Nuruddin, Nasir Shafiq
Abstract:
Due to growing environmental concerns about the cement industry, alternative cement technologies have become an area of increasing interest. It is now believed that new binders are indispensable for enhanced environmental and durability performance. Self-compacting geopolymer concrete is an innovative and improved approach to concreting that requires no vibration for placement and completely eliminates ordinary Portland cement. This paper documents the assessment of the compressive strength and workability characteristics of low-calcium fly ash-based self-compacting geopolymer concrete. The essential workability properties of the freshly prepared self-compacting geopolymer concrete, such as filling ability, passing ability and segregation resistance, were evaluated using the slump flow, V-funnel, L-box and J-ring test methods. The fundamental requirements of high flowability and segregation resistance specified by the EFNARC guidelines on self-compacting concrete were satisfied. In addition, compressive strength was determined and the test results are included here. This paper also reports the effect of extra water, curing time and curing temperature on the compressive strength of self-compacting geopolymer concrete. The test results show that extra water in the concrete mix plays a significant role, and that longer curing times and curing the concrete specimens at higher temperatures result in higher compressive strength.
Keywords: Fly ash, geopolymer concrete, self-compacting concrete, self-compacting geopolymer concrete.
4404 Measurement of Real Time Drive Cycle for Indian Roads and Estimation of Component Sizing for HEV using LABVIEW
Authors: Varsha Shah, Patel Pritesh, Patel Sagar, Prasanta Kundu, Ranjan Maheshwari
Abstract:
The performance of a vehicle depends on the driving pattern and the drive train configuration. Driving patterns depend on traffic conditions, road conditions and driver behavior. HEV design is carried out under constraints such as vehicle operating range, acceleration, deceleration, maximum speed and road grades, which are directly related to the driving patterns. Therefore, a detailed study of HEV performance over different drive cycles is required for the selection and sizing of HEV components. Simple hardware was designed to measure the velocity-versus-time profile of the vehicle by operating it on Indian roads under real traffic conditions. To size the HEV components, a detailed dynamic model of the vehicle was developed, considering the inertia of rotating components such as the wheels, drive chain, engine and electric motor. Using the vehicle model and data from different Indian drive cycles, the total tractive power demanded by the vehicle and the power supplied by the individual components were calculated. Using this information, the selection and sizing of the HEV components were carried out so that the HEV performs efficiently under hostile driving conditions. The complete analysis was carried out in LabVIEW.
Keywords: BLDC motor, driving cycle, LabVIEW, ultracapacitors, vehicle dynamics.
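The tractive-power calculation described above follows the standard road-load equation P = (m·a + Cr·m·g + ½·ρ·Cd·Af·v²)·v. A minimal sketch of this step is given below; all vehicle parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tractive_power(v, m=1200.0, cd=0.32, af=2.2, cr=0.012,
                   rho=1.2, g=9.81, dt=1.0):
    """Instantaneous tractive power (W) over a measured velocity-vs-time
    drive cycle sampled every dt seconds. All vehicle parameters here
    (mass, drag, rolling resistance) are illustrative assumptions."""
    v = np.asarray(v, dtype=float)
    a = np.gradient(v, dt)                 # acceleration from the drive cycle
    f_aero = 0.5 * rho * cd * af * v**2    # aerodynamic drag force
    f_roll = cr * m * g * np.ones_like(v)  # rolling resistance force
    f_inertia = m * a                      # inertial force
    return (f_aero + f_roll + f_inertia) * v   # P = F * v

# Example: a short synthetic urban velocity trace (m/s), sampled at 1 Hz
cycle = [0, 2, 5, 8, 10, 10, 9, 6, 3, 0]
p = tractive_power(cycle)
print(f"peak demand {p.max()/1e3:.1f} kW, mean {p.mean()/1e3:.1f} kW")
```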
4403 A Study on Bilingual Semantic Processing: Category Effects and Age Effects
Authors: Lai Yi-Hsiu
Abstract:
The present study addressed the nature of bilingual semantic processing in Mandarin Chinese and Southern Min and examined category effects and age effects. Nineteen bilingual adults of Mandarin Chinese and Southern Min, nine monolingual seniors of Mandarin Chinese, and ten monolingual seniors of Southern Min in Taiwan individually completed two semantic tasks: picture naming and category fluency. The instruments for the naming task were sixty black-and-white pictures: thirty-five object pictures and twenty-five action pictures. The category fluency task likewise covered two semantic categories, objects (or nouns) and actions (or verbs). The reaction time for each picture/question was additionally calculated and analyzed. Oral productions in Mandarin Chinese and in Southern Min were compared and discussed to examine the category effects and age effects. The results of the category fluency task indicated that the informational content produced by the seniors was comparatively degraded, so they produced fewer semantic-lexical items. Significant group differences were also found in the reaction time results. Category effects were significant for both adults and seniors in the semantic fluency task. The findings of the present study help characterize the nature of the bilingual semantic processing of adults and seniors, and contribute to the fields of contrastive and corpus linguistics.
Keywords: Bilingual semantic processing, aging, Mandarin Chinese, Southern Min.
4402 Biochemical Characteristics of Sorghum Flour Fermented and/or Supplemented with Chickpea Flour
Authors: Omima E. Fadlallah, Abdullahi H. El Tinay, Elfadil E. Babiker
Abstract:
Sorghum flour was supplemented with 15 and 30% chickpea flour. The sorghum flour and the supplemented blends were fermented at 35 °C for 0, 8, 16, and 24 h. Changes in pH, titrable acidity, total soluble solids, protein content, in vitro protein digestibility and amino acid composition were investigated during fermentation and/or after supplementation of the sorghum flour with chickpea. The pH of the fermenting material decreased sharply with a concomitant increase in titrable acidity. The total soluble solids remained unchanged with progressive fermentation time. The protein content of the sorghum cultivar was found to be 9.27% and that of chickpea 22.47%. The protein content of the sorghum cultivar after supplementation with 15 and 30% chickpea increased significantly (P ≤ 0.05) to 11.78 and 14.55%, respectively. The protein digestibility also increased after fermentation, from 13.35 to 30.59 and 40.56% for the two supplements, respectively. Further increases in protein content and digestibility were observed when supplemented and unsupplemented samples were fermented for different periods of time. Cooking of the fermented samples was found to increase the protein content slightly and to decrease digestibility for both supplements. The amino acid content of the fermented, and fermented and cooked, supplements was determined. Supplementation was found to increase the lysine and threonine content. Cooking following fermentation decreased lysine, isoleucine, valine and the sulfur-containing amino acids.
Keywords: Amino acid, chickpea, cooking, fermentation, protein, sorghum.
4401 The Requirements of Developing a Framework for Successful Adoption of Quality Management Systems in the Construction Industry
Authors: Mohammed Ali Ahmed, Vaughan Coffey, Bo Xia
Abstract:
Quality management systems (QMSs) in the construction industry are often implemented to ensure that sufficient effort is made by companies to achieve the required levels of quality for clients. Attainment of these quality levels can result in greater customer satisfaction, which is fundamental to ensuring the long-term competitiveness of construction companies. However, the construction sector still lags behind other industries in its successful adoption of QMSs, due to the relative lack of acceptance of the benefits of these systems among industry stakeholders, as well as other barriers related to implementing them. Thus, there is a critical need for a detailed and comprehensive exploration of the adoption of QMSs in the construction sector. This paper comprehensively investigates, in the construction sector setting, the impacts of all the salient factors surrounding successful implementation of QMSs in building organizations, especially those of external factors. This study is part of an ongoing PhD project, which aims to develop a new framework that integrates both internal and external factors affecting QMS implementation. To achieve the paper's aim and objectives, interviews will be conducted to define the external factors influencing the adoption of QMSs and to obtain holistic critical success factors (CSFs) for implementing these systems. In the next stage of data collection, a questionnaire survey will be developed to investigate the prime barriers facing the adoption of QMSs, the CSFs for their implementation, and the external factors affecting the adoption of these systems. Following the survey, case studies will be undertaken to validate and explain in greater detail the real effects of these factors on QMS adoption. Specifically, this paper evaluates the effects of the external factors in terms of their impact on implementation success within the selected case studies. Using findings drawn from analyzing the data obtained through these various approaches, specific recommendations for the successful implementation of QMSs will be presented, and an operational framework will be developed. Finally, through a focus group, the findings of the study and the newly developed framework will be validated. Ultimately, this framework will be made available to the construction industry to facilitate the greater adoption and implementation of QMSs. In addition, deployment of the applicable recommendations suggested by the study will be shared with the construction industry to more effectively help construction companies implement QMSs and overcome the barriers experienced by businesses, thus promoting the achievement of higher levels of quality and customer satisfaction.
Keywords: Barriers, critical success factors, external factors, internal factors, quality management systems.
4400 Construction of Attitude Reference Benchmark for Test of Star Sensor Based on Precise Timing
Authors: Tingting Lu, Yonghai Wang, Haiyong Wang, Jiaqi Liu
Abstract:
To satisfy the need for outfield tests of star sensors, a method is put forward to construct a reference attitude benchmark. First, its basic principle is introduced. Then, all the separate conversion matrices are deduced: the matrix for the transformation from the Earth-centered inertial frame i to the Earth-centered Earth-fixed frame w according to the time of an atomic clock, the matrix from frame w to the geographic frame t, and the matrix from frame t to the platform frame p; the attitude matrix of the benchmark platform relative to frame i is then obtained as the product of these three matrices. Next, the attitude matrix of the star sensor relative to frame i is obtained once the mounting matrix from frame p to the star sensor frame s has been calibrated, and the reference attitude angles for star sensor outfield tests can be calculated from the transformation from frame i to frame s. Finally, a computer program was written to solve the reference attitudes, and error curves were drawn for the three axis attitude angles, whose maximum absolute error is just 0.25″. The analysis of each loop and the final simulation results show that acquiring the absolute reference attitude by precise timing is feasible for star sensor outfield tests.
Keywords: Atomic time, attitude determination, coordinate conversion, inertial coordinate system, star sensor.
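The chained coordinate conversions described above reduce to a product of rotation matrices. The sketch below illustrates the composition only; simple single-axis rotations stand in for the real ECI-to-ECEF, ECEF-to-geographic and platform transformations, which depend on atomic-clock time, site coordinates and leveling data not given here.

```python
import numpy as np

def rot_z(angle):
    """Elementary rotation about the z-axis (rad)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

# Illustrative stand-ins for the matrices in the paper: the real C_wi
# depends on atomic-clock time (Earth rotation angle), C_tw on site
# latitude/longitude, C_pt on platform leveling; values here are assumed.
earth_rotation_angle = np.deg2rad(50.0)
C_wi = rot_z(earth_rotation_angle)   # ECI (i) -> ECEF (w)
C_tw = rot_z(np.deg2rad(120.0))      # ECEF (w) -> geographic (t), simplified
C_pt = np.eye(3)                     # geographic (t) -> platform (p), leveled

C_pi = C_pt @ C_tw @ C_wi            # platform attitude relative to frame i

# With a calibrated mounting matrix C_sp, the star-sensor reference
# attitude follows the same chain:
C_sp = np.eye(3)                     # assumed mounting calibration
C_si = C_sp @ C_pi
print(np.round(C_si, 3))
```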
4399 Applied Actuator Fault Accommodation in Flight Control Systems Using Fault Reconstruction Based FDD and SMC Reconfiguration
Authors: A. Ghodbane, M. Saad, J.-F. Boland, C. Thibeault
Abstract:
Historically, actuator redundancy was used to deal with faults occurring suddenly in flight systems. This technique was generally expensive and time consuming, and involved increased weight and space in the system. Therefore, nowadays, the on-line fault diagnosis of actuators and fault accommodation play a major role in the design of avionic systems. These approaches, known as Fault Tolerant Flight Control systems (FTFCs), are able to adapt to such sudden faults while keeping avionics systems lighter and less expensive. In this paper, an FTFC system based on the geometric approach and a Reconfigurable Flight Control (RFC) scheme is presented. The geometric approach is used for cosmic-ray fault reconstruction, while a Sliding Mode Control (SMC) law based on Lyapunov stability theory is designed to reconfigure the controller in order to compensate for the fault effect. Matlab®/Simulink® simulations are performed to illustrate the effectiveness and robustness of the proposed flight control system against faulty actuator signals caused by cosmic rays. The results demonstrate the successful real-time implementation of the proposed FTFC system on a non-linear 6-DOF aircraft model.
Keywords: Actuators’ faults, Fault detection and diagnosis, Fault tolerant flight control, Sliding mode control, Geometric approach for fault reconstruction, Lyapunov stability.
4398 Libretto Thematology in Rossini's Operas and Its Formation by the Composer
Authors: Areti Tziboula, Anna-Maria Rentzeperi-Tsonou
Abstract:
The present study examines the way Gioachino Rossini's librettos were selected and formed, demonstrating the evolutionary trajectory of the composer during his operatic career. Rossini, a dominant figure in early 19th-century Italian opera, was demanding in his choice of librettos and preferred subjects inspired by European literature, of his time or earlier. He began his operatic career with farse and opere buffe, but continued mainly with opere serie, ending it with a grand opera that conforms to the spirit of romanticism as manifested in the Paris of his time. His farse, opere buffe and comic operas in general are representative of the trends of the time: in some, the irrational and exaggeration prevail; in others, upheavals; others are semi-serious and emotional with a happy ending; and others are comedies with more realistic characters; but usually the styles are mixed and complement each other. The stories that refer to his own era unfold by mocking human characters, beliefs, attitudes and their expression in everyday habits, satirizing current affairs, presenting innovative elements of dramatic intervention and dealing with a variety of social and national issues. Count Ory, his final comic work, is a complex, witty urban comic opera entwined with romantic sensitivity. The themes he chose for his opere serie are characterized by tragic passion; they take place in the era of the Trojan War, the Roman Empire, the Middle Ages and the Age of the Crusades, and are set in Italy, England, Poland, Greece, Switzerland, Israel and Egypt. In his early works he sketches the characters remotely, objectively, with static, reflexive emotional expression and a happy ending. He then continued with operas for the San Carlo Theater, characterized by experimentation and innovation, ending his Italian operatic career with the ostensibly backward but in fact tragic Semiramis, followed in Paris by William Tell, his ultimate dramatic achievement. There are indirect references to burning issues of his era, but the censorship of the time did not allow direct reference to topics that would upset the status quo. In addition, Rossini lived in a period of peace after the Napoleonic Wars, and by temperament he resisted openly engaging in political strife. Furthermore, the need for survival necessitated the search for more profitable contracts. In conclusion, Rossini, a liberal personality, shaped his librettos without interruptions or setbacks, with ideas that emerged after much thought and with a strong sense of purpose. He moved from the moral and aesthetic clarity of the classical tradition of his early works to a more elaborate and morally ambiguous romantic style, in a moderate and hesitant way.
Keywords: Gioachino Rossini, libretto, nineteenth century music, opera.
4397 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach
Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo
Abstract:
In Liberia, many agricultural households cultivate cassava either for subsistence or to generate farm income. Cassava farming is concentrated in Nimba, a north-eastern county bordering two other economies: the Republics of Côte d'Ivoire and Guinea. With high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberia's agricultural policies to increase the competitiveness of valued agricultural crops, there is a need to examine the level of resource-use efficiency for many agricultural crops. However, information on the efficiency of many crops, including cassava, is scarce. Hence, this study applies an output-oriented method to assess the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate the study sample. From 216 cassava farmers, data on on-farm attributes and socio-economic and institutional factors were collected. Stochastic frontier models of production and revenue, using the translog functional form, were used to determine the level of revenue efficiency and its determinants. The results showed that most cassava farmers are male (60%). Many of the farmers are married, engaged or living with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), the average farm size is 1.34 hectares, and most cassava farmers did not access agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, corresponding to an average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean of 12.9%. This indicates that, on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiency. Among the significant determinants of revenue efficiency, age and group membership had negative effects on the revenue efficiency of cassava production, while farming experience, access to extension, formal education and the average wage rate had positive effects. The study recommends setting up and incentivizing farmer field schools for cassava farmers, primarily to share farming experience and to learn robust cultivation techniques of sustainable agriculture. Also, farm managers and farmers should consider a fixed wage rate in labor contracts for all stages of cassava farming.
Keywords: Economic efficiency, frontier production and revenue functions, Liberia, Nimba County, output-oriented, revenue efficiency.
4396 Non-Methane Hydrocarbons Emission during the Photocopying Process
Authors: Kiurski S. Jelena, Aksentijević M. Snežana, Kecić S. Vesna, Oros B. Ivana
Abstract:
The proliferation of electronic equipment in photocopying environments has not only improved work efficiency but also changed indoor air quality. Considering the amount of photocopying performed, indoor air quality may be worse than in general office environments. Determining the contribution of any type of equipment to indoor air pollution is a complex matter. Non-methane hydrocarbons are known to play an important role in air quality due to their high reactivity. The presence of hazardous pollutants in indoor air was detected in a photocopying shop in Novi Sad, Serbia. Air samples were collected and analyzed for five days, during the 8-h working day, in three time intervals and at three different sampling points. Using a multiple linear regression model and the software package STATISTICA 10, the concentrations of occupational hazards and microclimate parameters were mutually correlated. Based on the obtained multiple coefficients of determination (0.3751, 0.2389 and 0.1975), a weak positive correlation between the observed variables was determined. Small values of the F statistic indicated that there was no statistically significant difference between the concentration levels of non-methane hydrocarbons and the microclimate parameters. The results showed that the variable could be represented by the general regression model y = b0 + b1xi1 + b2xi2. The obtained regression equations make it possible to measure the quantitative agreement between the variables and thus obtain more accurate knowledge of their mutual relations.
Keywords: Indoor air quality, multiple regression analysis, non-methane hydrocarbons, photocopying process.
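The quoted regression model y = b0 + b1xi1 + b2xi2 and its coefficient of determination can be reproduced in a few lines; the sketch below uses synthetic data standing in for the measured concentrations and microclimate parameters (the authors used STATISTICA 10).

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: x1 = temperature, x2 = relative humidity,
# y = non-methane hydrocarbon concentration (illustrative only)
x1 = rng.uniform(20, 30, 40)
x2 = rng.uniform(30, 60, 40)
y = 0.05 * x1 + 0.01 * x2 + rng.normal(0, 1.0, 40)

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix [1, x1, x2]
b, *_ = np.linalg.lstsq(X, y, rcond=None)         # least-squares b0, b1, b2

y_hat = X @ b
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                          # coefficient of determination
print(f"b0={b[0]:.3f}, b1={b[1]:.3f}, b2={b[2]:.3f}, R^2={r2:.4f}")
```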
4395 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability, rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.
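The Monte-Carlo calculation of PF contrasted with the point estimate method above reduces to sampling the uncertain inputs and counting the fraction of factor-of-safety realizations below 1.0. A minimal sketch follows, using a planar-failure FS expression and illustrative parameter distributions; a real slope analysis would use the limit equilibrium or numerical software discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative distributions for the shear-strength parameters (assumed)
cohesion = rng.normal(60.0, 15.0, n)          # kPa
phi = np.deg2rad(rng.normal(35.0, 3.0, n))    # friction angle
alpha = np.deg2rad(40.0)                      # sliding-plane dip (deterministic)
weight, area = 2000.0, 10.0                   # block weight (kN), plane area (m^2)

# Planar-failure factor of safety: resisting over driving forces
resisting = cohesion * area + weight * np.cos(alpha) * np.tan(phi)
driving = weight * np.sin(alpha)
fs = resisting / driving

pf = np.mean(fs < 1.0)                        # probability of failure
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.2%}")
```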
4394 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing
Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima
Abstract:
Reverse osmosis (RO) membranes have been widely used in desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed. Therefore, direct chlorine treatment or chlorine washing will be an option for preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation can be controlled by chlorine washing, expensive pretreatment for particle removal can be eliminated or simplified. The objective of this study was to determine the effective hypochlorite washing conditions required for controlling biofilm formation and inorganic particle accumulation on an RO membrane in a continuous flow channel with an RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution, and fluorescence intensity was used to quantify the biofilm on the membrane surface. After 48 h of soaking the membranes in high fouling potential waters, the fluorescence intensity decreased from 470 to 0 under the following washing conditions: 10 mg/L chlorine concentration, washing interval of 2 times/d, and 30 min washing time. Biofilm formation was increasingly suppressed as the chlorine concentration (0.5–10 mg/L), the washing interval (1–4 times/d), or the washing time (1–30 min) increased. For the sample solutions used in the study, a 10 mg/L chlorine concentration with a washing interval of 2 times/d and a 5 min washing time was required for biofilm control. The optimum chlorine washing conditions obtained from the soaking experiments proved applicable also to controlling biofilm formation in continuous flow experiments. Moreover, chlorine washing employed to control biofilm in the presence of suspended particles resulted in lower amounts of organic (0.03 mg/cm²) and inorganic (0.14 mg/cm²) deposits on the membrane than for sample water without chlorine washing (0.14 mg/cm² and 0.33 mg/cm², respectively). The amount of biofilm formed was reduced by 79% by continuous washing with 10 mg/L free chlorine, and the inorganic accumulation decreased by 58%, to levels similar to those for pure water with kaolin (0.17 mg/cm²) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that inhibiting biofilm growth can almost completely prevent further particle accumulation. In addition, effective hypochlorite washing conditions that control both biofilm formation and particle accumulation could be achieved.
Keywords: Biofouling control, hypochlorite, reverse osmosis, washing condition optimization.
4393 BeamGA Median: A Hybrid Heuristic Search Approach
Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte
Abstract:
The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but different gene order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not previously been applied to solving the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and enhance the time convergence of the genetic approach. We were able to handle permutations with a large number of genes within acceptable time performance and with the same or better accuracy compared to existing algorithms.
Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.
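A compressed sketch of the beam-initialised GA loop described above is given below. For brevity it scores candidates with the breakpoint distance, a cheap unsigned proxy for the reversal distance used in the paper, and omits the common-interval machinery.

```python
import random

def breakpoints(p, q):
    """Breakpoint distance: adjacencies of p missing from q (an unsigned
    proxy for the reversal distance used in the paper)."""
    adj = {frozenset(a) for a in zip(q, q[1:])}
    return sum(frozenset(a) not in adj for a in zip(p, p[1:]))

def score(p, genomes):
    return sum(breakpoints(p, g) for g in genomes)

def reverse_mutation(p):
    """Apply one random reversal to permutation p."""
    i, j = sorted(random.sample(range(len(p)), 2))
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

def beam_ga_median(genomes, beam=20, gens=200):
    # Beam initialisation: start from the input genomes plus mutated copies
    pop = [list(g) for g in genomes]
    while len(pop) < beam:
        pop.append(reverse_mutation(random.choice(genomes)))
    for _ in range(gens):
        children = [reverse_mutation(random.choice(pop)) for _ in range(beam)]
        pop = sorted(pop + children, key=lambda p: score(p, genomes))[:beam]
    return pop[0]

g1 = list(range(1, 11))
g2 = reverse_mutation(g1)
g3 = reverse_mutation(reverse_mutation(g1))
m = beam_ga_median([g1, g2, g3])
print(m, score(m, [g1, g2, g3]))
```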
4392 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air
Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli
Abstract:
Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over recent years, its atmosphere has experienced an increasing anthropogenic load. The numerical modeling method is used to study the pollution of Tbilisi's atmospheric air. By means of a 3D non-linear, non-steady numerical model, the peculiarities of city atmosphere pollution are investigated during background western light air. Spatial and temporal changes in dust concentration are determined. Zones of high, average and low pollution, dust accumulation areas, transfer directions, etc. are identified. The numerical modeling shows that the process of air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the main streets. In the interval 06:00-09:00 there is intensive growth of the dust concentration; in 09:00-15:00, constancy or a weak decrease; in 18:00-21:00, an increase; and from 21:00 to 06:00, a reduction. The highly polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM is equal to twice the maximum allowable concentration. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.
Keywords: Numerical modelling, source of pollution, dust propagation, western light air.
4391 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration
Authors: Elżbieta Antczak
Abstract:
Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas (an integral part of sustainable urban development) present a major opportunity for improving urban environments and the quality of lives and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, the regional diversification of greenery in the cities of Poland. The analysis includes location quotients, the Lorenz curve, the Locational Gini Index, a synthetic index of greenery, and spatial statistics tools, used (1) to verify the occurrence of strong concentration or dispersion of the phenomenon in time and space depending on the variable category, and (2) to study whether the level of greenery depends on spatial autocorrelation. The data cover the greatest Polish cities, the categories of urban greenery (parks, lawns, street greenery, green areas on housing estates, cemeteries, and forests) and the time span 2004-2015. According to the obtained estimations, most cities in Poland are already taking measures to become greener. However, there are still many barriers to well-balanced urban greenery development in the country (e.g. uncontrolled urban sprawl, poor management, and the lack of spatial urban planning systems).
Keywords: Greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure.
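The location quotient and Locational Gini Index named above are short computations; a sketch with made-up green-area figures for a handful of cities (illustrative data, not the study's):

```python
import numpy as np

# Illustrative data: park area and total urban green area per city (ha)
park = np.array([120.0, 340.0, 80.0, 510.0, 95.0])
total_green = np.array([800.0, 1500.0, 400.0, 2600.0, 700.0])

# Location quotient: a city's share of parks relative to its share of all greenery
lq = (park / park.sum()) / (total_green / total_green.sum())

def locational_gini(x):
    """Gini index of the spatial concentration of x across units."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print("LQ per city:", np.round(lq, 2))
print("Locational Gini of parks:", round(locational_gini(park), 3))
```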
4390 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems
Authors: Kyoung-jae Kim
Abstract:
Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. This information includes user context, such as location, time and interest, for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected because user interest cannot be captured automatically without the user's approval process. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models for classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences in classification results. The results of the McNemar test also show that CHAID performs better than the other models with statistical significance.
Keywords: Customer need type, data mining techniques, recommender system, personalization, mobile user.
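The McNemar test used above checks whether two classifiers' disagreements on the same test set are symmetric. A minimal sketch with statsmodels, using an assumed 2×2 disagreement table rather than the paper's results:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes on the same test set (illustrative counts):
# rows = model A correct/incorrect, columns = model B correct/incorrect
table = np.array([[520, 48],
                  [17, 115]])

# exact=False uses the chi-square approximation with continuity correction,
# appropriate when the discordant counts (48, 17) are not tiny
result = mcnemar(table, exact=False, correction=True)
print(f"statistic={result.statistic:.3f}, p-value={result.pvalue:.4f}")
```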
4389 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the six sigma DMAIC (define, measure, analyze, improve, and control) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified are the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of an injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
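The Taguchi analysis described above ranks factors by signal-to-noise ratio across the L9 runs. The sketch below computes nominal-the-best S/N ratios and factor main effects from an assumed response table; the level assignments and shrinkage values are placeholders, not the study's data.

```python
import numpy as np

# L9 orthogonal array: 9 runs x 4 three-level factors
# (cooling time, melt temperature, holding time, metering stroke)
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])

# Assumed shrinkage measurements (%) for the two noise levels
# (material brand 1, material brand 2); placeholder numbers
y = np.array([[1.9, 2.1], [1.7, 1.9], [1.6, 1.8],
              [1.8, 2.0], [1.5, 1.7], [1.9, 2.2],
              [1.6, 1.9], [1.7, 2.0], [1.4, 1.6]])

# Nominal-the-best S/N ratio per run: 10 * log10(mean^2 / variance)
sn = 10 * np.log10(y.mean(axis=1) ** 2 / y.var(axis=1, ddof=1))

# Main effect of each factor = mean S/N at each of its three levels
for f, name in enumerate(["cooling", "melt temp", "holding", "stroke"]):
    effects = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, np.round(effects, 2))
```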
4388 Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States
Authors: Rita U. Onolemhemhen, Saheed L. Bello, Akin P. Iwayemi
Abstract:
The effectiveness of energy demand policy depends on identifying the key drivers of energy demand, both in the short run and the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China and the USA using the Vector Error Correction Model (VECM) approach. The study employed annual time series data on energy consumption (ED), real gross domestic product per capita (RGDP), real energy prices (P) and urbanization (N) for a thirty-six-year sample period. The time-series data are sourced from the World Bank's World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results from the study show that all the independent variables (income, urbanization, and price) substantially affect long-run energy consumption in Nigeria, the USA and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Urbanization is a key factor in energy demand; it is therefore recommended that more attention be given to the development of rural communities, to reduce the inflow of migrants into urban communities that increases energy demand, and that energy excesses be penalized while energy management is incentivized.
Keywords: Economic growth, energy demand, income, real GDP, urbanization, VECM.
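statsmodels ships a VECM implementation matching the workflow described above; a minimal sketch on synthetic series standing in for the energy-consumption, GDP, price and urbanization data. The lag order and cointegration rank are assumed here rather than selected by information criteria and Johansen tests, as a real analysis would do.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
n = 36  # annual observations, as in the study's sample period

# Synthetic cointegrated stand-ins: a common stochastic trend plus noise
trend = np.cumsum(rng.normal(0, 1, n))
data = pd.DataFrame({
    "energy": trend + rng.normal(0, 0.3, n),
    "rgdp": 0.8 * trend + rng.normal(0, 0.3, n),
    "price": -0.2 * trend + rng.normal(0, 0.3, n),
    "urban": 0.5 * trend + rng.normal(0, 0.3, n),
})

# k_ar_diff and coint_rank are assumed; in practice use select_order
# and select_coint_rank from the same module
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()
print(res.alpha.round(3))   # loading (speed-of-adjustment) coefficients
print(res.beta.round(3))    # long-run cointegrating vector
```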
4387 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones require a very high investment of time and money, and their results are often not current. However, nowadays in many countries, crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion caused by them. In this article, we identify the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, using data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the geographic information system QGIS.
Keywords: Data mining, k-means, road traffic accidents, Waze, Weka.
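The clustering pipeline run in Weka above can be sketched in Python: an EM (Gaussian mixture) fit scored over candidate cluster counts, followed by k-means with the selected k. The coordinates below are random stand-ins for the geocoded Waze reports.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic stand-in for geocoded accident reports (lon, lat)
pts = np.vstack([rng.normal(c, 0.02, (100, 2))
                 for c in [(-99.15, 19.40), (-99.10, 19.43), (-99.18, 19.36)]])

# EM (Gaussian mixture) to pick the number of clusters, scored by BIC
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(pts).bic(pts)
       for k in range(2, 8)}
best_k = min(bic, key=bic.get)

# k-means as the final grouping method, as in the paper
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(pts)
print(f"selected k={best_k}, cluster sizes={np.bincount(labels)}")
```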
4386 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script
Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim
Abstract:
A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications, such as water distribution, sewage, and oil and gas plants. In particular, butterfly valves with larger diameters find immense application in hydro power plants to control fluid flow. Given the cost and size constraints of running a laboratory setup, large-diameter valves are mostly studied by computational methods, which are the best and least expensive solution. For the fluid and structural analyses, CFD and FEM software are used to perform large-scale valve analyses, respectively. To perform such analyses of a butterfly valve, the CAD model has to be recreated and meshed in conventional software for each set of valve dimensions, which makes the process time-consuming. To overcome this limitation, a Python script was created that completes the entire pre-processing setup automatically in the Salome software. Specifying the dimensions of the model directly in the Python script makes the running time comparatively lower and the analysis of the valve easier to perform. Hence, in this paper, an attempt is made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using a Python script in the pre-processing software, and the results are presented.
Keywords: Butterfly valve, fluid-structure interaction, automatic CFD analysis, flow coefficient.
4385 AI-Based Approaches for Task Offloading, Resource Allocation and Service Placement of IoT Applications: State of the Art
Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib
Abstract:
In order to support the continued growth and critical latency requirements of IoT applications, and to overcome various obstacles of traditional data centers, Mobile Edge Computing (MEC) has emerged as a promising solution that extends cloud data-processing and decision-making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, we are often faced with optimizing conflicting criteria, such as minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage, bandwidth) of mobile edge devices while maintaining high performance (reducing response time, increasing throughput and service availability). Achieving one goal may affect the other, making Task Offloading (TO), Resource Allocation (RA) and Service Placement (SP) complex processes. Studying the trade-off between conflicting criteria is a nontrivial multi-objective optimization problem. This paper provides a survey of recent Multi-Objective Optimization (MOO) approaches to TO, SP and RA in edge computing environments, particularly Artificial Intelligence (AI) ones, used to satisfy the various objectives, constraints and dynamic conditions of IoT applications.
Keywords: Mobile Edge Computing, Multi-Objective Optimization, Artificial Intelligence Approaches, Task Offloading, Resource Allocation, Service Placement.
4384 Investigating the Transformer Operating Conditions for Evaluating the Dielectric Response
Authors: Jalal M. Abdallah
Abstract:
This paper presents an experimental investigation of the transformer dielectric response and of the water content of the solid insulation. The dielectric response was measured on the basis of the hybrid Frequency Dielectric Spectroscopy and Polarization Current measurement method (FDS & PC). The calculation of the water content in paper is based on the water content in oil and the obtained equilibrium curves. Reference measurements were performed under equilibrium conditions for the water content in the oil and paper of the transformer at different stable temperatures (25, 50, 60 and 70 °C) to provide references for evaluating the insulation behavior under non-equilibrium conditions. Further measurements were performed in different simulated normal working modes of transformer operation at the same temperatures as the equilibrium conditions. The obtained results show that when the transformer temperature is much higher than the ambient temperature, the transformer temperature decreases immediately after disconnecting the transformer from the network, and this temperature reduction influences the insulation condition during the measuring process. In addition to the oil temperature at the places near the sensors, the temperature uniformity in the transformer, which can be altered by a big change in the transformer load before the measuring time, will influence the result. The investigations have shown the extreme influence on the results of the time between disconnecting the transformer and beginning the measurements, and demonstrate online monitoring of the water content in paper on the basis of online monitoring of the water content in oil and the obtained equilibrium curves. The measurements were performed continuously for about 50 days, without any disconnection, in the prepared adiabatic room.
Keywords: Conductivity, moisture, temperature, oil-paper insulation, online monitoring, water content in oil.
4383 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People
Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman
Abstract:
Instantaneous spatial localization of visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the received signal strength indicator (RSSI) to measure the received signal strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-related methodologies. Therefore, the current paper proposes an improved trilateration-based method for the localization of Bluetooth devices for visually impaired people. To validate the method, Class 2 Bluetooth devices were used along with purpose-developed software. Experiments were then conducted to obtain surface plots that showed signal interference and other environmental effects. The results show the surface plots for all the Bluetooth modules used, with the strong and weak points depicted by color codes in red, yellow and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure the signal strength effectively but also to highlight how the signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired.
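The trilateration step described above can be sketched as follows: convert each RSSI reading to a distance with a log-distance path-loss model, then solve the resulting circle equations by linearised least squares. The path-loss constants (RSSI at 1 m, path-loss exponent) are assumptions that would be calibrated per device.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model; tx_power (RSSI at 1 m) and the
    path-loss exponent n are assumed calibration constants."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Linearised least-squares position fix from >= 3 anchors.
    Subtracting the first circle equation from the rest removes the
    quadratic terms, leaving a linear system A x = b."""
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(dists[0]**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]   # Bluetooth module positions (m)
rssi = [-65.0, -72.0, -70.0]                     # readings at the unknown point
d = [rssi_to_distance(r) for r in rssi]
print("estimated position:", np.round(trilaterate(anchors, d), 2))
```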
4382 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems
Authors: Rodolfo Lorbieski, Silvia Modesto Nassar
Abstract:
Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels: the final decision-maker (level 2) is trained by combining the outputs of a pair of meta-classifiers (level 1) drawn from tree-based and Bayesian families, and these are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only in time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as for the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
Keywords: Stacking, multi-layers, ensemble, multi-class.
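The three-level arrangement described above can be sketched by nesting scikit-learn StackingClassifier objects: same-family pairs of base classifiers (level 0) under inner meta-classifiers (level 1), combined by a final decision-maker (level 2). The concrete estimator families below are assumptions where the abstract is not specific.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Levels 0 and 1: each inner ensemble stacks a same-family pair of base
# classifiers under a meta-classifier (families here are illustrative)
tree_ens = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=3)),
                ("rf", RandomForestClassifier(n_estimators=50))],
    final_estimator=LogisticRegression(max_iter=1000))
bayes_ens = StackingClassifier(
    estimators=[("nb1", GaussianNB()),
                ("nb2", GaussianNB(var_smoothing=1e-7))],
    final_estimator=LogisticRegression(max_iter=1000))

# Level 2: the final decision-maker combines the two inner ensembles
level2 = StackingClassifier(
    estimators=[("trees", tree_ens), ("bayes", bayes_ens)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)
print("accuracy:", cross_val_score(level2, X, y, cv=5).mean().round(3))
```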
4381 Comparative Quantitative Study on Learning Outcomes of Major Study Groups of an Information and Communication Technology Bachelor Educational Program
Authors: Kari Björn, Mikael Soini
Abstract:
Reforms of the higher education system, especially the 2014 reform of the Finnish Universities of Applied Sciences, are discussed. The new steering model is based on major legislative changes, output-oriented funding and open information. The governmental steering reform, especially the financial model and the resulting institutional-level responses such as curriculum reforms, is discussed, focusing especially on engineering programs. The paper is motivated by the management need to establish objective steering-related performance indicators and to apply them consistently across all educational programs. The close relationship to the governmental steering and funding model implies that internally derived indicators can be directly applied. Metropolia University of Applied Sciences (MUAS), as the case institution, is briefly introduced, focusing on engineering education in Information and Communications Technology (ICT) and its related programs. The reform forced the consolidation of previously separate smaller programs into fewer units of student application. Under the new curriculum, ICT students have a common first year before they apply for a Major. A framework of parallel and longitudinal comparisons is introduced and used across Majors on two campuses. The new externally introduced performance criteria are applied internally to the ICT Majors using data from before and after the program merger. A comparative performance of the Majors after completion of the joint first year is established, focusing on previously omitted Majors for completeness of analysis. Some new research questions resulting from the transfer of Majors between campuses and quota setting are discussed. The practical orientation identifies best practices to share and targets needing the most attention for improvement. This level of analysis is directly applicable at the student group and teaching team level, where corrective actions are possible when identified. The analysis is quantitative, and the nature of the corrective actions is not discussed. Causal relationships and factor analysis are omitted, because the campuses, their staff and various pedagogical implementation details still contain too many undetermined factors for our limited data. Such qualitative analysis is left for further research. Further study must, however, be guided by the relevance of the observations.
Keywords: Engineering education, integrated curriculum, learning outcomes, performance measurement.
4380 Evaluation of Mixed-Mode Stress Intensity Factor by Digital Image Correlation and Intelligent Hybrid Method
Authors: K. Machida, H. Yamada
Abstract:
Displacement measurements were conducted on compact normal and shear specimens made of a homogeneous acrylic material subjected to mixed-mode loading, using digital image correlation. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress intensity factor at the free surface is discussed from the viewpoint of both the experiment and 3-D finite element analysis. The surface images before and after deformation were taken by a CMOS camera, and we developed a system that enables real-time stress analysis based on digital image correlation and inverse problem analysis. The greatest portion of the processing time of this system is spent on displacement analysis, so we worked to improve the speed of this portion. In the case of a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J integral, the strain energy release rate, and the mixed-mode stress intensity factor. The 9-point elliptic paraboloid approximation could not analyze displacements of submicron order with high accuracy. The analysis accuracy of the displacement was improved considerably by introducing the Newton-Raphson method in consideration of the deformation of a subset. The stress intensity factor was evaluated with high accuracy, with an error of less than 1%.
Keywords: Digital image correlation, mixed mode, Newton-Raphson method, stress intensity factor.
4379 Governance Commitment and Time Differences in Aspects of Sustainability Reporting in Nigerian Banks
Authors: Nwobu Obiamaka, Owolabi Akintola
Abstract:
This study examined the extent of statistically significant differences between the economic, environmental, governance and social aspects of sustainability reporting as a result of having a board committee on sustainability and of the time (year) of reporting, for business organizations in the Nigerian banking sector. The years of reporting under consideration were 2010, 2011, 2012 and 2013. A content analysis methodology was employed, using a reporting index to score the number of economic, environmental, governance and social indicators of sustainability reporting. The results of this study indicated that business organizations with a board committee on sustainability reported more indicators of sustainability than those without one. Also, sustainability reporting in 2013 was higher than in the prior years (2012, 2011 and 2010) for the economic, environmental and social indicators. The governance indicators of 2012 were the highest compared to the other years (2013, 2011 and 2010) under consideration in this study. The implication of this finding is that business organizations with board committees on sustainability are monitored by such committees to report more to their stakeholders. On the other hand, business organizations are appreciating the need to engage in sustainability reporting with each passing year. This could be due to the Central Bank of Nigeria (CBN) sustainability reporting framework, which business organizations in the banking sector have to adhere to. When sustainability issues are monitored by the board of directors, business organizations are likely to increase and improve their sustainability reporting.
Keywords: Governance, organizations, reporting, sustainability.
4378 Nuclear Medical Image Treatment System Based On FPGA in Real Time
Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah
Abstract:
We present in this paper an acquisition and treatment system designed for semi-analog gamma cameras, taking as an interface example the SOPHY DS7 from Sopha Medical Vision (SMVi). It consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition and treatment of the signals resulting from the gamma-camera detection head and the construction of the scintigraphic image in real time. The chain is composed of an analog treatment board and a digital treatment board; an earlier version of the chain was designed around a DSP [2]. We describe the designed system and the digital treatment algorithms, whose performance and flexibility we have improved. In this new version of the IATD chain, the digital treatment algorithms are implemented in a specific reprogrammable circuit, an FPGA (Field Programmable Gate Array).
Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.
4377 Automat Control of the Aircrafts' Lateral Movement using the Dynamic Inversion
Authors: Mihai Lungu, Romulus Lungu, Lucian Grigorie
Abstract:
The paper presents a new system for the automatic control of aircraft flight in the lateral plane using the kinematic model and dynamic inversion. Starting from the equations of the aircraft's lateral movement, the authors use two axis systems and obtain a control law that cancels the lateral deviation of the flying object from the runway line. This system makes the aircraft's direction angle follow the direction angle of the runway line. Simulations in Matlab/Simulink have been done for different initial aircraft positions and direction angles. The inconvenience of this system is the long duration of the "transient regime"; it can therefore be used independently, but the results are not very good, so it is better used as a subsystem of other systems. The main system that cancels the lateral deviation from the runway line is based on dynamic inversion and uses, as a subsystem, the control system for the lateral movement based on the kinematic model. Using complex Matlab/Simulink models, the authors obtained the time evolution of the direction angle and of the aircraft's lateral deviation with respect to the runway line, for different values of the initial direction angle and for different wind types. The system has very good behavior for all initial direction angles and wind types.
Keywords: Direction angle, dynamic inversion, lateral deviation, lateral movement.
4376 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use a GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHUs). There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nation-wide level. However, there is limited illustration of their anomaly detection capabilities, prescriptive analytics case studies, and integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building on an education campus in Singapore, between Jan 2018 and Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of the deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop, through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
Keywords: Anomaly detection, digital twin, Generalised Additive Model, Power Consumption Model.
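The GAM-based detector described above amounts to fitting smooth terms on the drivers of AHU power and flagging observations outside the model's prediction interval. A minimal sketch with the pygam package follows; the feature choice, synthetic data and interval width are assumptions, as the paper's exact specification is not reproduced here.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
n = 2000
hour = rng.uniform(0, 24, n)                  # time of day
load = rng.uniform(50, 300, n)                # cooling load (kW), stand-in
power = (20 + 0.15 * load + 5 * np.sin(hour / 24 * 2 * np.pi)
         + rng.normal(0, 2, n))               # synthetic AHU power (kW)

# Smooth terms on each driver of AHU power
X = np.column_stack([hour, load])
gam = LinearGAM(s(0) + s(1)).fit(X, power)

# Flag points outside the 95% prediction interval as anomalies
lo, hi = gam.prediction_intervals(X, width=0.95).T
anomalous = (power < lo) | (power > hi)
print(f"flagged {anomalous.sum()} of {n} observations")
```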