Search results for: Information technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5743

553 Ranking of Inventory Policies Using Distance Based Approach Method

Authors: Gupta Amit, Kumar Ramesh, Tewari P. C.

Abstract:

Globalization is putting enormous pressure on business organizations, especially manufacturers, to rethink their supply chains in innovative ways. Inventory consumes a major portion of total sales revenue, and effective and efficient inventory management plays a vital role in the successful functioning of any organization. Selection of an inventory policy is one of the important purchasing activities. This paper focuses on the selection and ranking of alternative inventory policies. A deterministic quantitative model based on the Distance Based Approach (DBA) method has been developed for the evaluation and ranking of inventory policies; to our knowledge, this concept is employed for the first time for this type of selection problem. Four inventory policies are considered: economic order quantity (EOQ), just in time (JIT), vendor managed inventory (VMI), and a monthly ordering policy. Improper selection could affect a company’s competitiveness in terms of the productivity of its facilities and the quality of its products. The ranking of inventory policies is a multi-criteria problem: the selection criteria must first be identified, and the information then processed with reference to the relative importance of the attributes being compared. Criteria values for each inventory policy can be obtained analytically, by simulation, or as subjective linguistic judgments defined by fuzzy sets. A methodology is developed and applied to rank the inventory policies.
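
The distance-based ranking idea can be illustrated with a small numerical sketch: each policy is scored on the selection criteria, the scores are normalized, and the policy closest to a hypothetical optimal (ideal) alternative ranks first. The criteria, weights, and values below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Illustrative decision matrix: rows = policies, columns = criteria
# (e.g. cost efficiency, service level, flexibility); values are assumed.
policies = ["EOQ", "JIT", "VMI", "Monthly"]
X = np.array([
    [0.70, 0.80, 0.60],
    [0.90, 0.60, 0.80],
    [0.80, 0.90, 0.70],
    [0.50, 0.70, 0.50],
])
benefit = np.array([True, True, True])  # all criteria treated as "higher is better"
weights = np.array([0.5, 0.3, 0.2])     # assumed relative importance of criteria

# Vector normalization of each criterion column
N = X / np.linalg.norm(X, axis=0)

# Ideal (optimal) alternative: best normalized value of each criterion
ideal = np.where(benefit, N.max(axis=0), N.min(axis=0))

# Weighted Euclidean distance of each policy from the ideal point
d = np.sqrt(((N - ideal) ** 2 * weights).sum(axis=1))

for rank, i in enumerate(np.argsort(d), start=1):
    print(f"{rank}. {policies[i]} (distance = {d[i]:.4f})")
```

The policy with the smallest distance from the ideal point is ranked first; changing the weights shows how sensitive the ranking is to the relative importance of the criteria.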

Keywords: Inventory Policy, Ranking, DBA, Selection criteria.

552 Perceptions of Educators on the Learners’ Youngest Age for the Introduction of ICTs in Schools: A Personality Theory Approach

Authors: K. E. Oyetade, S. D. Eyono Obono

Abstract:

Age ratings are very helpful in providing parents with relevant information for the purchase and use of digital technologies by children; the absence of defined age ratings for the use of ICTs by children in schools is therefore a major concern, and this problem motivates the present study, whose aim is to examine the factors affecting the perceptions of educators on the learners’ youngest age for the introduction of ICTs in schools. This aim is pursued through two types of research objectives: the identification and design of theories and models on age ratings, and the empirical testing of those theories and models in a survey of educators from the Camperdown district of the South African KwaZulu-Natal province. A questionnaire was used to collect the survey data, and its validity and reliability were checked in SPSS prior to descriptive and correlational quantitative analysis. The main hypothesis of this research is that there is an association between the demographics of educators, their personality, and their perceptions on the learners’ youngest age for the introduction of ICTs in schools, as claimed by existing research, except that the present study looks at personality from three dimensions: self-actualized personalities, fully functioning personalities, and healthy personalities. This hypothesis was largely confirmed by the empirical study, except for the demographic factors, where only the educators’ grade or class was found to be associated with their personality.

Keywords: Age ratings, Educators, E-learning, Personality Theories.

551 GIS-based Non-point Sources of Pollution Simulation in Cameron Highlands, Malaysia

Authors: M. Eisakhani, A. Pauzi, O. Karim, A. Malakahmad, S.R. Mohamed Kutty, M. H. Isa

Abstract:

Cameron Highlands is a mountainous area subjected to torrential tropical showers. It extracts 5.8 million liters of water per day for drinking supply from its rivers at several intake points. The water quality of the rivers in Cameron Highlands, however, has deteriorated significantly due to land clearing for agriculture, excessive use of pesticides and fertilizers, and construction activities in rapidly developing urban areas. These pollution sources, known as non-point pollution sources, are diverse and hard to identify, and they are therefore difficult to estimate. Hence, Geographical Information Systems (GIS) were used to provide an extensive approach for evaluating land use and other mapping characteristics and explaining the spatial distribution of non-point sources of contamination in Cameron Highlands. The method to assess pollution sources was developed using the Cameron Highlands Master Plan (2006-2010), integrating GIS, databases, and pollution loads in the study area. The results show that the highest annual runoff is generated by forest, 3.56 × 10⁸ m³/yr, followed by urban development, 1.46 × 10⁸ m³/yr. Furthermore, urban development causes the highest BOD load (1.31 × 10⁶ kg BOD/yr), while agricultural activities and forest contribute the highest annual loads of phosphorus (6.91 × 10⁴ kg P/yr) and nitrogen (2.50 × 10⁵ kg N/yr), respectively. Therefore, best management practices (BMPs) are suggested to be applied to reduce the pollution level in the area.
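
A common way to estimate non-point source loads from GIS land-use data is the export-coefficient approach: the area of each land-use class is multiplied by an annual runoff depth and by pollutant export coefficients. The sketch below illustrates that calculation with assumed areas and coefficients; it does not reproduce the Master Plan data or coefficients used in the study.

```python
# Illustrative land-use areas and export coefficients; all values are assumed.
land_use = {
    #            area_km2, runoff_mm/yr, BOD kg/ha/yr, N kg/ha/yr, P kg/ha/yr
    "forest":      (450.0,  800.0,  2.0,  5.5, 0.10),
    "agriculture": (150.0,  600.0,  8.0, 16.0, 4.60),
    "urban":       ( 60.0, 1200.0, 22.0,  9.0, 1.20),
}

for name, (area_km2, runoff_mm, bod, n, p) in land_use.items():
    area_ha = area_km2 * 100.0                       # 1 km^2 = 100 ha
    runoff_m3 = area_km2 * 1e6 * runoff_mm / 1000.0  # annual runoff volume, m^3/yr
    print(f"{name:12s} runoff = {runoff_m3:.2e} m3/yr, "
          f"BOD = {bod * area_ha:.2e} kg/yr, "
          f"N = {n * area_ha:.2e} kg/yr, "
          f"P = {p * area_ha:.2e} kg/yr")
```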

Keywords: Cameron Highlands, Land use, Non-point Sources of Pollution

550 Application of Pattern Search Method to Power System Security Constrained Economic Dispatch

Authors: A. K. Al-Othman, K. M. EL-Nagger

Abstract:

Direct search (DS) methods are derivative-free optimization algorithms: they do not require any information about the gradient of the objective function while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve the security constrained power system economic dispatch (SCED) problem. Operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time attention must be paid to the economic aspects. A pattern recognition technique is used first to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation, and the pattern search method is then applied to solve this formulation. The method is tested on one test system, and simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and shows that pattern search (PS) is well suited to solving the security constrained power system economic dispatch (SCED) problem.
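
A compass-type pattern search polls points around the current solution along coordinate directions, moving when an improvement is found and contracting the step size otherwise; constraints can be handled with penalty terms. The sketch below applies this to a hypothetical three-unit economic dispatch in which the power balance is enforced by substitution and generator limits by a quadratic penalty. The cost coefficients, limits, and demand are assumptions for illustration, not the test system or the security constraints used in the paper.

```python
import numpy as np

# Hypothetical three-unit dispatch: quadratic fuel costs, one demand balance,
# generator limits handled by a quadratic penalty. All numbers are assumed.
a = np.array([0.008, 0.010, 0.012])   # $/MW^2
b = np.array([7.0, 8.5, 9.0])         # $/MW
p_min, p_max = 10.0, 200.0            # MW limits for every unit
demand = 400.0                        # MW

def dispatch_cost(x):
    p = np.array([x[0], x[1], demand - x[0] - x[1]])  # third unit balances the demand
    cost = np.sum(a * p**2 + b * p)
    violation = np.maximum(p_min - p, 0.0) + np.maximum(p - p_max, 0.0)
    return cost + 1e3 * np.sum(violation**2)          # penalised limit violations

def pattern_search(f, x0, step=50.0, tol=1e-3, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    for _ in range(max_iter):
        polls = [(f(x + step * d), x + step * d) for d in directions]
        f_best, x_best = min(polls, key=lambda t: t[0])
        if f_best < fx:              # successful poll: move to the better point
            x, fx = x_best, f_best
        else:                        # unsuccessful poll: contract the mesh
            step *= 0.5
            if step < tol:
                break
    return x, fx

x_opt, f_opt = pattern_search(dispatch_cost, x0=[100.0, 150.0])
p_opt = np.array([x_opt[0], x_opt[1], demand - x_opt.sum()])
print("dispatch (MW):", np.round(p_opt, 1), " cost ($/h):", round(f_opt, 1))
```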

Keywords: Security Constrained Economic Dispatch, Direct Search method, optimization.

549 A Dynamic Composition of an Adaptive Course

Authors: S. Chiali, Z. Eberrichi, M. Malki

Abstract:

The number of frameworks conceived for e-learning is constantly increasing; unfortunately, the creators of learning materials and the educational institutions engaged in e-learning adopt a “proprietary” approach, in which the developed products (courses, activities, exercises, etc.) can be exploited only in the framework where they were conceived, and their use in other learning environments requires an adaptation that is costly in time and effort. Each framework proposes courses whose organization, contents, modes of interaction, and presentation are the same for all learners; unfortunately, learners are heterogeneous and are not interested in the same information, but only in services or documents adapted to their needs. The current trend for e-learning frameworks is the interoperability of learning materials; several standards exist (DCMI (Dublin Core Metadata Initiative) [2], LOM (Learning Object Metadata) [1], SCORM (Shareable Content Object Reference Model) [6][7][8], ARIADNE (Alliance of Remote Instructional Authoring and Distribution Networks for Europe) [9], CANCORE (Canadian Core Learning Resource Metadata Application Profiles) [3]), and they all converge on the idea of learning objects. These standards are also concerned with the adaptation of learning materials to learners’ profiles. This article proposes an approach for composing courses adapted to the various profiles (knowledge, preferences, objectives) of learners, based on two ontologies (the domain to be taught and an educational ontology) and on learning objects.
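
The adaptation step described here can be pictured as filtering and ordering learning objects whose metadata match a learner's profile. The sketch below is a minimal illustration with hypothetical metadata fields (topic, difficulty, media type); it is not the ontology-based composition algorithm of the article.

```python
from dataclasses import dataclass

@dataclass
class LearningObject:
    title: str
    topic: str          # concept from the domain ontology (hypothetical field)
    difficulty: int     # 1 = introductory ... 3 = advanced
    media: str          # "text", "video", ...

@dataclass
class LearnerProfile:
    known_topics: set
    target_topic: str
    preferred_media: str
    level: int

def compose_course(objects, profile):
    """Keep objects on the target topic the learner has not yet mastered,
    at or below the learner's level, preferring the preferred media type."""
    candidates = [o for o in objects
                  if o.topic == profile.target_topic
                  and o.topic not in profile.known_topics
                  and o.difficulty <= profile.level]
    # preferred media first, then increasing difficulty
    return sorted(candidates,
                  key=lambda o: (o.media != profile.preferred_media, o.difficulty))

repo = [LearningObject("Intro to SQL", "databases", 1, "video"),
        LearningObject("Joins in depth", "databases", 2, "text"),
        LearningObject("Query tuning", "databases", 3, "text")]
learner = LearnerProfile(known_topics=set(), target_topic="databases",
                         preferred_media="video", level=2)
print([o.title for o in compose_course(repo, learner)])
```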

Keywords: Adaptive educational hypermedia systems (AEHS), E-learning, Learner's model, Learning objects, Metadata, Ontology.

548 Production of Apricot Vinegar Using an Isolated Acetobacter Strain from Iranian Apricot

Authors: Keivan Beheshti Maal, Rasoul Shafiei, Noushin Kabiri

Abstract:

Vinegar, or sour wine, is a product of the alcoholic and subsequent acetous fermentation of sugary precursors derived from several fruits or starchy substrates. This delicious food additive and supplement contains not less than 4 grams of acetic acid per 100 cubic centimeters at 20°C. Among the large number of bacteria able to produce acetic acid, only a few genera are used in the vinegar industry, the most significant of which are Acetobacter and Gluconobacter. In this research, we isolated and identified an Acetobacter strain from Iranian apricot, a delicious summer fruit that is very susceptible to decay, gathered from fruit stores in Isfahan, Iran. The main culture media we used were Carr, GYC, Frateur, and an industrial medium for vinegar production. We isolated this strain using a novel miniature fermentor made at Pars Yeema Biotechnologists Co., Isfahan Science and Technology Town (ISTT), Isfahan, Iran. Microscopic examination of the isolated strain showed Gram-negative rods to coccobacilli. The catalase reaction was positive, the oxidase reaction was negative, and the isolate could ferment ethanol to acetic acid. It also showed acceptable growth at 5%, 7%, and 9% ethanol concentrations at 30°C on modified Carr media after 24, 48, and 96 hours of incubation, respectively. Given its tolerance of high ethanol concentrations after four days of incubation and its high acetic acid production, 8.53% after 144 hours, this strain can be considered a suitable industrial strain for the production of a new type of vinegar, apricot vinegar, with a new and delicious taste. In conclusion, this is the first report of the isolation and identification of an Acetobacter strain from Iranian apricot with very good tolerance of high ethanol concentrations as well as industrially acceptable acetic acid productivity within a reasonable incubation period. This strain could be used in the vinegar industry to convert apricot spoilage into a beneficial product, and the characteristics mentioned make it an amenable strain for food and agricultural biotechnology.

Keywords: Acetic Acid Bacteria, Acetobacter, Fermentation, Food and Agricultural Biotechnology, Iranian Apricot, Vinegar.

547 Data Mining to Capture User-Experience: A Case Study in Notebook Product Appearance Design

Authors: Rhoann Kerh, Chen-Fu Chien, Kuo-Yi Lin

Abstract:

In an era of a rapidly growing notebook market, consumer electronics manufacturers are facing a highly dynamic and competitive environment. In particular, product appearance is the first cue by which users distinguish a product from those of other brands, so a notebook should differ in its appearance to engage users and contribute to the user experience (UX). UX evaluation compares various product concepts to find the design that meets user needs and, in addition, helps the designer further understand the product appearance preferences of different market segments. However, few studies have explored the relationship between consumer background and reactions to product appearance. This study proposes a data mining framework to capture user information and the important relations among product appearance factors. The proposed framework consists of problem definition and structuring, data preparation, rule generation, and results evaluation and interpretation. An empirical study was conducted in Taiwan that recruited 168 subjects from different backgrounds to experience the appearance of 11 different portable computers. The results assist designers in developing product strategies based on the characteristics of consumers and the product concepts related to the UX, which helps to launch products to the right customers and increase market share. The results demonstrate the practical feasibility of the proposed framework.
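
One way to picture the rule-generation step, in the spirit of the rough set approach, is to group subjects by identical values of the condition attributes (their background) and keep the groups whose members all share the same decision (appearance preference), which yield certain decision rules. The attributes and records below are invented for illustration and are not the survey data.

```python
from collections import defaultdict

# Hypothetical records: (gender, age_group, occupation) -> preferred appearance style
records = [
    (("F", "20-29", "student"),  "slim-metallic"),
    (("F", "20-29", "student"),  "slim-metallic"),
    (("M", "30-39", "engineer"), "matte-black"),
    (("M", "30-39", "engineer"), "slim-metallic"),
    (("M", "20-29", "student"),  "matte-black"),
]

# Indiscernibility classes: group records with identical condition attributes
groups = defaultdict(list)
for condition, decision in records:
    groups[condition].append(decision)

# A class with a single decision value gives a certain rule (lower approximation)
for condition, decisions in groups.items():
    if len(set(decisions)) == 1:
        print(f"IF background = {condition} THEN preference = {decisions[0]}")
    else:
        print(f"{condition}: inconsistent ({set(decisions)}), no certain rule")
```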

Keywords: Consumers Decision Making, Product Design, Rough Set Theory, User Experience.

546 Conceptual Design of the TransAtlantic as a Research Platform for the Development of “Green” Aircraft Technologies

Authors: Victor Maldonado

Abstract:

Recent concerns about the growing impact of aviation on climate change have prompted the emergence of a field referred to as Sustainable or “Green” Aviation, dedicated to mitigating the harmful impact of aviation-related CO2 emissions and noise pollution on the environment. In the current paper, a unique “green” business jet aircraft called the TransAtlantic was designed (using analytical formulations common in conceptual design) in order to show the feasibility of transatlantic passenger air travel with an aircraft of less than 10,000 pounds takeoff weight. Such an advance in fuel efficiency will require the development and integration of advanced and emerging aerospace technologies. The TransAtlantic design is intended to serve as a research platform for the development of technologies such as active flow control; recent advances in active flow control, and how this technology can be integrated on a sub-scale flight demonstrator, are discussed in this paper. Flow control is a technique to modify the behavior of coherent structures in wall-bounded flows (over aerodynamic surfaces such as wings and turbine nozzles), resulting in improved aerodynamic cruise and flight control efficiency. One of the key challenges to its application in manned aircraft is the development of a robust high-momentum actuator that can penetrate the boundary layer flowing over aerodynamic surfaces. These deficiencies may be overcome in the current development and testing of a novel electromagnetic synthetic jet actuator, which replaces piezoelectric materials as the driving diaphragm. The overarching goals of the TransAtlantic research platform include fostering national and international collaboration to demonstrate (in numerical and experimental models) reduced CO2 and noise pollution via the development and integration of technologies and methodologies in design optimization, fluid dynamics, structures and composites, propulsion, and controls.

Keywords: Aircraft Design, Sustainable “Green” Aviation, Active Flow Control, Aerodynamics.

545 Different in Factors of the Distributor Selection for Food and Non-Food OTOP Entrepreneur in Thailand

Authors: Phutthiwat Waiyawuththanapoom

Abstract:

This study has a single objective: to identify the differences in the factors used to choose a distributor between food and non-food OTOP entrepreneurs in Thailand. In this research, OTOP products are divided into two groups, food and non-food. The sample for the food type was processed fruit and vegetables from Nakorn Pathom province, and the sample for the non-food type was court dolls from Ang Thong province. The research was divided into three parts: a study of the distribution pattern and distributor selection for the food type, a study of the distribution pattern and distributor selection for the non-food type, and a comparison between the two product types to identify differences in the distributor selection factors. The data and information were collected through interviews. The populations in the research were five producers of processed fruit and vegetables from Nakorn Pathom province and five producers of court dolls from Ang Thong province. The significant factors in choosing a distributor for the food type OTOP product are material handling efficiency and on-time delivery, whereas for the non-food type the focus is on the channel of distribution and the cost of the distributor.

Keywords: Distributor, OTOP, Food and Non-Food, Selection.

544 Mining Network Data for Intrusion Detection through Naïve Bayesian with Clustering

Authors: Dewan Md. Farid, Nouria Harbi, Suman Ahmmed, Md. Zahidur Rahman, Chowdhury Mofizur Rahman

Abstract:

Network security attacks are violations of information security policy that have received much attention from the computational intelligence community over the last decades. Data mining has become a very useful technique for detecting network intrusions by extracting useful knowledge from large volumes of network data or logs. The naïve Bayesian classifier is one of the most popular data mining algorithms for classification and provides an optimal way to predict the class of an unknown example; however, it has been shown that a single set of probabilities derived from the data is not good enough to achieve a good classification rate. In this paper, we propose a new learning algorithm for mining network logs to detect network intrusions using a naïve Bayesian classifier: the algorithm first clusters the network logs into several groups based on the similarity of the logs, and then calculates the prior and conditional probabilities for each group of logs. To classify a new log, the algorithm determines which cluster the log belongs to and then uses that cluster's probability set to classify it. We tested the performance of the proposed algorithm on the KDD99 benchmark network intrusion detection dataset, and the experimental results show that it improves detection rates and reduces false positives for different types of network intrusions.
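
A minimal sketch of the clustering-then-naïve-Bayes idea, using scikit-learn: the training logs are first partitioned with k-means, a Gaussian naïve Bayes model is fitted per cluster, and a new log is classified by the model of the cluster it falls into. The feature matrix and labels below are synthetic; this is not the authors' implementation or the KDD99 data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                   # synthetic "log" features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic normal/attack labels

# 1) Cluster the logs by similarity
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# 2) Fit one naive Bayes model per cluster (per-cluster priors and conditionals)
models = {c: GaussianNB().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(k)}

# 3) Classify a new log with the model of the cluster it belongs to
x_new = rng.normal(size=(1, 5))
cluster = km.predict(x_new)[0]
print("cluster:", cluster, "predicted class:", models[cluster].predict(x_new)[0])
```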

Keywords: Clustering, detection rate, false positive, naïve Bayesian classifier, network intrusion detection.

543 An Investigation into the Potential of Industrial Low Grade Heat in Membrane Distillation for Freshwater Production

Authors: Yehia Manawi, Ahmad Kayvani Fard

Abstract:

Membrane distillation is an emerging technology which has been used to produce freshwater and purify different types of aqueous mixtures. Qatar is an arid country where almost 100% of the freshwater demand is supplied through the energy-intensive thermal desalination process. The country’s need for water has reached an all-time high, which makes it necessary to find an alternative way to augment the freshwater supply without any drastic effect on the environment. The objective of this paper was to investigate the potential of using industrial low grade waste heat to produce freshwater by membrane distillation. The main part of this work was a heat audit of selected Qatari chemical industries to estimate the amounts of waste heat that can potentially be recovered and the amounts of freshwater that could be produced if such waste heat were recovered.

The heat audit showed that around 605 megawatts of waste heat can be recovered from the studied Qatari chemical industries, corresponding to a total daily production of 5078.7 cubic meters of freshwater.

This water can be used in a wide variety of applications, such as human consumption or industry. The amount of produced freshwater may look small compared to that produced by thermal desalination plants; however, one must bear in mind that this water comes from waste and can be used to supply water to small cities or remote areas that are not connected to the water grid. The idea of producing freshwater from two widely available waste streams (thermally rejected brine and waste heat) seems promising, as less environmental and economic impact would be associated with freshwater production, and it may in the near future augment the conventional route of producing freshwater, currently thermal desalination. This work has shown that low grade waste heat in the chemical industries in Qatar, and perhaps the rest of the world, can contribute to additional production of freshwater using membrane distillation without significantly adding to the environmental impact.
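
The order of magnitude of such estimates can be checked with a simple energy balance: the recoverable thermal power divided by the specific thermal energy consumption of the membrane distillation unit gives a daily permeate volume. The specific consumption and usable fraction assumed below are illustrative round numbers, not the detailed heat-audit parameters behind the paper's figure of 5078.7 m³/day.

```python
waste_heat_mw = 605.0      # recoverable waste heat from the audit, MW
stec_kwh_per_m3 = 2000.0   # assumed specific thermal energy consumption of MD, kWh/m^3
usable_fraction = 0.7      # assumed fraction of the heat actually delivered to the MD unit

thermal_kwh_per_day = waste_heat_mw * 1000.0 * 24.0 * usable_fraction
freshwater_m3_per_day = thermal_kwh_per_day / stec_kwh_per_m3
print(f"Estimated permeate: {freshwater_m3_per_day:,.0f} m3/day")
```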

Keywords: Membrane distillation, desalination, heat recovery, environment.

542 A Holistic Conceptual Measurement Framework for Assessing the Effectiveness and Viability of an Academic Program

Authors: Munir Majdalawieh, Adam Marks

Abstract:

In today’s very competitive higher education industry, higher education institutions (HEIs) are faced with the primary concern of developing, deploying, and sustaining high quality academic programs. HEIs now have well-established accreditation systems endorsed by a country’s legislation and institutions. The accreditation system is an educational pathway focused on the criteria and processes for evaluating educational programs. Although many aspects of the accreditation process highlight both the past and the present (prove), the “program review” assessment is a forward-looking assessment (improve) and thus transforms the process into a continuing assessment activity rather than a periodic event. The purpose of this study is to propose a conceptual measurement framework for program review to be used by HEIs to undertake a robust and targeted approach to proactively and continuously review their academic programs, to evaluate their practicality and effectiveness, and to improve the education of the students. The proposed framework consists of two main components: program review principles and the program review measurement matrix.

Keywords: Academic program, program review principles, curriculum development, accreditation, evaluation, assessment, review measurement matrix, program review process, information technologies supporting learning, learning/teaching methodologies and assessment.

541 A Comparative Study on the Performance of Viscous and Friction Dampers under Seismic Excitation

Authors: Apetsi K. Ampiah, Zhao Xin

Abstract:

Earthquakes over the years have been known to cause devastating damage to buildings and to induce huge losses of human life and property. It is for this reason that engineers have devised means of protecting buildings and thus protecting human life. Since the invention of devices such as viscous and friction dampers, scientists and researchers have been able to incorporate these devices into buildings and other engineering structures. The viscous damper is a hydraulic device which dissipates seismic energy by pushing fluid through an orifice, producing a damping pressure which creates a force. In the friction damper, energy is dissipated mainly by converting kinetic energy into heat through friction. Devices such as viscous and friction dampers are able to absorb almost all of the earthquake energy, allowing the structure to remain undamaged (or with limited damage) and ready for immediate reuse (possibly after some repair work). Comparing these two devices gives the engineer adequate information on their merits and demerits and on the circumstances in which the use of each would be most favorable. This paper examines the performance of both viscous and friction dampers under different ground motions. A two-storey frame fitted with each of the devices under investigation is modeled in commercial software and analyzed under different ground motions, and the results of the structural performance are then tabulated and compared. The ease of installation and maintenance of these devices is also included in this study.

Keywords: Friction damper, seismic, slip load, viscous damper.

540 Evaluation of Exerting Force on the Heating Surface Due to Bubble Ebullition in Subcooled Flow Boiling

Authors: M. R. Nematollahi

Abstract:

Vibration characteristics of subcooled flow boiling on thin and long structures such as a heating rod were recently investigated by the author. The results show that the intensity of the subcooled boiling-induced vibration (SBIV) is influenced strongly by the subcooling temperature, the linear power density, and the flow velocity. Implosive bubble formation and collapse are the essential nature of subcooled boiling, and their behavior is the sole source from which SBIV originates. Therefore, in order to explain the phenomenon of SBIV, it is essential to obtain reliable information about bubble behavior in subcooled boiling conditions. This was investigated for coolant subcooling temperatures of 25 to 75°C, coolant flow velocities of 0.16 to 0.53 m/s, and linear power densities of 100 to 600 W/cm. High speed photography at 13,500 frames per second was performed under these conditions. The results show that even at the highest subcooling condition, the large majority of bubbles collapse very close to the surface after detaching from the heating surface. Based on these observations, a simple model of surface tension and momentum change is introduced to offer a rough quantitative estimate of the force exerted on the heating surface during bubble ebullition. The formation of a typical bubble in subcooled boiling is predicted to exert an excitation force of the order of 10⁻⁴ N.
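
The order of magnitude quoted above can be reproduced with a back-of-the-envelope estimate combining a surface-tension term and the momentum change of the liquid during bubble collapse. The bubble size, collapse time, and liquid velocity below are assumed round numbers, not values measured from the high-speed imaging.

```python
import math

sigma = 0.059   # surface tension of water near saturation, N/m
rho = 958.0     # liquid water density near saturation, kg/m^3
d = 0.5e-3      # assumed bubble departure diameter, m
u = 0.3         # assumed liquid velocity induced by the collapse, m/s
dt = 1.0e-3     # assumed collapse duration, s

# Surface-tension contribution acting along the bubble contact line
f_surface = math.pi * d * sigma

# Momentum-change contribution: liquid of roughly the bubble volume decelerated over dt
volume = math.pi * d**3 / 6.0
f_momentum = rho * volume * u / dt

print(f"surface tension term ~ {f_surface:.1e} N")
print(f"momentum change term ~ {f_momentum:.1e} N")
print(f"total ~ {f_surface + f_momentum:.1e} N   (order of 1e-4 N)")
```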

Keywords: Subcooled boiling, vibration mechanism, bubble behavior.

539 The Wider Benefits of Negotiations: Austrian Perspective on Educational Leadership as a ‘Power Game’ for Trade Unions

Authors: Rudolf Egger

Abstract:

This paper explores the relationships between the basic learning processes of leading trade union workers and their methods for coping with the changes in the life-courses of societies today. It discusses the fragile discourse on lifelong learning in trade unions and the “production of self-techniques” used to come to terms with new economic forms. On the basis of an empirical project, different socialization processes of leading trade union workers are analysed to discover the consequences of the lifelong learning discourse. The results show what competences they need to develop for the “wider benefits of negotiations”. The main challenge remains to make visible how deeply intertwined trade union learning and education are with development in an ongoing, dynamic economic process, rather than being a quick-fix injection of skills and information. A complex relationship exists between the three “partners”: work, learning, and society. The author suggests that contemporary trade unions could be trendsetters who set their own learning agendas by drawing less on formal education and more on informal and non-formal learning contexts. This runs in parallel with a growing political and scientific consciousness of the need to arrive at new educational and vocational policies and practices.

Keywords: Lifelong learning, Trade unions, Non-formal learning, Educational/vocational policies.

538 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels; in this situation, both the IPD and the AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as doing so significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide a significant difference in the accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
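
The three performance measures used in the simulation study can be computed as follows: replicate estimates of a known true treatment effect are simulated, and percentage relative bias, root mean-square error, and coverage probability of nominal 95% confidence intervals are evaluated over the replicates. The true effect and standard errors below are arbitrary assumptions for illustration, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.50
n_rep = 2000

# Simulated replicate estimates and their standard errors (assumed sampling model)
se = 0.10
estimates = rng.normal(true_effect, se, size=n_rep)
ses = np.full(n_rep, se)

prb = 100.0 * (estimates.mean() - true_effect) / true_effect
rmse = np.sqrt(np.mean((estimates - true_effect) ** 2))
lower, upper = estimates - 1.96 * ses, estimates + 1.96 * ses
coverage = np.mean((lower <= true_effect) & (true_effect <= upper))

print(f"PRB = {prb:.2f}%  RMSE = {rmse:.3f}  coverage = {coverage:.3f}")
```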

Keywords: Aggregate data, combined-level data, individual patient data, meta-analysis.

537 Towards an Enhanced Stochastic Simulation Model for Risk Analysis in Highway Construction

Authors: Anshu Manik, William G. Buttlar, Kasthurirangan Gopalakrishnan

Abstract:

Over the years, there has been a growing trend towards quality-based specifications in highway construction. In many Quality Control/Quality Assurance (QC/QA) specifications, the contractor is primarily responsible for quality control of the process, whereas the highway agency is responsible for acceptance testing of the product. A cooperative investigation was conducted in Illinois over several years to develop a prototype End-Result Specification (ERS) for asphalt pavement construction. The final characteristics of the product are stipulated in the ERS, and the contractor is given considerable freedom in achieving those characteristics. The risk for the contractor or agency depends on how the acceptance limits and processes are specified. Stochastic simulation models are very useful for estimating and analyzing payment risk in ERS systems, and they form an integral part of the Illinois prototype ERS system. This paper describes the development of an innovative methodology to estimate the variability components in in-situ density, air voids, and asphalt content data from ERS projects. The information gained from this is crucial for simulating ERS projects to estimate and analyze the payment risks associated with asphalt pavement construction. However, these methods require at least two parties to conduct tests on all the split samples obtained according to the sampling scheme prescribed in the present ERS implemented in Illinois.

Keywords: Asphalt Pavement, Risk Analysis, Stochastic Simulation, QC/QA.

536 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston

Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando

Abstract:

The widespread adoption of the Smart City concept has introduced a new era of computing paradigms, with opportunities for city administrators and stakeholders in various sectors to rethink the concept of urbanization and the development of healthy cities. With the world population rapidly becoming urban-centric, especially in the emerging economies, social innovation will assist greatly in deploying emerging technologies to address the development challenges in core sectors of future cities. In this context, sustainable healthcare delivery and improved quality of life of the people are considered to be at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and an innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the healthy city agenda and eHealth economy innovation based on the experience of the City of Boston, Massachusetts. For this purpose, three emerging areas are emphasized, namely the eHealth concept, innovation hubs, and the emerging technologies that drive innovation. This was carried out through empirical analysis of the results of public-sector and industry-wide interviews and surveys about Boston’s current initiatives and the enabling environment. The paper highlights a few potential research directions for service integration and social innovation for deploying emerging technologies in the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy for building sustainable Smart Cities in order to avoid technology lock-in. Finally, it concludes that the Boston example of an innovation economy is unique in view of the existing platforms for innovation and a proper understanding of their dynamics, which is imperative for building smart and healthy cities where the quality of life of the citizenry can be improved.

Keywords: Smart city, social innovation, eHealth, innovation hubs, emerging technologies, equitable healthcare, healthy cities.

535 Investigating the Effect of Uncertainty on a LP Model of a Petrochemical Complex: Stability Analysis Approach

Authors: Abdallah Al-Shammari

Abstract:

This study discusses the effect of uncertainty on the production levels of a petrochemical complex. Uncertainty, or variation in some model parameters such as prices and the supply and demand of materials, can affect the optimality or the efficiency of any chemical process. For a petrochemical complex with many plants, there are many sources of uncertainty and frequent variations, which require particular attention. Many optimization approaches have been proposed in the literature to incorporate uncertainty within the model in order to obtain a robust solution. In this work, a stability analysis approach is applied to a deterministic LP model of a petrochemical complex consisting of ten plants to investigate the effect of such variations on the obtained optimal production levels. The proposed approach can determine the allowable variation ranges of some parameters, mainly objective-function or right-hand-side (RHS) coefficients, before the system loses its optimality. Parameters with relatively narrow ranges of variation, i.e. narrow stability limits, are classified as sensitive parameters or constraints that need accurate estimation or intensive monitoring. These stability limits offer easy-to-use information to the decision maker, help in understanding the interaction between model parameters, and indicate when the system needs to be re-optimized. The study shows that the maximum production of ethylene and the prices of intermediate products are the most sensitive factors affecting the stability of the optimal solution.
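
The stability-limit idea can be sketched with a small LP: perturb one objective coefficient over a grid, re-solve, and record the range over which the optimal production plan stays unchanged. The two-product model below is a toy example solved with SciPy's linprog, not the ten-plant complex of the study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: maximize 40*x1 + 30*x2 subject to resource limits (all values assumed)
A_ub = [[1.0, 1.0],    # shared feedstock
        [2.0, 1.0]]    # processing capacity
b_ub = [100.0, 160.0]

def solve(c1):
    # linprog minimizes, so negate the objective coefficients to maximize profit
    res = linprog(c=[-c1, -30.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    return np.round(res.x, 6)

base = solve(40.0)
stable = [c1 for c1 in np.arange(10.0, 80.0, 1.0) if np.allclose(solve(c1), base)]
print("base plan:", base)
print(f"price of product 1 can vary within [{min(stable)}, {max(stable)}] "
      "without changing the optimal production levels")
```

Coefficients whose stability interval is narrow would be flagged, in the spirit of the paper, as the ones needing accurate estimation or close monitoring.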

Keywords: Linear programming, Petrochemicals, stability analysis, uncertainty

534 Investigating the Dynamic Response of the Ballast

Authors: Osama Brinji, Wing Kong Chiu, Graham Tew

Abstract:

Understanding the stability of rail ballast is one of the most important aspects of railway engineering. An unstable track may cause issues such as unnecessary vibration and ultimately a loss of track quality. The track foundation plays an important role in the stabilization of the railway. The dynamic response of rail ballast in the vicinity of the sleeper can affect the stability of the rail track, and this has not been studied in detail. A review of the literature showed that most previous work focused on the area under the concrete sleeper. Although there are some theories about the shear (longitudinal) behaviour of rail ballast, these have not been properly studied and hence are not well understood. The stability of a rail track depends on the compactness of the ballast in its vicinity. This paper aims to determine the dynamic response of the ballast in order to identify its resonant behaviour. This preliminary research is one of several studies examining the vibration response of granular materials. The main aim is to use this information in the future design of sleepers to ensure that the dynamic response of the sleeper does not compromise the state of compactness of the ballast. The paper reports on the dependence of the damping and natural frequency of the ballast as a function of depth and of distance from the point of excitation, which is introduced through a concrete block. The concrete block is used to simulate a sleeper and the ballast is simulated with gravel. In spite of these approximations, the results presented in the paper show agreement with the theories and assumptions used in studying the mechanical behaviour of rail ballast.
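
Damping and natural frequency can be extracted from a measured decay trace using the logarithmic decrement. The sketch below generates a synthetic decaying oscillation and recovers both quantities from its successive peaks; the 30 Hz frequency and 5% damping are assumed values, not measurements from the concrete-block tests.

```python
import numpy as np

# Synthetic free-decay response (assumed parameters)
fn, zeta = 30.0, 0.05
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
t = np.linspace(0, 0.5, 5000)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

# Successive positive peaks of the decay trace
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]

# Logarithmic decrement between consecutive peaks -> damping ratio and natural frequency
delta = np.mean(np.log(x[peaks[:-1]] / x[peaks[1:]]))
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
fd_est = 1.0 / np.mean(np.diff(t[peaks]))          # damped frequency, Hz
fn_est = fd_est / np.sqrt(1 - zeta_est**2)

print(f"estimated damping ratio = {zeta_est:.3f}, natural frequency = {fn_est:.1f} Hz")
```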

Keywords: Ballast, dynamic response, sleeper, stability.

533 Recognizing an Individual, Their Topic of Conversation, and Cultural Background from 3D Body Movement

Authors: Gheida J. Shahrour, Martin J. Russell

Abstract:

The 3D body movement signals captured during human-human conversation include clues not only to the content of people’s communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects and arranged them into groups according to their culture. We arranged each group into pairs, and each pair communicated about different topics. A state-of-the-art recognition system is applied to the problems of person, culture, and topic recognition, borrowing modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building the three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy from the person, culture, and topic recognition systems, respectively. In addition, we combined the GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition, respectively. Although direct comparison among these three recognition systems is difficult, it seems that our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e. subjects’ personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy from the culture and topic recognition systems, respectively.
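
The GMM-based recognition stage can be sketched with scikit-learn: one Gaussian mixture is fitted per class on feature vectors, and a test sequence is assigned to the class whose mixture gives the highest total log-likelihood. The synthetic features below stand in for the 3D body-movement descriptors; this is not the corpus or the exact configuration used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training features for two classes (e.g. two subjects), 6-D frame vectors
train = {
    "subject_A": rng.normal(loc=0.0, scale=1.0, size=(500, 6)),
    "subject_B": rng.normal(loc=1.5, scale=1.2, size=(500, 6)),
}

# One GMM per class, as in a standard GMM classifier
models = {label: GaussianMixture(n_components=4, covariance_type="diag",
                                 random_state=0).fit(X)
          for label, X in train.items()}

# Score an unseen sequence of frames: class with the highest summed log-likelihood wins
test_seq = rng.normal(loc=1.4, scale=1.2, size=(200, 6))
scores = {label: gmm.score_samples(test_seq).sum() for label, gmm in models.items()}
print(max(scores, key=scores.get))   # -> "subject_B"
```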

Keywords: Person Recognition, Topic Recognition, Culture Recognition, 3D Body Movement Signals, Variability Compensation.

532 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson

Abstract:

Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities that contribute most to indoor air pollution; outdoor air pollution also affects IAQ. The most important factors determining IAQ during cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. IAQ was assessed, using LCMAQM sensors, during cooking activities in University of Minnesota graduate housing in order to evaluate different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particle metrics measured are particulate matter smaller than 2.5 micrometers (PM2.5) and lung-deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota, using an electric stove for cooking. The amount and type of food and oil used for cooking are the same for each measurement. There are six measurements: two experiments measure air quality without any ventilation, two use an extractor as mechanical ventilation, and two use the extractor with the windows open as combined mechanical and natural ventilation. The results of the experiments show that natural ventilation is the most efficient system for controlling particles and CO2: natural ventilation reduces concentrations by 79% for LDSA and 55% for PM2.5 compared to no ventilation, and the CO2 concentration is reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower than with natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions, which may be related to the fat content of the food. Further research is needed to assess the impact of fat in food on particle generation.
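
The well-mixed vessel (single-zone mass balance) model mentioned above relates the indoor concentration to a source term and a total loss rate from ventilation plus deposition. The sketch integrates that balance during and after a cooking event and recovers the loss rate from the log-linear decay; the room volume, emission rate, and loss rates are assumed values, not the apartment's measured parameters.

```python
import numpy as np

V = 40.0        # room volume, m^3 (assumed)
S = 2.0e9       # particle emission rate during cooking, particles/min (assumed)
lam = 0.8       # air exchange rate, 1/h (assumed)
k_dep = 0.3     # deposition loss rate, 1/h (assumed)
loss = (lam + k_dep) / 60.0          # total loss rate per minute

dt = 1.0                             # time step, min
t = np.arange(0, 180, dt)            # 3 hours
C = np.zeros_like(t, dtype=float)
for i in range(1, len(t)):
    source = S / V if t[i] <= 30 else 0.0          # cooking for the first 30 min
    # dC/dt = S/V - (lambda + k_dep) * C, explicit Euler step
    C[i] = C[i - 1] + dt * (source - loss * C[i - 1])

# Recover the total loss rate from the post-cooking decay (log-linear fit)
decay = (t > 40) & (C > 0)
slope = np.polyfit(t[decay], np.log(C[decay]), 1)[0]
print(f"fitted loss rate = {-slope * 60:.2f} 1/h (input was {lam + k_dep:.2f} 1/h)")
```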

Keywords: Cooking, indoor air quality, low-cost sensor, ventilation.

531 Simple Agents Benefit Only from Simple Brains

Authors: Valeri A. Makarov, Nazareth P. Castellanos, Manuel G. Velarde

Abstract:

In order to answer the general question “What does a simple agent with a limited life-time require for constructing a useful representation of the environment?”, we propose a robot platform including the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the navigational capabilities of the robot with different “brains”. We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increase in internal complexity and the reuse of minimal sensory information. We show that the most fundamental robot element, the short-time memory, is essential for obstacle avoidance; however, in the simplest conditions with no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for very short life-times the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short life-times do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this may mean that the control blocks of modern robots are too complicated relative to their life-time and mechanical abilities.

Keywords: Neural network, probabilistic control, robot navigation.

530 A Renovated Cook's Distance Based On The Buckley-James Estimate In Censored Regression

Authors: Nazrina Aziz, Dong Q. Wang

Abstract:

There have been various methods created based on regression ideas to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance, RD*_i, and has been developed based on Cook's idea. The renovated Cook's distance RD*_i has advantages (depending on the analyst's needs) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the coefficient estimates when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*_i in which information from the p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
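
For readers unfamiliar with the underlying idea, the classical (uncensored) Cook's distance that RD*_i renovates is computed from the residuals and leverages of an ordinary least-squares fit, as sketched below on synthetic data. The censored, Buckley-James-based version proposed in the paper additionally requires product-limit weighting and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix with intercept
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Ordinary least-squares fit
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
h = np.diag(H)                                # leverages
s2 = resid @ resid / (n - p)                  # residual variance estimate

# Classical Cook's distance: influence of deleting case i on all n fitted values
cooks_d = (resid**2 / (p * s2)) * h / (1 - h) ** 2
print("most influential cases:", np.argsort(cooks_d)[-3:][::-1])
```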

Keywords: Buckley-James estimators, censored regression, censored data, diagnostic analysis, product-limit estimator, renovated Cook's Distance.

529 Investigation in Physically-Chemical Parameters of in Latvia Harvested Conventional and Organic Triticale Grains

Authors: Solvita Kalnina, Tatjana Rakcejeva, Daiga Kunkulberga, Anda Linina

Abstract:

Triticale is a man-made hybrid of wheat and rye that carries the A and B genomes of durum wheat and the R genome of rye. In general, no information about the physically-chemical composition of organic and conventional triticale grains harvested in Latvia was found in the scientific literature. Therefore, the main purpose of the current research was to investigate the physically-chemical parameters of organic and conventional triticale grains harvested in Latvia. The research was carried out on organic and conventional triticale grains harvested in 2012 at the State Priekuli Plant Breeding Institute (Latvia): “Dinaro”, “9403-97”, “9405-23” and “9402-3”. In the present research, significant differences in chemical composition between organic and conventional triticale grains harvested in Latvia were found. It should be mentioned that the higher 1000-grain weight, bulk density, and gluten index were obtained for the conventional and organic triticale grain variety “9403-97”, whereas the higher falling number, gluten content, and protein content were obtained for the variety “9405-23”.

Keywords: Physically-chemical parameters, technological properties, triticale grains.

528 A Group Setting of IED in Microgrid Protection Management System

Authors: Jyh-Cherng Gu, Ming-Ta Yang, Chao-Fong Yan, Hsin-Yung Chung, Yung-Ruei Chang, Yih-Der Lee, Chen-Min Chan, Chia-Hao Hsu

Abstract:

A number of Distributed Generators (DGs) are installed in a microgrid, which may result in diverse paths and directions of power flow or fault current. The overcurrent protection scheme for the traditional radial distribution system will therefore no longer meet the needs of microgrid protection. Integrating Intelligent Electronic Devices (IEDs) and a Supervisory Control and Data Acquisition (SCADA) system with the IEC 61850 communication protocol, this paper proposes a Microgrid Protection Management System (MPMS) to protect the power system from faults. In the proposed method, the MPMS performs logic programming of each IED to coordinate their tripping sequence, and the GOOSE message defined in IEC 61850 is used as the transmission medium among the IEDs. Moreover, to cope with the difference in the fault currents of the microgrid between grid-connected mode and islanded mode, the proposed MPMS applies the group setting feature of the IEDs to give the protection system robust adaptability. Whenever the microgrid topology varies, the MPMS recalculates the fault currents and updates the group settings of the IEDs, and when a fault occurs, the IEDs isolate it at once. Finally, Matlab/Simulink and Elipse Power Studio software are used to simulate and demonstrate the feasibility of the proposed method.

Keywords: IEC 61850, IED, Group Setting, Microgrid.

527 Effect of Transmission Codes on Hybrid SC/MRC Diversity Reception MQAM system over Rayleigh Fading Channels

Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal

Abstract:

In this paper, the effect of transmission codes on the performance of coherent square M-ary quadrature amplitude modulation (CSMQAM) under hybrid selection/maximal-ratio combining (H-S/MRC) diversity is analysed. The fading channels are modeled as frequency non-selective, slow, independent and identically distributed Rayleigh fading channels corrupted by additive white Gaussian noise (AWGN). The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM under H-S/MRC diversity by plotting error probabilities versus average signal-to-noise ratio (SNR) for various values of L and N, in order to examine the improvement in the performance of the digital communication system as the number of selected diversity branches is increased. The results for no diversity, conventional SC, and Lth-order MRC schemes are also plotted for comparison. The closed-form analytical results derived in this paper are sufficiently simple and can therefore be computed numerically without any approximations. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems over wireless fading channels.

Keywords: Error probability, diversity reception, Rayleigh fading channels, wireless digital communications.

526 Analysis of Precipitation Time Series of Urban Centers of Northeastern Brazil using Wavelet Transform

Authors: Celso A. G. Santos, Paula K. M. M. Freire

Abstract:

The urban centers of northeastern Brazil are strongly affected by intense rainfalls, which can occur after long periods of drought and during which flood events can be observed. Thus, this paper aims to study the rainfall frequencies in this region through the wavelet transform. Wavelet analysis is applied to long time series of the total monthly rainfall amount at the capital cities of northeastern Brazil. The main frequency components in the time series are studied via the global wavelet spectrum, and the modulation in separate periodicity bands is examined in order to extract additional information; e.g., the 8-16 month band was examined by averaging over all scales in the band, giving a measure of the average annual variance versus time in which periods of low or high variance could be identified. Important increases in the average variance were identified for some periods, e.g. 1947 to 1952 at Teresina, which can be considered high-rainfall (wet) periods. Although the precipitation at these sites showed similar global wavelet spectra, the individual wavelet spectra revealed particular features. This approach can be considered an important tool for time series analysis and can support studies concerning flood control, mainly when applied together with rainfall-runoff simulations.
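
The scale-averaged variance described above can be computed from a continuous wavelet transform of the monthly series. The sketch below assumes the PyWavelets package is available and uses a Morlet wavelet on a synthetic rainfall-like series, averaging the wavelet power over the scales corresponding to periods of roughly 8-16 months; the series itself is synthetic, not the station data analysed in the paper.

```python
import numpy as np
import pywt

# Synthetic monthly "rainfall" series: annual cycle plus noise (illustrative only)
months = np.arange(600)
rain = (100 + 60 * np.sin(2 * np.pi * months / 12)
        + 20 * np.random.default_rng(0).normal(size=600))

scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(rain - rain.mean(), scales, "morl", sampling_period=1.0)  # 1 month
power = np.abs(coeffs) ** 2
periods = 1.0 / freqs                        # in months

# Global wavelet spectrum: time-average of power at each scale
global_spectrum = power.mean(axis=1)

# Scale-averaged variance in the 8-16 month band: average power over those scales vs time
band = (periods >= 8) & (periods <= 16)
band_variance = power[band].mean(axis=0)

print("dominant period ~", round(float(periods[np.argmax(global_spectrum)]), 1), "months")
print("band-averaged variance series length:", band_variance.shape[0])
```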

Keywords: rainfall data, urban center, wavelet transform.

525 Socio-Technical Systems: Transforming Theory into Practice

Authors: L. Ngowi, N. H. Mvungi

Abstract:

This paper critically examines the evolution of socio-technical systems theory, its practices, and the challenges it faces in system design and development. It examines concepts put forward by researchers focusing on the application of the theory in software engineering. Various methods that use socio-technical concepts based on systems engineering have been developed, without remarkable success. The main constraints are the large amount of data and the inefficient techniques used when applying the concepts in systems engineering to develop systems on time and within a limited, controlled budget. This paper critically examines each of the methods, highlights bottlenecks, and suggests the way forward. Since socio-technical systems theory only explains what to do, but not how to do it, engineers are not using the concept to save time and costs or to reduce the risks associated with new frameworks. Hence, a new framework, which can be considered a practical approach, is proposed; it borrows concepts from the soft systems method, agile systems development, and object-oriented analysis and design to bridge the gap between theory and practice. The approach will enable systems engineers and software developers to apply socio-technical systems theory in building worthwhile information systems and to avoid fragilities and hostilities in the work environment.

Keywords: Socio-technical systems, human centered design, software engineering, cognitive engineering, soft systems, systems engineering.

524 Evaluating Urban Land Expansion Using Geographic Information System and Remote Sensing in Kabul City, Afghanistan

Authors: Ahmad Sharif Ahmadi, Yoshitaka Kajita

Abstract:

With massive population growth and fast economic development in the last decade, urban land has expanded rapidly and formed large areas of informal development in Kabul city. This paper investigates integrated urbanization trends in Kabul city since the formation of the basic structure of the present city, using GIS and remote sensing. The study explores the spatial and temporal differences in urban land expansion and land use categories between the time intervals 1964-1978 and 1978-2008. Furthermore, the goal of this paper is to understand the extent of urban land expansion and the factors driving it in Kabul city. Many factors, such as population growth, the return of refugees from neighboring countries, and the significant economic growth of the city, affected urban land expansion. Across the study area, the urban land expansion rate, population growth rate, and economic growth rate have been compared in order to analyze the relationship between these driving forces and urban land expansion. Based on urban land change data detected by interpreting land use maps, it was found that across the entire study area the urban territory expanded by a factor of 14 between 1964 and 2008.
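
The 14-fold expansion over the 44-year study period translates into an equivalent average annual expansion rate, which is the kind of figure typically compared against population and economic growth rates. A quick check of that arithmetic:

```python
expansion_factor = 14.0      # urban area in 2008 relative to 1964 (from the study)
years = 2008 - 1964

annual_rate = expansion_factor ** (1.0 / years) - 1.0
print(f"equivalent average annual urban land expansion rate ~ {annual_rate:.1%}")
# -> roughly 6% per year, comparable against annual population and economic growth rates
```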

Keywords: GIS, Kabul city, land use, urban land expansion, urbanization.
