Search results for: calculation tasks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1122

102 Stereo Motion Tracking

Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer

Abstract:

Motion Tracking and Stereo Vision are complicated, albeit well-understood problems in computer vision. Existing software packages that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy that combines two-dimensional motion tracking using a Kalman filter with depth estimation of the object using Stereo Vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For Stereo Motion Tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high-security surveillance scenes such as the premises of bank vaults, prisons or other detention facilities, as well as in low-cost applications in supermarkets and car parking lots.
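
The abstract combines two standard building blocks: a constant-velocity Kalman filter for in-plane tracking and triangulation of depth from a calibrated stereo pair. The sketch below illustrates both steps in Python; the focal length, baseline, frame rate and noise covariances are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the two ideas combined in the abstract: depth from a
# calibrated stereo pair plus a constant-velocity Kalman filter for the
# in-plane motion. Focal length, baseline and noise levels are assumptions.

def stereo_depth(x_left, x_right, focal_px=700.0, baseline_m=0.12):
    """Depth of a point from its horizontal pixel coordinates in both views."""
    disparity = x_left - x_right          # pixels
    return focal_px * baseline_m / disparity

# Constant-velocity Kalman filter over the image-plane position (x, y).
dt = 1.0 / 30.0                            # frame interval
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is measured
Q = np.eye(4) * 1e-2                       # process noise (assumed)
R = np.eye(2) * 2.0                        # measurement noise (assumed)

x = np.zeros(4)                            # [px, py, vx, vy]
P = np.eye(4) * 100.0

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured image-plane position z = [px, py]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: one frame with matched detections in both cameras.
left_px, right_px = (320.0, 240.0), (300.0, 240.0)
x, P = kalman_step(x, P, np.array(left_px))
depth = stereo_depth(left_px[0], right_px[0])
print(f"filtered position: {x[:2]}, depth: {depth:.2f} m")
```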

Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.

PDF Downloads: 2822
101 Managing Iterations in Product Design and Development

Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De

Abstract:

The inherent iterative nature of product design and development poses a significant challenge to reducing product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater amounts of ambiguity in design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand iteration probability is an open research area where significant contributions can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.

Keywords: Decision Points, Iteration, Product Design, Rework.

PDF Downloads: 2192
100 The Importance of Zakat in Struggle against Circle of Poverty and Income Redistribution

Authors: Hasan Bulent Kantarcı

Abstract:

This paper examines how “zakat” provides fair income redistribution and aids the struggle against poverty. Providing fair income redistribution and combating poverty constitute some of the fundamental tasks performed by countries all over the world. Each country seeks a solution to these problems according to its political, economic and administrative style through applying various economic and financial policies. The same situation can be handled through the “zakat” institution in Islam. Nowadays, we observe different versions of “zakat” in developed countries. Applications such as the negative income tax denote merely a different form of “zakat” that is applied in almost the same way but under a changed name. However, the minimum values above which zakat is due (e.g. 85 g of gold or 40 animals) have been altered, and various amounts are put into practice. It might be named negative income tax instead of zakat; nonetheless, these applications are based on the Holy Koran and the hadith revealed 1400 years ago. Besides, considering the savagery and slavery in the world at that time, we can easily recognize the true value of zakat as it was applied for the first time in the Islamic system. Through zakat, governments are able to transfer income to the poor as a means of enabling them to achieve the minimum required standard of living. With regard to who benefits from zakat, objective and fair criteria were used to determine the beneficiaries, contrary to the notion that it was based on people’s own choices. Since zakat is obligatory, the transfers are not forwarded directly but are distributed via the government, which requires vast governmental organizations. Through the application of zakat, reduced levels of poverty can be achieved and fair income redistribution can be ensured.

Keywords: Cycle of poverty, Islamic finance, income redistribution, zakat.

PDF Downloads: 2306
99 Thermal Analysis of a Transport Refrigeration Power Pack Unit Using a Coupled 1D/3D Simulation Approach

Authors: A. Kospach, A. Mladek, M. Waltenberger, F. Schilling

Abstract:

In this work, a coupled 1D/3D simulation approach for the thermal protection and optimization of a trailer refrigeration power pack unit was developed. With the developed 1D/3D simulation approach, thermally critical scenarios, such as summer high-load scenarios, are investigated. The 1D thermal model consists of the thermal network, which includes different point masses and the associated heat transfers, the coolant and oil circuits, as well as the fan unit. The 3D computational fluid dynamics (CFD) model was developed to model the air flow through the power pack unit considering convective heat transfer effects. In the 1D thermal model, the temperatures of the individual point masses were calculated, which served as input variables for the 3D CFD model. For the calculation of the point mass temperatures in the 1D thermal model, the convective heat transfer rates from the 3D CFD model were required as input variables. These two variables (point mass temperatures and convective heat transfer rates) were the main coupling variables for the coupled 1D/3D simulation model. The coupled 1D/3D model was validated with measurements under normal operating conditions. Coupled simulations for the summer high-load case were then performed and compared with a reference case under normal operating conditions. Hot temperature regions and components could be identified. Due to the detailed information about the flow field, temperatures and heat fluxes, it was possible to directly derive improvement suggestions for the cooling design of the transport refrigeration power pack unit.
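
The coupling logic described above (point-mass temperatures passed to the CFD model, convective heat transfer rates passed back to the thermal network) can be illustrated with a strongly simplified co-simulation loop. The sketch below is not the authors' model; all masses, coefficients and heat loads are assumed placeholder values, and the CFD step is replaced by a surrogate function.

```python
import numpy as np

# Highly simplified sketch of a 1D/3D coupling loop: a lumped (1D) thermal
# network updates point-mass temperatures from the convective heat transfer
# supplied by the flow solver, while a stand-in for the 3D CFD step maps
# temperatures back to convective heat flows. All values are assumptions.

masses = np.array([50.0, 20.0, 5.0])        # kg, point masses
cp = 900.0                                  # J/(kg K), assumed
T = np.array([40.0, 60.0, 80.0])            # deg C, initial temperatures
T_air = 35.0                                # deg C, cooling air
hA = np.array([15.0, 10.0, 8.0])            # W/K, stand-in for CFD result
P_heat = np.array([200.0, 400.0, 150.0])    # W, internal heat sources
dt = 1.0                                    # s, coupling time step

def cfd_surrogate(T):
    """Placeholder for the 3D CFD call: returns convective heat flows [W]."""
    return hA * (T - T_air)

for step in range(3600):
    Q_conv = cfd_surrogate(T)               # 3D -> 1D: convective heat rates
    dT = (P_heat - Q_conv) * dt / (masses * cp)
    T = T + dT                              # 1D -> 3D: new point-mass temps

print("steady point-mass temperatures [deg C]:", np.round(T, 1))
```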

Keywords: Coupled thermal simulation, thermal analysis, transport refrigeration unit, 3D computational fluid dynamics, 1D thermal modelling, thermal management systems.

PDF Downloads: 206
98 Primary Level Teachers’ Response to Gender Representation in Textbook Contents

Authors: Pragya Paneru

Abstract:

This paper explores the views of 10 primary-level teachers on gender representation in primary-level textbooks. Data were collected from teachers who taught in private schools in the Kailali and Kathmandu districts. The research uses a semi-structured interview method to obtain information regarding teachers’ attitudes toward gender representation in textbook contents. The interview data were analysed using the critical skills of qualitative research. The findings revealed that most of the teachers were unaware of gender issues and regarded them as insignificant to discuss in primary-level classes. Most of them responded to the questions personally and claimed that there were no gender issues in their classrooms. Some of the teachers connected gender issues with contexts other than textbook representation, such as discrimination in the distribution of salary among male and female teachers, school practices of awarding girls rather than boys as the most disciplined students, following a girls-first rule in assembly marching, encouraging only girls in stage shows, and involving students in gender-specific activities such as decorating work for girls and physical tasks for boys. The interviews also revealed teachers’ covert gendered attitudes in their remarks. Nevertheless, most of the teachers accepted that gender-biased content has an impact on learners and that this problem can be addressed with more gender-centred research in the education field, discussions, and training to increase awareness of gender issues. Agreeing with the teachers’ suggestions, this paper recommends proper training and awareness-raising regarding how to confront gender issues in textbooks.

Keywords: Content analysis, gender equality, school education, critical awareness.

PDF Downloads: 243
97 Teacher Training Course: Conflict Resolution through Mediation

Authors: Csilla M. Szabó

Abstract:

In Hungary, society has changed considerably over the past 25 years, and these changes can be detected in educational situations as well. The number and intensity of conflicts have increased in most fields of life, including schools. Teachers find it difficult to handle school conflicts. What is more, the new net generation, Generation Z, has values and behavioural patterns different from those of the previous generation, which might generate more serious conflicts at school, especially with teachers who were socialised mainly in a traditional teacher-student relationship. In Hungary, bill CCIV of 2011 declared the foundation of Institutes of Teacher Training in higher education institutions. One of the tasks of these Institutes is to survey the competences and needs of teachers working in public education and to provide further training and services for them according to their needs and requirements. This work is supported by the Social Renewal Operative Programme 4.1.2.B. The professors of a college designed a questionnaire and surveyed the needs and requirements of teachers working in the region. Based on the results, the professors of the Institute of Teacher Training decided to meet the teachers’ requirements and to launch short further-training courses for teachers in spring 2015. One of the courses will focus on school conflict management through mediation. The aim of the pilot course is to provide conflict management techniques for teachers and to present different mediation techniques to them. The theoretical part of the course (5 hours) will enable participants to understand the main points and advantages of mediation, while the practical part (10 hours) will involve teachers in role plays to learn how to cope with conflict situations by applying mediation. We hope that if conflicts can be reduced, the school atmosphere will be influenced in a positive way and the teaching-learning process will become more successful and effective.

Keywords: Conflict resolution, generation Z, mediation, teacher training.

PDF Downloads: 1735
96 Lateral Torsional Buckling Resistance of Trapezoidally Corrugated Web Girders

Authors: Annamária Käferné Rácz, Bence Jáger, Balázs Kövesdi, László Dunai

Abstract:

Due to the numerous advantages of steel corrugated web girders, their field of application is growing for bridges as well as for buildings. The global stability resistance of such girders is significantly larger than that of conventional I-girders with flat webs; thus, the amount of structural steel material can be significantly reduced. Design codes and specifications do not provide clear and complete rules or recommendations for the determination of the lateral torsional buckling (LTB) resistance of corrugated web girders. Therefore, the authors carried out a thorough investigation of the LTB resistance of corrugated web girders. Finite element (FE) simulations have been performed to develop new design formulas for the determination of the LTB resistance of trapezoidally corrugated web girders. The FE model is developed using geometrically and materially nonlinear analysis with equivalent geometric imperfections (GMNI analysis). The equivalent geometric imperfections account for the initial geometric imperfections and the residual stresses coming from rolling, welding and flame cutting. An imperfection sensitivity analysis was performed to determine the necessary magnitudes, considering only the first eigenmode shape imperfections. With the help of the validated FE model, an extended parametric study was carried out to investigate the LTB resistance for different trapezoidal corrugation profiles. First, the critical moment of a specific girder was calculated by the FE model. The critical moments from the FE calculations are compared to previous analytical calculation proposals. Then, nonlinear analysis was carried out to determine the ultimate resistance. Based on the numerical investigations, new proposals are developed for the determination of the LTB resistance of trapezoidally corrugated web girders through a modification factor on the design method used for conventional flat web girders.
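
For context, the flat-web design check that such a modification factor would act on can be written in Eurocode-style notation; this is the conventional background, not the paper's new formula.

```latex
\[
  \bar{\lambda}_{LT} = \sqrt{\frac{W_y\, f_y}{M_{cr}}}, \qquad
  M_{b,Rd} = \chi_{LT}\!\left(\bar{\lambda}_{LT}\right)\,\frac{W_y\, f_y}{\gamma_{M1}}
\]
```

Here M_cr is the elastic critical moment (obtained in the paper from the FE model), χ_LT the buckling reduction factor and γ_M1 the partial safety factor; the proposed corrugation-dependent modification factor would enter this type of check.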

Keywords: Critical moment, FE modeling, lateral torsional buckling, trapezoidally corrugated web girders.

PDF Downloads: 1204
95 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration

Authors: H. B. Kekre, Sudeep D. Thepade

Abstract:

The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible options for obtaining image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry. Thus, they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies such as rotation and scaling are bound to occur. To nullify the effect of rotation of partial images on the vista creation process, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce missing regions in the vista. To correct this error, that is, to fill the missing regions, we apply an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The missing view regeneration method generates the missing view of the vista using the information present in the vista itself.
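
A minimal version of this pipeline (feature-based estimation of the rotation between partial images, warping into a common frame, and inpainting of the regions left uncovered) can be sketched with OpenCV as below. This is not the authors' algorithm; the file names are placeholders, and the feature and inpainting choices (ORB, Telea inpainting) are assumptions made only for illustration.

```python
import cv2
import numpy as np

# Sketch: estimate the rotation/translation between two overlapping partial
# images, warp one onto the other, and fill the regions left empty by the
# warp using inpainting. File names are placeholders.

img_a = cv2.imread("part_a.png")            # reference partial image
img_b = cv2.imread("part_b.png")            # rotated partial image

# 1. Match features to estimate a similarity transform (rotation invariant).
orb = cv2.ORB_create(1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_b, des_a)
src = np.float32([kp_b[m.queryIdx].pt for m in matches])
dst = np.float32([kp_a[m.trainIdx].pt for m in matches])
M, _ = cv2.estimateAffinePartial2D(src, dst)   # rotation + translation + scale

# 2. Warp the rotated part into the reference frame and blend by overwriting.
h, w = img_a.shape[:2]
warped_b = cv2.warpAffine(img_b, M, (w, h))
vista = np.where(warped_b > 0, warped_b, img_a)

# 3. Regenerate missing regions (covered by neither part) by inpainting.
missing = np.all(vista == 0, axis=2).astype(np.uint8) * 255
vista_filled = cv2.inpaint(vista, missing, 3, cv2.INPAINT_TELEA)
cv2.imwrite("vista.png", vista_filled)
```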

Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.

PDF Downloads: 1723
94 Application of Particle Image Velocimetry in the Analysis of Scale Effects in Granular Soil

Authors: Zuhair Kadhim Jahanger, S. Joseph Antony

Abstract:

Studies in the literature that systematically deal with the scale effects of strip footings on different sand packings remain scarce. In this research, the variation of the ultimate bearing capacity and the deformation pattern of the soil beneath strip footings of different widths under plane-strain conditions on the surface of loose, medium-dense and dense sand have been systematically studied using experimental and non-invasive methods for measuring microscopic deformations. The presented analyses are based on model-scale compression tests analysed using the Particle Image Velocimetry (PIV) technique. The upper bound analysis of the current study shows that the maximum vertical displacement of the sand under the ultimate load increases with the width of the footing, but at a decreasing rate with the relative density of the sand, whereas the relative vertical displacement in the sand decreases with increasing footing width. Good agreement is observed between the experimental results for different footing widths and relative densities. The experimental analyses have shown that a pronounced scale effect exists for strip surface footings. The bearing capacity factors rapidly decrease up to footing widths of B = 0.25 m, 0.35 m and 0.65 m for loose, medium-dense and dense sand respectively; beyond these widths there is no significant further decrease in the bearing capacity factors. The deformation modes of the soil as well as the ultimate bearing capacity values are affected by the footing width. The obtained results could be used to improve settlement calculations for foundations interacting with granular soil.
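
As background to the reported scale effect, the textbook bearing capacity relation for a surface strip footing on cohesionless sand is shown below; a bearing capacity factor that decreases with footing width B is precisely the scale effect observed in the PIV tests.

```latex
\[
  q_u = \tfrac{1}{2}\,\gamma\, B\, N_{\gamma}
\]
```

Here γ is the unit weight of the sand, B the footing width and N_γ the bearing capacity factor; with no surcharge and no cohesion, the self-weight term governs, so a width-dependent N_γ directly translates into the non-proportional growth of q_u with B reported above.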

Keywords: PIV, granular mechanics, scale effect, upper bound analysis.

PDF Downloads: 1009
93 Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were carried out to examine the structural behavior of precast foamed concrete sandwich panels (PFCSP) under vertical in-plane shear load. Six full-scale PFCSP specimens were fabricated with varying heights to study the effect of an important parameter, the slenderness ratio (H/t). The production technique of the PFCSP and the test setup procedure are described. The results obtained from the experimental tests were analysed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns and mode of failure. An analytical study using finite element analysis was implemented, and theoretical calculations of the ultimate in-plane shear strengths using the adopted ACI 318 equation for reinforced concrete walls were performed to predict the in-plane shear strength of the PFCSP. A decrease in slenderness ratio from 24 to 14 resulted in an increase of 26.51% and 21.91% in the ultimate in-plane shear strength capacity as obtained experimentally and in the FEA models, respectively. The experimental test results, FEA model data and theoretical calculation values were compared and showed good agreement with a high degree of accuracy. Therefore, on the basis of the results obtained, the PFCSP wall has potential for use as an alternative to the conventional load-bearing wall system.
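
For reference, a commonly cited ACI 318 form of the nominal in-plane shear strength of reinforced concrete structural walls is given below; the abstract does not reproduce the exact adopted expression, so this is shown only as context for the theoretical comparison.

```latex
\[
  V_n = A_{cv}\left(\alpha_c\,\lambda\,\sqrt{f'_c} + \rho_t\, f_y\right)
\]
```

Here A_cv is the gross shear area of the wall web, α_c a coefficient depending on the wall aspect ratio h_w/l_w, λ the lightweight-concrete factor, ρ_t the transverse reinforcement ratio and f_y its yield strength.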

Keywords: Deflection profiles, foamed concrete, load-strain relationships, precast foamed concrete sandwich panel, slenderness ratio, vertical in-plane shear strength capacity.

PDF Downloads: 2647
92 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic

Authors: Aneta Oblouková, Eva Vítková

Abstract:

The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research is specifically focused on the analysis of the development of the average prices of the water and sewerage charge rate in the Czech Republic in 1994-2021 and on the validation of the chosen methodology for predicting the development of these average prices. The research is based on data obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating the predicted average prices of water and sewerage charge rates. The real values of the average prices of water and sewerage charge rates in the Czech Republic in 1994-2018 were obtained from the Czech Statistical Office and converted into a mathematical equation. The same type of real data was obtained from the Czech Statistical Office for 2019-2021. The prediction of the average prices of water and sewerage charge rates in the Czech Republic in 2019-2021 was then calculated using the chosen method, the linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were subsequently compared. The research result is a validation of the linear trend estimation technique as a suitable technique for this purpose.
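
A minimal sketch of the validation procedure is shown below: fit a linear trend to the 1994-2018 series and compare the extrapolated 2019-2021 values with the observed ones. The price arrays are placeholders, not the Czech Statistical Office data.

```python
import numpy as np

# Fit price = a + b*year to the calibration period, extrapolate, and compare
# with the published values. All price values below are placeholders.

years_fit = np.arange(1994, 2019)
prices_fit = np.linspace(10.0, 88.0, years_fit.size)      # placeholder CZK/m3

b, a = np.polyfit(years_fit, prices_fit, 1)                # slope, intercept

years_test = np.array([2019, 2020, 2021])
predicted = a + b * years_test
observed = np.array([90.0, 93.0, 96.0])                    # placeholder values

for y, p, o in zip(years_test, predicted, observed):
    print(f"{y}: predicted {p:.1f}, observed {o:.1f}, error {p - o:+.1f}")
```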

Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate.

PDF Downloads: 202
91 Sperm Whale Signal Analysis: Comparison using the Auto Regressive model and the Daubechies 15 Wavelets Transform

Authors: Olivier Adam, Maciej Lopatka, Christophe Laplanche, Jean-François Motsch

Abstract:

This article presents results obtained using a parametric approach and a Wavelet Transform in analysing signals emitted by the sperm whale. The extraction of the intrinsic characteristics of these unique signals emitted by marine mammals is still a difficult exercise for various reasons: firstly, they are non-stationary signals, and secondly, these signals are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of both methods: Auto Regressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, for which the hypotheses necessary for its application are, in certain cases, not sufficiently satisfied. These modern approaches provide effective results, particularly for the periodic tracking of the signal's characteristics, and notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. Our first goal is to identify the animal through its acoustic signature. This includes recognition of the marine mammal species and ultimately of the individual animal (within the species). The second is much more ambitious and directly involves the intervention of cetologists to study the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals, and the findings from these data result from the Wavelet Transform. This article explores the reasons for using this approach. In addition, thanks to the use of new processors, these algorithms, once heavy in calculation time, can now be integrated into a real-time system.
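
The two signal representations being compared can be illustrated on a synthetic transient, as below; the AR order, decomposition level and the toy signal itself are assumptions for illustration, not the actual sperm whale recordings or the authors' processing chain.

```python
import numpy as np
import pywt

# Contrast of the two representations on a synthetic click-like signal:
# a least-squares autoregressive fit and a Daubechies-15 wavelet decomposition.

fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
signal = np.exp(-t * 800) * np.sin(2 * np.pi * 9_000 * t)   # toy transient
signal += 0.05 * np.random.randn(t.size)                    # background noise

# 1. Autoregressive model of (assumed) order p, fitted by least squares.
p = 12
X = np.column_stack([signal[p - k - 1 : -k - 1] for k in range(p)])
y = signal[p:]
ar_coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2. Daubechies-15 wavelet decomposition (level chosen for illustration).
coeffs = pywt.wavedec(signal, "db15", level=4)

print("AR coefficients:", np.round(ar_coeffs, 3))
print("wavelet sub-band lengths:", [len(c) for c in coeffs])
```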

Keywords: Autoregressive model, Daubechies Wavelet, Fourier Transform, marine mammals, signal processing, spectrogram, sperm whale, Wavelet Transform.

PDF Downloads: 2005
90 The Impact of Regulatory Changes on the Development of Mobile Medical Apps

Authors: M. McHugh, D. Lillis

Abstract:

Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by using a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. BMI calculation or accessing reference material; however, increasing complexity now offers clinicians and patients a wide range of functionality. As this complexity and functionality increases, so too does the potential risk associated with using such an application. Examples include applications that provide the ability to inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosage or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, then the organization faces significant penalties, such as receiving an FDA warning letter to cease the prohibited activity, fines and the possibility of a criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation, and as such there is the potential for these regulations to stifle further improvements. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.

Keywords: Medical, mobile, applications, software engineering, FDA, standards, regulations, agile.

PDF Downloads: 2063
89 Structure of the Working Time of Nurses in Emergency Departments in Polish Hospitals

Authors: Jadwiga Klukow, Anna Ksykiewicz-Dorota

Abstract:

An analysis of the distribution of nurses’ working time constitutes vital information for management in planning employment. The objective of the study was to analyze the distribution of nurses’ working time in an emergency department. The study was conducted in the emergency department of a teaching hospital in Lublin, in southeast Poland. The catalogue of activities performed by nurses was compiled by means of continuous observation. Identified activities were classified into four groups: direct care, indirect care, coordination of work in the department, and personal activities. The distribution of nurses’ working time was determined by work sampling observation (Tippett) at random intervals. The research project was approved by the Research Ethics Committee of the Medical University of Lublin (Protocol 0254/113/2010). On average, nurses spent 31% of their working time on direct care, 47% on indirect care, 12% on coordinating work in the department and 10% on personal activities. The most frequently performed direct care tasks were diagnostic activities (29.23%) and treatment-related activities (27.69%). The study has provided information on the complexity of the performed activities and the utilization of nurses’ working time. Enhancing the effectiveness of nursing actions requires working out a strategy for improved management of the time nurses spend at work. Increasing the involvement of auxiliary staff and optimizing communication processes within the team may lead to a reduction of the time devoted to indirect care for the benefit of direct care.

Keywords: Emergency nurses, nursing care, workload, work sampling.

PDF Downloads: 1492
88 Distinctive Features of Legal Relations in the Area of Subsoil Use, Renewal and Protection in Ukraine

Authors: N. Maksimentseva

Abstract:

The issue of public administration in subsoil use, renewal and protection is of high importance for Ukraine, since it is strongly linked to the energy security of the state and should enable the people of Ukraine to efficiently exercise their proprietary rights over natural resources and the redistribution of national wealth. As stipulated in Article 11 of the Subsoil Code of Ukraine (the Code), the authorities that administer the industry are limited to central executive bodies and local governments. In particular, the Code stipulates that Ukraine’s Cabinet of Ministers carries out public administration in geological exploration, production and protection of subsoil. Other state bodies of public administration include the central public authority responsible for state environmental protection policies; the central public authority in charge of implementing state geological exploration and efficient subsoil use policies; and the central authority in charge of state health and safety control policies. There are also public authorities in the Autonomous Republic of Crimea, local executive bodies and other state authorities, and local self-government authorities, in compliance with the laws of Ukraine. This article is devoted to the analysis of legal relations in the area of public administration of subsoil use, renewal and protection in Ukraine. The main approaches to studying the essence of legal relations in the named area, as well as its tasks, functions and methods, are analyzed. It is concluded that legal relationships in the field of public administration of subsoil use, renewal and protection are characterized by the specifics of their task (the development of natural resources).

Keywords: Legal relations, public administration, Subsoil Code of Ukraine, subsoil use, renewal and protection.

PDF Downloads: 1093
87 Identification of Spam Keywords Using Hierarchical Category in C2C E-commerce

Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park

Abstract:

Consumer-to-Consumer (C2C) E-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another by relying on keyword search in C2C E-commerce, some sellers describe their products with spam keywords that are popular but not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services such as eBay and Taobao, but there has been little research aimed at solving it. As a solution to this problem, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is in general determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined differently for each layer. Hence, reliability degrees from different layers of a hierarchical category become features for keywords, and they are used together with features derived only from specifications for the classification of the keywords. Support Vector Machines are adopted as the basic classifier using these features, since they are powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated on a gold-standard dataset from Yi-han-wang, a Chinese C2C E-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified using the hierarchical category in C2C E-commerce.
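
The classification step can be sketched as follows: each keyword is described by reliability degrees computed at several layers of the hierarchical category plus specification-level features, and an SVM separates spam from legitimate keywords. The feature values and labels below are invented placeholders, not the evaluation data.

```python
import numpy as np
from sklearn.svm import SVC

# Toy sketch of keyword classification from layer-wise reliability degrees.
# columns: [reliability_leaf, reliability_mid, reliability_top, spec_match]
X = np.array([
    [0.90, 0.80, 0.70, 1.0],   # keyword common in same-product specs -> ham
    [0.85, 0.75, 0.60, 1.0],
    [0.05, 0.10, 0.40, 0.0],   # popular but unrelated keyword -> spam
    [0.02, 0.08, 0.35, 0.0],
])
y = np.array([0, 0, 1, 1])     # 1 = spam keyword

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict([[0.04, 0.12, 0.38, 0.0]]))   # likely classified as spam
```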

Keywords: Spam Keyword, E-commerce, keyword features, spam filtering.

PDF Downloads: 2508
86 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation obtained by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph that consists of a set of nodes and branches. In fact, this problem can be viewed as the problem of determining an optimal tree of the graph which simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, and some improvements to the algorithm are made in the chromosome coding. An implementation of the algorithm presented in [7] is also applied, with modifications to the load flow program, and this method is compared with the proposed method. In [7] an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules. That algorithm reduces the number of load flow runs, reduces the switching combinations to a smaller number and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB are carried out on the 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] as well as other methods.
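
The core quantity being minimized is the resistive loss of a candidate radial topology. A toy version of such a fitness evaluation is sketched below; it approximates branch losses as 3·I²·R from assumed loadings rather than running the full load flow used in the paper, and the branch data are illustrative placeholders.

```python
import numpy as np

# Approximate loss evaluation for one candidate radial configuration:
# each closed branch dissipates roughly 3 * I^2 * R with I ~ S / (sqrt(3)*V).

V_KV = 12.66                      # nominal line voltage of the 33-bus system

# (branch id, R in ohms, apparent power carried in kVA) for closed branches
closed_branches = [
    ("b1", 0.0922, 3800.0),
    ("b2", 0.4930, 3400.0),
    ("b3", 0.3660, 2900.0),
]

def configuration_loss_kw(branches, v_kv=V_KV):
    """Approximate total resistive loss of one candidate topology, in kW."""
    total = 0.0
    for _, r_ohm, s_kva in branches:
        i_amp = s_kva / (np.sqrt(3) * v_kv)     # branch current, A
        total += 3.0 * i_amp**2 * r_ohm / 1000.0
    return total

print(f"losses for this candidate topology: {configuration_loss_kw(closed_branches):.1f} kW")
```

A genetic algorithm such as the one described above would evaluate this kind of loss figure for each candidate set of opened switches and keep only radial (tree-shaped) configurations.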

Keywords: Distribution System, Reconfiguration, Loss Reduction, Graph Theory, Optimization, Genetic Algorithm.

PDF Downloads: 3782
85 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required analysis dimension. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
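
The recommendation step can be sketched with a Gaussian naive Bayes classifier as below: an institutional risk profile is a vector of risk factor values, and the classifier proposes the matching endangerment group. The factor values and group labels are invented placeholders, not the survey data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy sketch: train on expert-labelled profiles, then recommend a group for
# a new institutional risk profile. Feature columns are illustrative only,
# e.g. [format obsolescence, loss of software support, community size].
X_train = np.array([
    [0.9, 0.8, 0.2],
    [0.8, 0.7, 0.3],
    [0.2, 0.1, 0.9],
    [0.3, 0.2, 0.8],
])
y_train = np.array(["high_risk", "high_risk", "low_risk", "low_risk"])

model = GaussianNB().fit(X_train, y_train)

new_profile = np.array([[0.7, 0.6, 0.4]])
print(model.predict(new_profile))            # recommended endangerment group
print(model.predict_proba(new_profile))      # supporting probabilities
```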

Keywords: linked open data, information integration, digital libraries, data mining.

PDF Downloads: 730
84 Use of Hair as an Indicator of Environmental Lead Pollution: Changes after Twenty Years of Phasing Out Leaded Gasoline

Authors: M. A. Abou Donia, A. A. K. Abou-Arab, Nevin E. Sharaf, A. K. Enab, Sherif R. Mohamed

Abstract:

Lead (Pb) poisoning is one of the most common and preventable environmental health problems. There are different sources of environmental pollution with lead, such as lead alkyl additives in petrol and manufacturing processes. Pb in the atmosphere can be deposited in urban soils and may then be re-suspended to re-enter the atmosphere. This can increase human exposure to Pb and cause long-term health effects. Thus, monitoring Pb pollution is considered one of the major tasks in controlling pollution. Scalp hair can be utilized for the determination of lead (Pb) concentration. It provides a lasting record of metal intake over weeks or even months, and for most metals their accumulation in hair reflects their accumulation in the whole body. This work was conducted to investigate the concentration of lead in the scalp hair of male residents of Cairo (residential-traffic and residential-industrial areas) and of rural areas, twenty years after the phasing out of leaded gasoline. Results indicated that the mean concentration of lead in the hair of residential-traffic (9.7552 μg/g ±0.71) and residential-industrial (12.3288 μg/g ±1.13) residents was significantly higher than that in rural residents (4.7327 μg/g ±0.67). The mean concentration of lead in hair was highest among residents of industrial areas of Cairo, rather than of traffic areas as it was before the phasing out of leaded gasoline. Twenty years of phasing out leaded gasoline in Cairo has greatly reduced lead pollution among residents of traffic areas, but residents of industrial areas still suffer from lead pollution, and more effort is needed to control its sources.

Keywords: Heavy metals, lead, hair, biological sample, urban pollution, rural pollution.

PDF Downloads: 1763
83 The Performance of Natural Light by Roof Systems in Cultural Buildings

Authors: Ana Paula Esteves, Diego S. Caetano, Louise L. B. Lomardo

Abstract:

This paper presents an assessment of natural lighting performance when appropriate roof lighting systems are applied in cultural buildings such as museums and foundations. The roof, as a surface of contact between the building and the external environment, requires special attention in projects that aim at energy efficiency: it is an important element for capturing natural light in greater quantity, and also the most important location for generating photovoltaic solar energy, even with semitransparent elements that allow the partial passage of light. Transparent elements in roofs, besides providing the upper protection of the building, can also play other roles: meeting the need for natural light for the accomplishment of internal tasks, attending to visual comfort, and bringing benefits to human perception and the interior experience of a building. When these resources are well dimensioned, they also contribute to energy efficiency and consequently to the sustainability of the building. Therefore, when properly designed and executed, a roof lighting system can bring higher quality natural light to the interior of the building, which is related to human health and well-being. Furthermore, it can meet technological, economic and environmental demands, making possible the more efficient use of that primordial resource, the light of the Sun. The article presents the analysis of buildings that use zenithal light systems in search of better lighting performance in museums and foundations: the Solomon R. Guggenheim Museum in the United States, the Iberê Camargo Foundation in Brazil, the Museum of Fine Arts of Castellón in Spain and the Pinacoteca of São Paulo.

Keywords: Natural lighting, roof lighting systems, natural lighting in museums, comfort lighting.

PDF Downloads: 1071
82 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while at the same time trying to keep its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. The installation of a matrix of temperature sensors distributed throughout the structure of each server would provide the data required to obtain an instantaneous temperature profile inside them. However, the number of temperature probes required to obtain the temperature profiles with sufficient accuracy is very high, and they are expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation, simplifying the data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first and second order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results of greater or lesser accuracy, according to their characteristic truncation error.
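
The discretizations named above can be illustrated on the 1D viscous Burgers' equation: a forward difference in time, a backward (upwind) difference for the convective first derivative and a central difference for the diffusive second derivative. The grid, viscosity and initial profile below are assumptions for illustration, not the data-center model itself.

```python
import numpy as np

# Minimal 1D viscous Burgers' solver: u_t + u*u_x = nu*u_xx, discretized with
# forward time, backward (upwind) first derivative, central second derivative.

nx, nt = 101, 500
L, nu = 2.0, 0.07
dx = L / (nx - 1)
dt = 0.0005

x = np.linspace(0.0, L, nx)
u = np.where(x < 1.0, 2.0, 1.0).astype(float)   # step-like initial condition

for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])              # backward
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))   # central
    u[0], u[-1] = un[0], un[-1]                  # fixed boundary values

print("u range after time stepping:", u.min(), u.max())
```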

Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.

PDF Downloads: 516
81 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control

Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni

Abstract:

An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will fulfill significantly more monitoring tasks than today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator, and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording such as electroencephalography (EEG) and eye-tracking devices. The current vigilance level and the attention focus of the controller are measured during the ATCO's active work in front of the human machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. Hence, it encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the VAC's two sub-components.

Keywords: Automation, human factors, air traffic controller, MINIMA, OOTL, Out-Of-The-Loop, EEG, electroencephalography, HMI, human machine interface.

PDF Downloads: 1452
80 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria: total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that the definition of different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested for the Haraz River in the north of Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. In the second, the value-equity curve is derived. The comparative results show that the solutions lie in a similar range of inequity with lower total costs. This is due to the freedom regarding environmental violations attained in the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This reduces the effort needed to achieve the best solutions and may lead to better classifications for scenario definition. It is also concluded that decision makers would do better to focus on the value index and weight its components to find the most sustainable alternatives based on their requirements.
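
The dissolved oxygen simulation mentioned above rests on the classical Streeter-Phelps oxygen-sag relation; a minimal implementation is sketched below, with rate constants, loads and saturation value chosen as illustrative assumptions rather than the Haraz River calibration.

```python
import numpy as np

# Classical Streeter-Phelps oxygen deficit downstream of a BOD load:
#   D(t) = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)

def do_deficit(t, L0, D0, kd=0.35, ka=0.70):
    """DO deficit [mg/L] at travel time t [days] downstream of a load."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
           + D0 * np.exp(-ka * t)

t = np.linspace(0.0, 10.0, 101)
deficit = do_deficit(t, L0=20.0, D0=1.0)       # assumed ultimate BOD, deficit
DO_sat = 9.0                                   # assumed saturation, mg/L
print("minimum DO along the reach:", round(DO_sat - deficit.max(), 2), "mg/L")
```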

Keywords: Waste load allocation (WLA), Value index, Multi objective particle swarm optimization (MOPSO), Haraz River, Equity.

PDF Downloads: 2027
79 Comparative Usability Study of the Websites of Top Universities in Three Continents: A Case Study of the University of Cape Town, Oxford University, and Harvard University

Authors: Stephen Akuma, Racheal Aluma, Abraham Undu

Abstract:

Academic websites play an important role in promoting education for all. They allow universities to provide users with digital academic services that save time and resources. A university website is not only a cost-effective and timely way to communicate with a variety of stakeholders, such as students, faculty, and visitors, but also a vehicle for the university to shape its image. The quality of a website is a major factor that universities consider in cyberspace. Potential students can easily apply to universities whose websites provide useful and clear information. This has made the usability of websites an important area in meeting the needs and expectations of website users. In this paper, a comparative usability study of the University of Cape Town, Oxford University, and Harvard University academic websites (http://www.uct.ac.za/, https://www.ox.ac.uk/, and https://www.harvard.edu/) was carried out. The proactive user feedback technique was adopted for the comparative usability assessment of the aforementioned universities. The method was used by the researchers to collect and log records from the participants in real time. The results show that the average dwell times on the websites of Harvard University, Oxford University, and the University of Cape Town for the three tasks were 51.58, 33.28, and 54.82 seconds, respectively. The System Usability Scale (SUS) scores for Harvard, Oxford, and the University of Cape Town were 49.81, 69.43, and 54.14, respectively. The result of an analysis of variance on the dwell time data shows a significant difference (p = .009) across the three websites. Our findings show that Oxford University has the most usable website in terms of usability factors and other metrics among the websites investigated. Practical implications are highlighted, and recommendations for improved website usability are suggested.
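
The two analyses reported (SUS scoring and a one-way ANOVA on dwell times) follow standard recipes, sketched below; the individual responses and dwell-time samples are invented placeholders, not the study data.

```python
import numpy as np
from scipy.stats import f_oneway

def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses."""
    odd = sum(r - 1 for r in responses[0::2])       # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])      # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))    # -> 75.0

# One-way ANOVA on dwell times across the three sites (placeholder samples).
dwell_harvard = np.array([50.1, 53.2, 51.4, 52.0])  # seconds
dwell_oxford = np.array([32.5, 34.0, 33.1, 33.6])
dwell_uct = np.array([55.0, 54.1, 55.9, 54.3])
f_stat, p_value = f_oneway(dwell_harvard, dwell_oxford, dwell_uct)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```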

Keywords: Usability factors, user feedback, university websites, University of Cape Town, Harvard University, Oxford University.

PDF Downloads: 159
78 Review of the Road Crash Data Availability in Iraq

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To address the road risk issue, the General Statistical Organization of the Iraqi Ministry of Planning started to organise a system for collecting traffic accident data, with details related to causes and severity. These data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent accident rates at an aggregated level, classified according to accident type, road user details, crash severity, type of vehicle, causes and number of casualties. The review is organised according to the types of models used in road safety studies and research, and according to the road safety data required in road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom, as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, for aggregated-level comparison analysis, and for evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require data at a disaggregated level and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial geographic details, such as the locations of accidents, which are essential for ranking roads according to their level of safety and for naming the most dangerous roads in Iraq, which in turn requires a tactical plan to control this issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies that are based on road attribute data only. Therefore, this research recommends the use of one of these methodologies.

Keywords: Data availability, Iraq, road safety.

PDF Downloads: 931
77 Evaluation and Analysis of Lean-Based Manufacturing Equipment and Technology System for Jordanian Industries

Authors: Mohammad D. AL-Tahat, Shahnaz M. Alkhalil

Abstract:

The driving forces of international markets are changing continuously; therefore, companies need to gain a competitive edge in such markets. Improving the company's products, processes and practices is no longer a secondary concern. Lean production is a production management philosophy that consolidates work tasks with minimum waste, resulting in improved productivity. Lean production practices can be mapped onto many production areas. One of these is Manufacturing Equipment and Technology (MET). Many lean production practices can be implemented in MET, namely: specific equipment configurations, total preventive maintenance, visual control, new equipment/technologies, production process reengineering and a shared vision of perfection. The purpose of this paper is to investigate the implementation level of these six practices in Jordanian industries. To achieve this, a questionnaire survey was designed according to a five-point Likert scale. The questionnaire was validated through a pilot study and through expert review. A sample of 350 Jordanian companies was surveyed; the response rate was 83%. The respondents were asked to rate the extent of implementation of each practice. A conceptual relationship model is developed, hypotheses are proposed, and the essential statistical analyses are then performed. An assessment tool that enables management to monitor the progress and effectiveness of lean practice implementation is designed and presented. The results show that the average implementation level of lean practices in MET is 77%, that Jordanian companies are successfully implementing the considered lean production practices, and that the presented model has a Cronbach's alpha value of 0.87, which is good evidence of model consistency and validates the results.
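
The reported reliability figure refers to Cronbach's alpha, which can be computed from the item-level responses as sketched below; the Likert responses shown are invented placeholders, not the survey data.

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(total score))

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 4, 5, 4, 4],
])
print(round(cronbach_alpha(responses), 2))
```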

Keywords: Lean Production, SME applications, Visual Control, New equipment/technologies, Specific equipment configurations, Jordan

PDF Downloads: 2297
76 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of a townscape. The paper investigates parameters related to visual consistency in townscape, explores the relationships between them and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape. It focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space through assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by calculating their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which supports creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.

Keywords: Townscape, Urban Design, Visual Assessment, Visual Consistency.

PDF Downloads: 1635
75 Technical Aspects of Closing the Loop in Depth-of-Anesthesia Control

Authors: Gorazd Karer

Abstract:

When performing a diagnostic procedure or surgery in general anesthesia (GA), a proper introduction and dosing of anesthetic agents is one of the main tasks of the anesthesiologist. That being said, depth of anesthesia (DoA) also seems to be a suitable process for closed-loop control implementation. To implement such a system, one must be able to acquire the relevant signals online and in real-time, as well as stream the calculated control signal to the infusion pump. However, during a procedure, patient monitors and infusion pumps are purposely unable to connect to an external (possibly medically unapproved) device for safety reasons, thus preventing closed-loop control. This paper proposes a conceptual solution to the aforementioned problem. First, it presents some important aspects of contemporary clinical practice. Next, it introduces the closed-loop-control-system structure and the relevant information flow. Focusing on transferring the data from the patient to the computer, it presents a non-invasive image-based system for signal acquisition from a patient monitor for online depth-of-anesthesia assessment. Furthermore, it introduces a User-Datagram-Protocol-based (UDP-based) communication method that can be used for transmitting the calculated anesthetic inflow to the infusion pump. The proposed system is independent of medical-device manufacturer and is implemented in MATLAB-Simulink, which can be conveniently used for DoA control implementation. The proposed scheme has been tested in a simulated GA setting and is ready to be evaluated in an operating theatre. However, the proposed system is only a step towards a proper closed-loop control system for DoA, which could routinely be used in clinical practice.
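
A minimal sketch of the UDP transmission step is shown below; the endpoint address, port and message format are assumptions for illustration, since the paper implements this link in MATLAB-Simulink rather than Python.

```python
import json
import socket
import time

# Stream a computed anesthetic inflow value to the infusion-pump side over
# UDP. Address, port and payload format are illustrative assumptions.

PUMP_ADDRESS = ("192.168.1.50", 5005)      # assumed pump/gateway endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_infusion_rate(rate_ml_per_h):
    """Send one control sample as a small JSON datagram."""
    payload = json.dumps({"t": time.time(), "rate_ml_h": rate_ml_per_h})
    sock.sendto(payload.encode("utf-8"), PUMP_ADDRESS)

# Example: stream newly computed control values once per second.
for rate in (4.8, 5.1, 5.0):
    send_infusion_rate(rate)
    time.sleep(1.0)
```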

Keywords: Closed-loop control, Depth of Anesthesia, DoA, optical signal acquisition, Patient State index, PSi, UDP communication protocol.

PDF Downloads: 523
74 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions

Authors: Dhananjay C. Joshi, Jung-Hsin Lin

Abstract:

Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction and drug targeting. Structural information about protein-protein interaction is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes remains an open problem. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, HEX, etc. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction. We used the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, nlPBE-based free energy calculations were conducted for the cases badly predicted by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.

Keywords: protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations

PDF Downloads: 1805
73 The Design of Multiple Detection Parallel Combined Spread Spectrum Communication System

Authors: Lixin Tian, Wei Xue

Abstract:

Much work in society takes place underground, such as mining, tunnel construction and subways, which are vital to the development of society. Once accidents occur in these places, the interruption of traditional wired communication is not conducive to rescue work. In order to realize positioning, early warning and command functions for underground personnel and to improve rescue efficiency, it is necessary to develop and design an emergency ground communication system. Conventional underground communication is easily subjected to narrowband interference. Spread spectrum communication can be used to address this problem. However, general spread spectrum methods such as direct spread spectrum communication are inefficient, so parallel combined spread spectrum (PCSS) communication is proposed to improve efficiency. PCSS communication not only has the anti-interference ability and the good concealment of traditional spread spectrum systems, but also a relatively high frequency band utilization rate and a strong information transmission capability, so this technology has been widely used in practice. This paper presents a PCSS communication model: the multiple detection parallel combined spread spectrum (MDPCSS) communication system. The principle of the MDPCSS communication system is described, namely that the sequence at the transmitting end is processed in blocks and cyclically shifted to facilitate multiple detection at the receiving end. The block diagrams of the transmitter and receiver of the MDPCSS communication system are introduced. At the same time, the formula for calculating the system bit error rate (BER) is introduced, and the simulation and analysis of the system BER are completed. By comparison with common parallel PCSS communication, we can conclude that it is indeed possible to reduce the BER and improve system performance. Furthermore, the influence of different selected pseudo-code lengths on the system BER is simulated and analysed, and the conclusion is that the larger the pseudo-code length, the smaller the system error rate.
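
For context, the reference bit error rate of coherent BPSK signalling, which underlies most spread spectrum systems, can be computed as below; this is a textbook curve, not the paper's MDPCSS BER formula, and it is shown only to relate Eb/N0 (and hence the processing gain from longer pseudo-codes) to the BER trend discussed in the abstract.

```python
import numpy as np
from scipy.special import erfc

# Theoretical BER of coherent BPSK: Pb = 0.5 * erfc(sqrt(Eb/N0)).
# Longer PN codes give more processing gain against narrowband interference,
# which effectively improves the post-despreading Eb/N0 and lowers the BER.

ebn0_db = np.arange(0, 11, 2)
ebn0 = 10 ** (ebn0_db / 10)
ber = 0.5 * erfc(np.sqrt(ebn0))

for snr_db, p in zip(ebn0_db, ber):
    print(f"Eb/N0 = {snr_db:2d} dB -> BER = {p:.2e}")
```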

Keywords: Cyclic shift, multiple detection, parallel combined spread spectrum, PN code.

PDF Downloads: 552