Search results for: Software Engineering Methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6509

749 Perception of Predictive Confounders for the Prevalence of Hypertension among Iraqi Population: A Pilot Study

Authors: Zahraa Albasry, Hadeel D. Najim, Anmar Al-Taie

Abstract:

Background: Hypertension is considered one of the most important causes of cardiovascular complications and one of the leading causes of worldwide mortality. Identifying the potential risk factors associated with this medical health problem plays an important role in minimizing its incidence and related complications. The objective of this study is to assess and understand the perception of specific predictive confounding factors on the prevalence of hypertension (HT) among a sample of the Iraqi population in Baghdad, Iraq. Materials and Methods: A randomized cross-sectional study was carried out on 100 adult subjects during their visit to the outpatient clinic at a certain sector of Baghdad Province, Iraq. Demographic, clinical and health records alongside specific screening and laboratory tests of the participants were collected and analyzed to detect the potential effect of confounding factors on the prevalence of HT. Results: 63% of the study participants suffered from HT, most of them female patients (P < 0.005). Patients aged 41-50 years suffered from HT significantly more than other age groups (63.5%, P < 0.001). 88.9% of the participants were obese (P < 0.001) and 47.6% had diabetes with HT. Positive family history and sedentary lifestyle were significantly more frequent among all hypertensive groups (P < 0.05). High salt and fatty food intake was significantly more common among patients suffering from isolated systolic hypertension (ISHT) (P < 0.05). A significant positive correlation between packed cell volume (PCV) and systolic blood pressure (SBP) (r = 0.353, P = 0.048) was found among normotensive participants. Among hypertensive patients, a significant positive correlation was found between triglycerides (TG) and both SBP (r = 0.484, P = 0.031) and diastolic blood pressure (DBP) (r = 0.463, P = 0.040), while low density lipoprotein-cholesterol (LDL-c) showed a significant positive correlation with DBP (r = 0.443, P = 0.021). Conclusion: The prevalence of HT among the Iraqi population is of major concern. Further consideration is required to detect the impact of potential risk factors, to minimize blood pressure (BP) elevation and to reduce the risk of other cardiovascular complications later in life.

Keywords: Correlation, hypertension, Iraq, risk factors.
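
The correlations reported above are Pearson coefficients with P-values. As a minimal sketch of how such an r and P pair is computed, assuming hypothetical triglyceride and systolic blood pressure arrays in place of the study's raw data:

```python
# Pearson correlation sketch (illustrative data, not the study's records).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tg = rng.normal(150, 40, size=20)             # triglycerides, mg/dL (assumed)
sbp = 0.1 * tg + rng.normal(120, 8, size=20)  # systolic BP, mmHg (assumed)

r, p = stats.pearsonr(tg, sbp)                # r and two-sided P-value
print(f"r = {r:.3f}, P = {p:.3f}")            # significant if P < 0.05
```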

748 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level), where the variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection, and agent-based simulation is the technique commonly used to build ASMs. The paper proceeds by discussing the components of the ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agent interactions is also addressed: network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized; these incorporate approaches such as Genetic Algorithms, Genetic Programming, Artificial Neural Networks and Reinforcement Learning. The most common statistical properties (the stylized facts) of stock markets that are used for calibration and validation of ASMs are then discussed. We also review the major related previous studies and categorize the approaches they utilized. Finally, research directions and potential research questions are discussed; the research directions of ASMs may focus on the macro level, by analyzing the market dynamics, or on the micro level, by investigating the wealth distributions of the agents.

Keywords: Artificial stock markets, agent based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.
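
To make the agent-based simulation idea concrete, here is a toy sketch of one common ASM skeleton: heterogeneous fundamentalist and chartist agents generate demand, and the price moves with aggregate excess demand. It illustrates the genre only; it is not any specific surveyed model, and every parameter is an assumption.

```python
# Toy agent-based stock market (illustrative skeleton, not a surveyed model).
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps = 100, 500
is_fundamentalist = rng.random(n_agents) < 0.5   # heterogeneous agent types
fundamental_value, price = 100.0, 100.0
lam = 0.01                                       # price-impact coefficient (assumed)
prices = []

for t in range(n_steps):
    past_return = prices[-1] - prices[-2] if len(prices) >= 2 else 0.0
    # Fundamentalists buy when the stock is under-valued; chartists chase trends.
    demand = np.where(is_fundamentalist, fundamental_value - price, past_return)
    noise = rng.normal(0, 0.5, n_agents)         # bounded-rational idiosyncrasy
    price += lam * (demand + noise).sum()        # excess-demand price update
    prices.append(price)

returns = np.diff(np.log(prices))
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print("excess kurtosis (fat tails, a stylized fact):", float(kurt))
```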

747 Long-Term Follow-up of Dynamic Balance, Pain and Functional Performance in Cruciate Retaining and Posterior Stabilized Total Knee Arthroplasty

Authors: Ahmed R. Z. Baghdadi, Mona H. Gamal Eldein

Abstract:

Background: With the perceived pain and poor function experienced following knee arthroplasty, patients usually feel unsatisfied. Yet, a controversy still persists about which operative technique least affects proprioception. Purpose: This study compared the effects of Cruciate Retaining (CR) and Posterior Stabilized (PS) total knee arthroplasty (TKA) on dynamic balance, pain and functional performance following rehabilitation. Methods: Thirty patients with CRTKA (group I), thirty with PSTKA (group II) and fifteen indicated for arthroplasty but not yet operated on (group III) participated in the study. The mean age was 54.53±3.44, 55.13±3.48 and 55.33±2.32 years and BMI 35.7±3.03, 35.7±1.99 and 35.73±1.03 kg/m2 for groups I, II and III, respectively. The Berg Balance Scale (BBS), the WOMAC pain subscale and the Timed Up-and-Go (TUG) and Stair-Climbing (SC) tests were used for assessment. Assessments were conducted four weeks pre- and post-operatively and three, six and twelve months post-operatively, with the control group being assessed at the same time intervals. The post-operative rehabilitation involved hospitalization (1st week), home-based (2nd-4th weeks) and outpatient clinic (5th-12th weeks) programs, with follow-up of all groups for twelve months. Results: The mixed-design MANOVA revealed that group I had significantly lower pain scores and SC time compared with group II at three, six and twelve months post-operatively. Moreover, the BBS scores increased significantly, and the pain scores and TUG and SC times decreased significantly, six months post-operatively compared with four weeks pre- and post-operatively and three months post-operatively in groups I and II, with the opposite being true four weeks post-operatively. However, there were no significant differences in BBS scores, pain scores or TUG and SC times between six and twelve months post-operatively in groups I and II. Interpretation/Conclusion: CRTKA is preferable to PSTKA, possibly due to the preserved human proprioceptors in the un-excised PCL.

Keywords: Dynamic Balance, Functional Performance, Knee Arthroplasty, Long-Term.

746 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase in credit card usage, the volume of credit card misuse has also increased significantly, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they know or have), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.

Keywords: Credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey.
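
The rare-event imbalance noted above is the central practical difficulty. Below is a minimal sketch of one standard mitigation, class weighting, assuming synthetic stand-in features and a roughly 1% fraud rate; none of this reflects the surveyed systems' actual data or methods:

```python
# Class-imbalance sketch: weighting the rare fraud class (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 5))            # stand-in transactional features
fraud = rng.random(n) < 0.01           # ~1% fraud: rare events
X[fraud] += 1.5                        # make fraud weakly separable
y = fraud.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(class_weight="balanced")  # up-weight the rare class
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```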

745 Systems Engineering and Project Management Process Modeling in the Aeronautics Context: Case Study of SMEs

Authors: S. Lemoussu, J. C. Chaudemar, R. A. Vingerhoeds

Abstract:

The aeronautics sector is currently experiencing unprecedented growth, largely driven by innovative projects. In several cases, such innovative developments are being carried out by Small and Medium-sized Enterprises (SMEs). For instance, in Europe, a handful of SMEs are leading projects like airships, large civil drones, or flying cars. These SMEs all have limited resources, must make strategic decisions and take considerable financial risks, and at the same time must take into account the constraints of safety, cost, time and performance like any commercial organization in this industry. Moreover, today, no international regulations fully cover the development and certification of this kind of project. The absence of such a precise and sufficiently detailed regulatory framework requires very close contact with regulatory instances, but SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses additional challenges for those SMEs that have system integration responsibilities and that must provide all the necessary means of compliance to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The final objective of our research is thus to provide a methodological framework supporting SMEs in their development, taking into account recent innovations and the institutional rules of the sector. We aim to contribute to this problem by developing a specific Model-Based Systems Engineering (MBSE) approach: airspace regulations, aeronautics standards and international norms on systems engineering are taken on board and formalized in a set of models. This paper presents the on-going research project combining systems engineering and project management process modeling, taking into account the metamodeling problem.

Keywords: Aeronautics, certification, process modeling, project management, SME, systems engineering.

744 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

The problems arising from unbalanced data sets generally appear in real-world applications. Due to unequal class distribution, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is a method that was proposed for classifying unbalanced classes with good performance. The purpose of this study was to compare the misclassification error rates of parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification of class-imbalanced data of diabetes risk groups. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped for 50 and 100 samples, with 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were performed over 50 and 100 bootstrap samples and applied to the original data. In searching for the optimal classification rule, the prior probabilities were set to the equal proportions (0.33:0.33:0.33) and to the unequal proportions (0.90:0.05:0.05), (0.80:0.10:0.10) and (0.70:0.15:0.15). The results from 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k=3 or k=4 and prior probabilities of non-risk:risk:diabetic defined as 0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest misclassification error rate. The k-nearest neighbors approach is therefore suggested for classifying three-class-imbalanced data of diabetes risk groups.

Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.
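
The comparison described above can be sketched with scikit-learn on synthetic data in the study's 90:5:5 group proportions; the feature values, class separations and apparent (resubstitution) error rate below are illustrative stand-ins for the paper's bootstrap estimates:

```python
# LDA, QDA and k-NN on a three-class imbalanced problem (illustrative data).
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
sizes, means = [540, 30, 29], [0.0, 1.5, 3.0]    # 599 cases, ~90:5:5 split
X = np.concatenate([rng.normal(m, 1.0, size=(s, 4)) for s, m in zip(sizes, means)])
y = np.repeat([0, 1, 2], sizes)                  # 0 non-risk, 1 risk, 2 diabetic

priors = [0.90, 0.05, 0.05]                      # one prior choice from the study
models = [("LDA", LinearDiscriminantAnalysis(priors=priors)),
          ("QDA", QuadraticDiscriminantAnalysis(priors=priors)),
          ("3-NN", KNeighborsClassifier(n_neighbors=3))]
for name, clf in models:
    err = 1.0 - clf.fit(X, y).score(X, y)        # apparent misclassification rate
    print(f"{name}: error rate = {err:.3f}")
```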

743 Experimental Study on Using the Aluminum Sacrificial Anode as a Cathodic Protection for Marine Structures

Authors: A. Radwan, A. Elbatran, A. Mehanna, M. Shehadeh

Abstract:

Corrosion is a natural chemical phenomenon that affects many engineering structures; hence, it is one of the important topics of engineering research. Ships and offshore structures are among the most exposed to corrosion due to the presence of the corrosive media of air and seawater. Consequently, investigation of the corrosion behavior and properties of ship and offshore hulls is an important topic in marine engineering research. Using a sacrificial anode is the most popular solution for protecting marine structures from corrosion. Hence, this research investigates the extent of corrosion of a composite ship model in relation to the relative velocity of the water, along with the sacrificial aluminum anode consumption and its degree of protection in seawater. In this study, the consumption rate of the sacrificial aluminum anode with respect to relative velocity at different Reynolds numbers was studied experimentally, and it was found that the degree of cathodic protection, represented by the cathode potential at a given distance from the aluminum anode, decreased slightly as the relative velocity increased.

Keywords: Corrosion, Reynolds numbers, sacrificial anode, velocity.

742 Effect of Biostimulants to Control the Phelipanche ramosa L. Pomel in Processing Tomato Crop

Authors: G. Disciglio, G. Gatta, F. Lops, A. Libutti, A. Tarantino, E. Tarantino

Abstract:

The experimental trial was carried out in an open field in the Foggia district (Apulia Region, Southern Italy) during the spring-summer season of 2014, in order to evaluate the effect of four biostimulant products (Radicon®, Viormon plus®, Lysodin® and Siapton® 10L), compared with a control (no biostimulant), on the infestation of a processing tomato crop (cv Dres) by the chlorophyll-lacking root parasite Phelipanche ramosa. Biostimulants comprise different categories of products (microbial inoculants, humic and fulvic acids, hydrolyzed proteins and amino acids, seaweed extracts) which play various roles in plant growth, including the improvement of crop resistance and of the quali-quantitative characteristics of yield. The experimental trial was arranged according to a complete randomized block design with five treatments, each one replicated three times. The processing tomato seedlings were transplanted on 5 May 2014. Throughout the crop cycle, P. ramosa infestation was assessed as the number of emerged shoots (branched plants) counted in each plot at 66, 78 and 92 days after transplanting. The tomato fruits were harvested at the full stage of maturity on 8 August 2014. From each plot, the marketable yield was measured and the quali-quantitative yield parameters (mean weight, dry matter content, colour coordinates, colour index and soluble solids content of the fruits) were determined. The whole dataset was tested according to the basic assumptions for the analysis of variance (ANOVA), and the differences between the means were determined using Tukey's test at the 5% probability level. The results of the study showed that none of the applied biostimulants provided complete control of Phelipanche, although some positive effects were obtained from their application. In this respect, Radicon® appeared to be the most effective in reducing the infestation of this root parasite in the tomato crop. This treatment also gave the highest tomato yield.

Keywords: Biostimulants, control methods, Phelipanche ramosa, processing tomato crop.

741 Preparation and Characterization of Silk/Diopside Composite Nanofibers via Electrospinning for Tissue Engineering Application

Authors: Abbas Teimouri, Leila Ghorbanian, Iren Dabirian

Abstract:

This work focused on the preparation and characterization of silk fibroin (SF)/nanodiopside ceramic composite nanofibers via the electrospinning process. The nanofibrous scaffolds were characterized by the combined techniques of scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The results confirmed that the fabricated SF/diopside scaffolds improved cell attachment and proliferation, indicating that the electrospun SF/nanodiopside nanofibrous scaffolds could be considered ideal candidates for tissue engineering.

Keywords: Electrospinning, nanofibers, silk fibroin, diopside, composite scaffold.

740 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design

Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan

Abstract:

Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state under adverse economic circumstances. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered as the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). The reliability and validity of the questionnaire are examined based on Cronbach's alpha and the t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. The results of sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture together account for about 50 percent of the total weight.

Keywords: Banking system, data envelopment analysis, DEA, integrated resilience engineering, IRE, performance evaluation, perturbation analysis.
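
For readers unfamiliar with DEA, the building block used here can be sketched as an input-oriented CCR efficiency score per decision-making unit, with a single dummy (unit) input and nine outputs. The data below are random placeholders for the questionnaire scores, and the CCR formulation is an assumption for illustration, since the paper selects among several DEA models:

```python
# Input-oriented CCR DEA sketch: one dummy input, nine IRE-factor outputs.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_dmu, n_out = 12, 9
Y = rng.uniform(1, 5, size=(n_dmu, n_out))  # placeholder questionnaire scores
x = np.ones(n_dmu)                          # dummy unit input

def ccr_efficiency(o):
    # Variables: [theta, lambda_1..n]; minimize theta for DMU o.
    c = np.zeros(1 + n_dmu); c[0] = 1.0
    A_ub = [np.concatenate(([-x[o]], x))]            # sum(l_j x_j) <= theta x_o
    b_ub = [0.0]
    for r in range(n_out):                           # sum(l_j y_rj) >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n_dmu)
    return res.fun

for o in range(n_dmu):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```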

739 Structural Parsing of Natural Language Text in Tamil Using Phrase Structure Hybrid Language Model

Authors: Selvam M., Natarajan A. M., Thangarajan R.

Abstract:

Parsing is important in linguistics and natural language processing for understanding the syntax and semantics of a natural language grammar. Parsing natural language text is challenging because of problems like ambiguity and inefficiency, and the interpretation of natural language text depends on context-based techniques. A probabilistic component is essential to resolve ambiguity in both syntax and semantics, thereby increasing the accuracy and efficiency of the parser. The Tamil language has some inherent features which are more challenging. To obtain solutions, a lexicalized and statistical approach is to be applied in parsing, with the aid of a language model. Statistical models mainly focus on the semantics of the language and are suitable for large-vocabulary tasks, whereas structural methods focus on syntax and model small-vocabulary tasks. A statistical language model based on trigrams has been built for the Tamil language with a medium vocabulary of 5000 words. Though statistical parsing gives better performance through trigram probabilities and large vocabulary size, it has some disadvantages, such as a focus on semantics rather than syntax and a lack of support for free word order and long-term relationships. To overcome these disadvantages, a structural component is to be incorporated in statistical language models, which leads to the implementation of hybrid language models. This paper attempts to build a phrase-structure hybrid language model which resolves the above-mentioned disadvantages. In the development of the hybrid language model, a new part-of-speech tag set for the Tamil language has been developed, with more than 500 tags giving wider coverage. A phrase-structure treebank has been developed with 326 Tamil sentences covering more than 5000 words. The hybrid language model has been trained on the phrase-structure treebank using the immediate-head parsing technique. A lexicalized and statistical parser which employs this hybrid language model and the immediate-head parsing technique gives better results than pure grammar-based and trigram-based models.

Keywords: Hybrid Language Model, Immediate Head Parsing, Lexicalized and Statistical Parsing, Natural Language Processing, Parts of Speech, Probabilistic Context Free Grammar, Tamil Language, Treebank.
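
The statistical component can be illustrated with a minimal count-based trigram model; the toy English tokens and add-one smoothing below are assumptions for illustration, not the authors' Tamil training setup:

```python
# Minimal trigram language model with add-one (Laplace) smoothing.
from collections import Counter

corpus = "the parser reads the sentence and the parser builds the tree".split()

tri, bi = Counter(), Counter()
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    tri[(a, b, c)] += 1
    bi[(a, b)] += 1

V = len(set(corpus))                      # vocabulary size

def p_trigram(a, b, c):
    """P(c | a, b) with Laplace smoothing."""
    return (tri[(a, b, c)] + 1) / (bi[(a, b)] + V)

print(p_trigram("the", "parser", "reads"))   # seen trigram
print(p_trigram("the", "parser", "walks"))   # unseen, still nonzero
```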

738 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines

Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri

Abstract:

This paper presents a nonlinear differential model of a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-dof, lumped-parameter structural dynamics coupled with a quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and successively validate, such a model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as functions of the chordwise and normal components of relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε̇. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by radius R = 61.5 m and mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88° is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effects of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it is composed of a control block Crc(s), usually added to an existing feedback control system, which contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). While the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures, since, using the phase shifting, the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shifting of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a PD controller C(s), in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller. In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s). Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.

Keywords: Wind turbines, aeroelasticity, repetitive control, periodic systems.
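
The repetitive-control idea can be sketched in discrete time: a one-period delay line in positive feedback, a low-pass filter standing in for q(s), a phase shift of γ samples mirroring the e^(−(T−γk)s) term, and a PD term for blade damping. All gains, the period length and the error signal below are assumed for illustration; this is not the authors' controller.

```python
# Discrete-time spatial repetitive + PD controller sketch (assumed gains).
import numpy as np

N = 200                 # samples per rotor period (assumed)
gamma = 5               # phase-shift samples, cf. exp(-(T - gamma_k)s) above
kr, kp, kd = 0.8, 2.0, 0.1
alpha = 0.2             # first-order low-pass coefficient, stands in for q(s)

buf = np.zeros(N)       # delay line holding one period of repetitive action
lp, prev_e = 0.0, 0.0

def repetitive_pd_step(k, e):
    """One controller update for tracking error e at sample k."""
    global lp, prev_e
    idx = (k - N + gamma) % N           # recall the action delayed by T - gamma
    lp += alpha * (buf[idx] - lp)       # low-pass the recalled action (stability)
    u_rc = lp + kr * e                  # positive-feedback learning update
    buf[k % N] = u_rc                   # refresh the delay line
    u_pd = kp * e + kd * (e - prev_e)   # PD damping of oscillations
    prev_e = e
    return u_rc + u_pd

# Example: a periodic (once-per-revolution) error is progressively rejected.
for k in range(3 * N):
    u = repetitive_pd_step(k, np.sin(2 * np.pi * k / N))
```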

737 Adding Security Blocks to the DevOps Lifecycle

Authors: Andrew John Zeller, Francis Pouatcha

Abstract:

Working according to the DevOps principle has gained popularity over the past decade. While its extension DevSecOps has started to include elements of cybersecurity, most real-life projects do not focus on risk and security until the later phases of a project, as teams are often more familiar with engineering and infrastructure services. To help bridge the gap between security and engineering, this paper takes six building blocks of cybersecurity and applies them to the DevOps approach. After giving a brief overview of the stages in the DevOps lifecycle, the main part discusses to what extent the six cybersecurity blocks can be utilized in the various stages of the lifecycle. The paper concludes with an outlook on how to stay up to date in the dynamic world of cybersecurity.

Keywords: Information security, data security, cybersecurity, DevOps, IT management.

736 Comprehensive Characteristics of the Municipal Solid Waste Generated in the Faculty of Engineering, UKM

Authors: A. Salsabili, M. Aghajani Mir, S. Saheri, Noor Ezlin Ahmad Basri

Abstract:

The main aims of this research are to study solid waste generation in the Faculty of Engineering and Built Environment at UKM and, at the same time, to determine the composition and selected characteristics of the waste, namely moisture content, density, pH and C/N ratio. For this purpose, multiple campaigns were conducted from 24 February to 2 March 2009 to collect the waste produced in all hostels, faculties, offices and other facilities, and to measure and investigate its physical and chemical characteristics in order to highlight the necessary management policies. The research locations were the Faculty of Engineering and the canteen nearby. Based on the results obtained, the most suitable solid waste management solution is proposed for UKM. The average solid waste generation rate at UKM is 203.38 kg/day. The solid waste generated is composed of glass, plastic, metal, aluminum, organic and inorganic waste, and other waste. From the laboratory results, the average moisture content, density, pH and C/N ratio of the solid waste generated are 49.74%, 165.1 kg/m3, 5.3, and 7:1, respectively. Since food waste (organic waste) is the dominant component, at around 62% of the total waste generated, the most suitable solid waste management solution is composting.

Keywords: Solid Waste, Waste Management, Characterization and Composition

735 Tools and Techniques in Risk Assessment in Public Risk Management Organisations

Authors: Atousa Khodadadyan, Gabe Mythen, Hirbod Assa, Beverley Bishop

Abstract:

Risk assessment, and the knowledge provided through this process, is a crucial part of any decision-making process in the management of risks and uncertainties. Failure in the assessment of risks can cause inadequacy in the entire process of risk management, which in turn can lead to failure in achieving organisational objectives, as well as having significant damaging consequences for the populations affected by the potential risks being assessed. The choice of tools and techniques in risk assessment can influence the degree and scope of decision-making and, subsequently, the risk response strategy. There are various qualitative and quantitative tools and techniques that are deployed within the broad process of risk assessment. The sheer diversity of tools and techniques available to practitioners makes it difficult for organisations to consistently employ the most appropriate methods. This adaptation of tools and techniques is rendered more difficult in public risk regulation organisations, due to the sensitive and complex nature of their activities. This is particularly the case in areas relating to the environment, food, and human health and safety, where organisational goals are tied up with societal, political and individual goals at national and international levels. Hence, this study set out to recognise, analyse and evaluate the different decision-support tools and techniques employed in assessing risks in public risk management organisations. This research is part of a mixed-methods study which aimed to examine the perception of risk assessment and the extent to which organisations practise risk assessment tools and techniques. The study adopted a semi-structured questionnaire with qualitative and quantitative data analysis, covering a range of public risk regulation organisations from the UK, Germany, France, Belgium and the Netherlands. The results indicated that public risk management organisations use a diverse set of tools and techniques in the risk assessment process. Primary hazard analysis, brainstorming, and hazard analysis and critical control points were described as the most practised risk identification techniques. Within qualitative and quantitative risk analysis, the participants named expert judgement, risk probability and impact assessment, sensitivity analysis, and data gathering and representation as the most practised techniques.

Keywords: Decision-making, public risk management organisations, risk assessment, tools and techniques.

734 The Characteristics of Static Plantar Loading in the First-Division College Sprint Athletes

Authors: Tong-Hsien Chow

Abstract:

Background: Plantar pressure measurement is an effective method for assessing plantar loading and can be applied to evaluating the movement performance of the foot. The purpose of this study is to explore sprint athletes' plantar loading characteristics and pain profiles in static standing. Methods: Experiments were undertaken on 80 first-division college sprint athletes and 85 healthy non-sprinters. 'JC Mat', an optical plantar pressure measurement system, was applied to examine the differences between the two groups in the arch index (AI), the three regional and six distinct sub-regional plantar pressure distributions (PPD), and footprint characteristics. Pain assessment and self-reported health status of the sprint athletes were examined to evaluate their common pain areas. Results: In the control group, the males' AI fell into the normal range, whereas the females' AI was classified as the high-arch type. The AI values of the sprint group were found to be significantly lower than those of the control group. PPD were higher at the medial metatarsal bone of both feet and the lateral heel of the right foot in the sprint group, in the males in particular, whereas they were lower at the medial and lateral longitudinal arches of both feet. Footprint characteristics tended to support the results of the AI and PPD, reflecting the corresponding pressure profiles. For the sprint athletes, the lateral knee joint and the biceps femoris were the most common sites of musculoskeletal pain. Conclusions: The sprint athletes' AIs were generally classified as high arches, and their PPD were categorized between the features of runners and high-arched runners. These findings correspond to the profiles of patellofemoral pain syndrome (PFPS)-related plantar pressure, and the pain profiles appeared to correspond to the symptoms of high-arched runners and PFPS. The findings thus point to a possible link between high arches and PFPS; the correlation between high-arched runners and PFPS development is worth further study.

Keywords: Sprint athletes, arch index, plantar pressure distributions, high arches, patellofemoral pain syndrome.

733 Education and Assessment of Civil Employees in e-Government: The Case of a Moodle Based Platform

Authors: Stamatios A. Theocharis, George A. Tsihrintzis

Abstract:

One of the most important factors for the success of e-government is training and preparing the workforce of the public sector. As change and innovation in the public sector progress at a very slow pace, more slowly than in the private sector, issues related to human resources require special care, because it is the workforce that will eventually seize the opportunities of the technological solutions used in e-Government. Thus, the central administration should provide employees with continuous and focused training, not only on new technologies but also on a wide range of subjects, and should also improve interdepartmental interaction.

To achieve all this, new methods and training tools need to be implemented, in addition to assessment of the employees. In this spirit, we propose the development of an educational platform with user personalization features, using Moodle as the basic tool. Incorporating a personalization mechanism is very important, since different employees have different backgrounds, education levels, computer skills, or capabilities for further development. Key features of the proposed platform include, besides typical e-learning tools, communities organized for exchanging experiences and knowledge, groups of users based on certain criteria, automatic evaluation of users, and support for self-education and self-assessment. In its fully developed form, this platform can be part of a more comprehensive knowledge management system for the public sector.

Keywords: e-Government, civil employees education, education technologies.

732 Reflective Thinking and Experiential Learning: A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities and Greater Integration of Student Profiles

Authors: P. Bogas

Abstract:

As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real brand challenges, business simulation and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences resulted from the primary experiences, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, the students' responses can be described as follows: students who reinforce the initial deep approach, students who maintain the initial deep-approach level, and students who change from an emphasis on the deep approach to one closer to the superficial. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing influences the superficial one, and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to a possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in the deep/superficial approach scores and in the experiential level. In the case of the real challenges, the categories of 'attribution of meaning to what is studied' and the possibility of 'contact with an aspirational context' for the students' future profession stand out. In this category, the dimensions of autonomy that will be required of the students were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself; on the other hand, the additional effort that this practice required of some of the students.

Keywords: Experiential learning, higher education, marketing, mixed methods, reflective thinking.

731 Expert Witness Testimony in the Battered Woman Syndrome

Authors: Ana Pauna

Abstract:

Expert witness testimony (EWT) is a kind of information given by an expert specialized in the field (here, battered woman syndrome, BWS) to the jury in order to help the court better understand the case. EWT does not always work in favor of battered women. Two main decision-making models are discussed in the paper: the mathematical model and the explanation model. In the first model, the jurors calculate "the importance and strength of each piece of evidence", whereas in the second model they try to integrate the EWT with the evidence and create a coherent story that would describe the crime. The jury often misunderstands and misjudges battered women for their action (or, in this case, inaction). They assume that these women are masochists who accept being mistreated, reasoning that if a man abuses a woman constantly, she could and should divorce him or simply leave at any time. Research in the domain has found that expert witness testimony indeed has a powerful influence on jurors' decisions; thus its quality needs to be further explored. One of the important factors that needs further study is a bias called the dispositionist worldview (a belief that what happens to people is of their own doing). This kind of attributional bias represents a tendency to think that a person's behavior is due to his or her disposition, even when the behavior is clearly attributable to the situation. Hypothesis: The hypothesis of this paper is that if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault. The juror would therefore commit the fundamental attribution error and believe that the victim's disposition, and not the situation she was in, caused the rape. Methods: The subjects in the study were 500 randomly sampled undergraduate students from McGill, Concordia, Université de Montréal and UQAM. Dispositionist worldview was scored on the Dispositionist Worldview Questionnaire. After reading the rape scenarios, each student was asked to play the role of a juror and answer a questionnaire consisting of 7 questions about the responsibility, causality and fault of the victim. Results: The results confirm the hypothesis that if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault. In doing so, the juror commits the fundamental attribution error, believing that the victim's disposition, and not the constraints or opportunities of the situation, caused the rape.

Keywords: Bias, expert witness testimony, attribution error, jury, rape myth.

730 Self-Tuning Power System Stabilizer Based on Recursive Least Square Identification and Linear Quadratic Regulator

Authors: J. Ritonja

Abstract:

Available commercial applications of power system stabilizers assure optimal damping of a synchronous generator's oscillations only in a small part of its operating range, as the parameters of the power system stabilizer are usually tuned for a selected operating point. Extensive variations in the synchronous generator's operation result in changed dynamic characteristics. This is the reason why a power system stabilizer tuned for the nominal operating point does not deliver the preferred damping over the whole operating area. The small-signal stability and the transient stability of synchronous generators have represented an attractive problem for testing different concepts of modern control theory. Of all the methods, adaptive control has proved to be the most suitable for the design of power system stabilizers, and it has been used here in order to assure optimal damping through the entire operating range of the synchronous generator. The use of adaptive control is possible because the loading variations, and consequently the variations of the synchronous generator's dynamic characteristics, are in most cases essentially slower than the adaptation mechanism. The paper shows the development and application of a self-tuning power system stabilizer based on the recursive least squares identification method and a linear quadratic regulator. The identification method is used to calculate the parameters of the Heffron-Phillips model of the synchronous generator. On the basis of the calculated parameters of the synchronous generator's mathematical model, the synthesis of the linear quadratic regulator is carried out. The identification and the synthesis are implemented on-line; in this way, the self-tuning power system stabilizer adapts to the different operating conditions. The purpose of this paper is to contribute to the development of more effective power system stabilizers, which would replace the currently used linear stabilizers. The presented self-tuning power system stabilizer makes the tuning of the controller parameters easier and assures damping improvement over the complete operating range. The results of simulations and experiments show an essential improvement in the synchronous generator's damping and in power system stability.

Keywords: Adaptive control, linear quadratic regulator, power system stabilizer, recursive least square identification.
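
The on-line identification step at the core of the scheme can be sketched with a standard recursive least squares update; the regressor contents, forgetting factor and parameter count below are illustrative assumptions, not the paper's exact Heffron-Phillips parameterization:

```python
# Recursive least squares (RLS) with forgetting factor (illustrative setup).
import numpy as np

n = 4                         # number of parameters to identify (assumed)
theta = np.zeros(n)           # parameter estimates
P = np.eye(n) * 1e3           # covariance: large initial uncertainty
lam = 0.98                    # forgetting factor to track slow drift

def rls_update(phi, y):
    """One RLS step: phi is the regressor vector, y the measured output."""
    global theta, P
    K = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta = theta + K * (y - phi @ theta)   # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam    # covariance update
    return theta

# Example: recover a known parameter vector from noisy measurements.
rng = np.random.default_rng(0)
true_theta = np.array([1.2, -0.5, 0.8, 0.3])
for _ in range(500):
    phi = rng.normal(size=n)
    y = phi @ true_theta + rng.normal(scale=0.01)
    rls_update(phi, y)
print(np.round(theta, 3))     # converges close to true_theta
```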

729 Improving Knowledge Management Practices in the South African Healthcare System

Authors: Kgabo H. Badimo, Sheryl Buckley

Abstract:

In this, the knowledge era, knowledge is increasingly recognised by public sector organisations as a strategic resource, in view of public sector reform initiatives. People and knowledge play a vital role in attaining improved organisational performance and high service quality. Many government departments in the public sector have started to realise the importance of knowledge management in streamlining their operations and processes. This study focused on knowledge management in public healthcare service organisations, where the concept of service provider competitiveness pales into insignificance, considering the huge challenges emanating from healthcare and public sector reforms. Many government departments are faced with the challenges of improving organisational performance and service delivery, improving accountability, making informed decisions, capturing the knowledge of an aging workforce, and enhancing partnerships with stakeholders. The purpose of this paper is to examine the knowledge management practices of the Gauteng Department of Health in South Africa, in order to understand how knowledge management practices influence improvements in organisational performance and healthcare service delivery. This issue is explored through a review of the literature on dominant views of knowledge management and healthcare service delivery, as well as the results of interviews with, and questionnaire responses from, the general staff of the Gauteng Department of Health. Web-based questionnaires, face-to-face interviews and organisational documents were used to collect data, and the data were analysed using both quantitative and qualitative methods. The central question investigated was: to what extent can the conditions required for successful knowledge management be observed, in order to improve organisational performance and healthcare service delivery in the Gauteng Department of Health? The findings showed that the elements of knowledge management capabilities investigated in this study, namely knowledge creation, knowledge sharing and knowledge application, have a positive, significant relationship with all measures of organisational performance and healthcare service delivery. These findings indicate that, by employing knowledge management principles, the Gauteng Department of Health could improve its ability to achieve its operational goals and objectives and solve organisational and healthcare challenges, thereby improving organisational performance and enhancing healthcare service delivery in Gauteng.

Keywords: Knowledge Management, Healthcare Service Delivery, Public Healthcare, Public Sector.

728 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line

Authors: K. Jahani, J. Razavi

Abstract:

Nowadays, predictive maintenance is crucial for decreasing the number of downtimes in industries such as metal mining and the petroleum and chemical industries. Efficient predictive maintenance requires knowing the performance of critical production line equipment, such as pumps and hydrocyclones, under variable operating parameters; selecting the best indicators of this equipment's health; choosing the best locations for instrumentation; and measuring these indicators. In this paper, computer aided engineering (CAE) tools are implemented to study some important elements of a copper process line, namely slurry pumps and a hydrocyclone, in order to predict the performance of these components under different working conditions. The modeling and simulations can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and location on the cyclone wall or the pump case and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.

Keywords: Computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone.

727 Enhancing Cooperation Between LEAs and Citizens: The INSPEC2T Approach

Authors: George Leventakis, George Kokkinis, Nikos Moustakidis, George Papalexandratos, Ioanna Vasiliadou

Abstract:

Enhancing the feeling of public safety and preventing crime are tasks customarily assigned to the police. Police departments have, however, recognized that traditional policing methods are becoming obsolete. The Community Policing (CP) philosophy, when applied appropriately, leads to seamless collaboration between various stakeholders, like the police, NGOs and the general public, and provides the opportunity to identify risks, assist in solving problems of crime, disorder and safety, and, crucially, contribute to improving the quality of life for everyone in a community. Social media, on the other hand, due to its high level of infiltration into modern life, constitutes a powerful mechanism which offers additional, direct communication channels to reach individuals or communities. These channels can be utilized to improve the citizens' perception of the police and to capture individual and community needs, when the feedback is taken into account by Law Enforcement Agencies (LEAs) in a structured and coordinated manner. This paper presents research conducted under INSPEC2T (Inspiring CitizeNS Participation for Enhanced Community PoliCing AcTions), a project funded by the European Commission's research agenda to bridge the gap between CP as a philosophy and CP as an organizational strategy, capitalizing on the use of social media. The project aims to increase transparency, trust, police accountability, and the role of civil society. It aspires to build strong, trusting relationships between LEAs and the public, supporting two-way, contemporary communication while at the same time respecting the anonymity of all affected parties. The results presented herein summarize the outcomes of four online multilingual surveys, focus group interviews, desktop research and interviews with experts in the field of CP practices. The above research activities were conducted in various EU countries, aiming to capture the requirements of end users from diverse backgrounds (social, cultural, legal and ethical) and to determine public expectations regarding CP, community safety and crime prevention.

Keywords: Community partnerships, next generation community policing, public safety, social media.

726 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was therefore developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. The consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation; this approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, the conflation results are equivalent to those of the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as those in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.

Keywords: Community resilience, conflation, flood risk, nuisance flooding.
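
Conflation as defined above, the normalized product of the input densities, can be checked numerically; the two exponential recovery-time distributions and their rates below are assumed for illustration, and for exponentials the conflation has the closed form Exponential(a + b):

```python
# Numerical conflation of two exponential recovery-time densities.
import numpy as np

t = np.linspace(0, 30, 3001)          # recovery time, days (illustrative)
a, b = 0.2, 0.5                       # severe- and nuisance-event rates (assumed)
f = a * np.exp(-a * t)                # recovery pdf, severe flooding
g = b * np.exp(-b * t)                # recovery pdf, nuisance flooding

prod = f * g
dt = t[1] - t[0]
conflated = prod / (prod.sum() * dt)  # &(f, g): normalized product of densities

closed = (a + b) * np.exp(-(a + b) * t)   # known closed form for exponentials
print("max abs difference:", np.abs(conflated - closed).max())
```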

725 Evaluating the Small-Strain Mechanical Properties of Cement-Treated Clayey Soils Based on the Confining Pressure

Authors: M. A. Putera, N. Yasufuku, A. Alowaisy, R. Ishikura, J. G. Hussary, A. Rifa’i

Abstract:

Indonesia's government has planned a high-speed railway project connecting Jakarta and Surabaya, about 700 km apart. Along this route, construction has been planned above a lowland soil region comprising cohesive soil with high water content and a high compressibility index, which leads to settlement problems. Among the variety of railway track structures, the ballastless track was adopted to reduce settlement effectively; it provides a lightweight structure and minimizes the workspace. Conversely, deploying this thin-layer structure above the lowland area is accompanied by several problems, such as a lack of bearing capacity and deflection behavior during traffic loading. It is therefore necessary to combine it with ground improvement to assure acceptable settlement behavior on the clayey soil. Reflecting the requirements of strength increment and working period, methods such as cement-treated soil were adopted as the substructure of the railway track. The mechanical properties in the field are commonly evaluated using the plate load test and the cone penetration test. However, observing the increment of mechanical properties involves uncertainty, especially when evaluating cement-treated soil in the substructure. The current quality control of cement-treated soils is established by laboratory tests; moreover, small-strain measurement devices in the laboratory can produce more reliable results that are close to field measurements. The aims of this research are to show the intercorrelation of confining pressure with the initial Young's modulus (E0), Poisson's ratio (υ0) and shear modulus (G0) within small-strain ranges, and to investigate the discrepancies between those parameters. The experimental results confirmed the intercorrelation between cement content and confining pressure through a power function. In addition, higher cement ratios showed discrepancies, in contrast with low mixing ratios.

Keywords: Cement content, confining pressure, high-speed railway, small strain ranges.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 402
724 Machine Learning Techniques for Short-Term Rain Forecasting System in the Northeastern Part of Thailand

Authors: Lily Ingsrisawang, Supawadee Ingsriswang, Saisuda Somchit, Prasert Aungsuratana, Warawut Khantiyanan

Abstract:

This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision Tree, Artificial Neural Network (ANN), and Support Vector Machine (SVM) techniques were applied to develop classification and prediction models for rainfall forecasts. The goals of this presentation are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed to produce accurate rainfall estimates that support decisions to launch cloud-seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts in this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status as either rain or no-rain; the overall accuracy of the classification tree reaches 94.41% with five-fold cross validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), with an overall accuracy of 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. The ANN yields its lowest RMSE, 0.171, for same-day rainfall estimates, compared to next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the same three classes, achieving 68.15% and 69.10% overall accuracy for same-day prediction, respectively. The obtained results illustrate the comparative predictive power of different methods for rainfall estimation.
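
As a rough illustration of the first part of the workflow, the sketch below runs a decision tree with five-fold cross validation on synthetic data. scikit-learn's DecisionTreeClassifier is a CART-based stand-in for C4.5 (which scikit-learn does not provide), and the dataset shape merely mirrors the 179 records and 57 features described above; the labels are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 179-record, 57-feature weather dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(179, 57))                 # weather features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # rain / no-rain labels

# CART tree with entropy splitting, a stand-in for C4.5.
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # five-fold cross validation
print(f"mean CV accuracy: {scores.mean():.2%}")
```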

Keywords: Machine learning, decision tree, artificial neural network, support vector machine, root mean square error.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3215
723 Engineering Geological Characteristics of Soil Materials, East Nile Delta, Egypt

Authors: A. I. M. Ismail, N. Ryden

Abstract:

This paper is concerned with the mineralogy and engineering characteristics of soil materials derived from the eastern part of the Nile Delta. X-ray diffraction shows that the clay minerals of the studied soil are mainly illite (average 72.6%) and kaolinite (average 2.6%), with an expandable portion in the illite-smectite mixed layer (average 7%). Smectite is more abundant in fluviatile clays, whereas kaolinite is more abundant in lagoonal clays; illite and illite-smectite are more abundant in marine clays. The geotechnical results show that the soil under study consists on average of about 0.3% gravel, 5% sand, 51.5% silt, and 42.2% clay. The average shrinkage limit is 11%, whereas the average plasticity index is 23.4%. The free swelling ranges from 40% to 75%, with an average value of 55%, indicating the inadequacy of such soil under foundations. From a construction point of view, the soil under investigation poses many problems even under light foundations because of swelling and shrinkage, which are due to the soil’s high content of the expandable clay minerals illite and smectite. Based on the results of the present and earlier studies, a trial application of soil stabilisation is recommended.

Keywords: Engineering geological investigations, Nile Delta, swelling, shrinkage.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3765
722 Adaptive WiFi Fingerprinting for Location Approximation

Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan

Abstract:

WiFi has become an essential and widely used technology, popular for its convenience with mobile devices, and many of today’s location-based services rely on Wireless Fidelity (WiFi) signal fingerprinting; Foursquare is a common example gaining popularity. In this work, the WiFi signal is used to estimate the user or client’s location. As with GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, the inconsistency of WiFi signals makes the estimate differ across time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. To counter these factors, this work reduces the signal noise and estimates location with the Nearest Neighbour method based on past activities of the signal, increasing the estimation accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching; it acts as the server supporting the client-side application’s decisions. Numerous previous works have adopted methods of collecting signal strengths in a repository over the years, but most were static. This work highlights how the adaptive method matches the received signal to the data in the repository, allowing more accurate location estimation. Adaptive updates store the latest location fingerprint in the repository; redundant location fingerprints are removed so that only the updated version is kept. How the user’s location is predicted is detailed further in the proposed solution section. After a study of previous works, the Artificial Neural Network was found to be the most feasible method for updating the repository and making it adaptive; its function is to pattern-match the WiFi signal against the existing data in the repository.
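
The core estimation step, nearest-neighbour matching of a received RSSI vector against the repository, can be sketched in a few lines. Everything below is hypothetical: the access points, RSSI values, and location labels are invented for illustration and are not the paper's dataset.

```python
import numpy as np

# Each repository row: RSSI (dBm) from 3 access points at a known location.
# All values and labels are hypothetical.
fingerprints = np.array([
    [-45.0, -70.0, -82.0],
    [-60.0, -52.0, -75.0],
    [-80.0, -65.0, -50.0],
])
locations = ["lobby", "corridor", "lab"]       # label for each fingerprint

def estimate_location(rssi):
    """Return the repository label with the smallest Euclidean distance."""
    distances = np.linalg.norm(fingerprints - np.asarray(rssi), axis=1)
    return locations[int(np.argmin(distances))]

# A client scan, e.g. averaged over several samples to reduce signal noise.
print(estimate_location([-58.0, -55.0, -77.0]))   # -> "corridor"
```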

Keywords: Adaptive repository, artificial neural network, location estimation, nearest neighbour Euclidean distance, WiFi RSSI fingerprinting.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3452
721 Solutions for Comfort and Safety on Vibrations Resulting from the Action of the Wind on the Building in the Form of Portico with Four Floors

Authors: G. B. M. Carvalho, V. A. C. Vale, E. T. L. Cöuras Ford

Abstract:

With the aim of increasing the levels of comfort and safety of structures, the study of dynamic loads on buildings has been one of the focuses in the areas of control engineering, civil engineering, and architecture. This work presents a simulation-based study of the dynamics of portico-type buildings subjected to wind action, and it also presents a passive control approach that exploits the dynamics of the structure itself, thereby representing an environmentally appropriate system. These control devices are called dynamic vibration absorbers.
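
As a concrete illustration of sizing a dynamic vibration absorber, the sketch below applies Den Hartog's classic tuning rules for an absorber attached to an undamped primary structure; the structural mass, frequency, and mass ratio are hypothetical, not values from the study.

```python
import numpy as np

# Illustrative absorber sizing via Den Hartog's tuning rules.
m_structure = 2.0e5          # modal mass of the portico frame, kg (assumed)
f_structure = 1.2            # fundamental frequency, Hz (assumed)
mu = 0.05                    # absorber-to-structure mass ratio (design choice)

m_absorber = mu * m_structure
f_opt = f_structure / (1.0 + mu)                        # optimal absorber frequency
zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio

# Spring stiffness that tunes the absorber to f_opt: k = m * (2*pi*f)^2.
k_absorber = m_absorber * (2.0 * np.pi * f_opt) ** 2
print(f"absorber mass: {m_absorber:.0f} kg, frequency: {f_opt:.3f} Hz, "
      f"damping ratio: {zeta_opt:.3f}, stiffness: {k_absorber:.0f} N/m")
```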

Keywords: Dynamic vibration absorber, structure, comfort, safety, wind behavior.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 759
720 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities of the training sentences and those of the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method produces ~93% accuracy at 0 dB SNR.
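
The transform step, greedy matching pursuit over a dictionary of Gabor-style atoms, can be sketched as follows. The dictionary construction, test signal, and iteration count below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Illustrative matching pursuit over a small Gabor-style dictionary.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)

# Build unit-norm Gabor atoms: Gaussian-windowed sinusoids at a few
# centers and frequencies (assumed parameters, not the paper's).
atoms = []
for center in range(0, n, 32):
    for freq in (0.05, 0.1, 0.2):
        g = np.exp(-0.5 * ((t - center) / 16.0) ** 2) * np.cos(2 * np.pi * freq * t)
        atoms.append(g / np.linalg.norm(g))
D = np.array(atoms)                      # dictionary, one atom per row

# A noisy signal composed of two dictionary atoms.
signal = 2.0 * D[5] + 1.0 * D[12] + 0.05 * rng.normal(size=n)

# Greedy matching pursuit: repeatedly pick the atom most correlated with
# the residual and subtract its contribution, building the sparse T-F vector.
residual = signal.copy()
weights = np.zeros(len(D))
for _ in range(10):
    corr = D @ residual
    k = int(np.argmax(np.abs(corr)))
    weights[k] += corr[k]
    residual -= corr[k] * D[k]

print("selected atom indices:", np.nonzero(np.abs(weights) > 0.1)[0])
```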

Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1562