Search results for: high level language
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8908

388 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a European Railway Traffic Management (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using the RAIB 17/2019 report as a primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors, and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identify latent failure conditions (potentially less-than-adequate conditions) by means of the Management Oversight and Risk Tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "Heuristics and Biases" approach into the Management Oversight and Risk Tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and of the bow-tie (fault and event tree) model-based safety risk modelling technique; however, the role of systematic errors due to "Heuristics and Biases" is not yet appreciated. Incorporating this approach overcomes the problem of omitting human and organisational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, transport planners, and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with research evidence drawn from practitioners' and academic researchers' publications. This is used to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the IEC 15288 Systems Engineering standard, as well as in industrial contexts such as GB railways and Artificial Intelligence (AI).

Keywords: Accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach.

387 Social Work Practice to Labour Welfare: A Proposed Model of Field Work Practicum and Role of Social Worker in India

Authors: Naeem Ahmed

Abstract:

Social work is a professional activity based on the approach of "helping people to help themselves" (Stroup). Social work education and practice are both based on a humanitarian philosophy in which social workers try to increase the happiness of society and to reduce its problems. Labour welfare is a specialised field of social work which focuses especially on the welfare of organised and unorganised labour. In India, labour faces numerous problems in both the organised and unorganised sectors because of ignorance, illiteracy, a high rate of unemployment, etc. In most Indian social work institutions, this specialisation exists under different names, such as Human Resource Management, Industrial Relations and Personnel Management, Industrial Relations and Labour Welfare, or Industrial Social Work. Field work practice is an integral part of the social work education curriculum in all specialised fields. In India, different field work practice models are followed in different institutions. The main objective of this paper is to prepare a universal field work practicum model in the field of labour welfare. This paper is exploratory in nature; the researcher used personal experience and secondary data (models of field work practice in institutions such as Aligarh Muslim University, Pondicherry University, Central University of Karnataka, University of Lucknow, MJP Rohilkhand University Bareilly, etc.). The researcher found that there is an immediate need to upgrade the curriculum of field work practice in this particular field, as more than 40 percent of the total population is engaged in either the unorganised or the organised sector (NSSO 2011-12) and is not aware of its rights. In this way, a social worker can play an important role in existing labour welfare facilities by raising this awareness.

Keywords: Fieldwork, labour welfare, organised labour, social work practice, unorganised labour.

386 A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences

Authors: Yuan-Jye Tseng, Ching-Yen Chen

Abstract:

In the design cycle, a main design task is to determine the external shape of the product. The external shape of a product is one of the key factors that can affect customers' preferences, which link to the motivation to buy the product, especially in the case of a consumer electronic product such as a mobile phone. The relationship between the external shape and the customer preferences needs to be studied to enhance the customer's purchase desire and action. In this research, a design for customer preferences model is developed for investigating the relationships between the external shape and the customer preferences of a product. In the first stage, the names of the geometric features are collected and evaluated from the data of the specified internet web pages using the developed text miner. The key geometric features can be determined if the number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are explored using the text miner to collect the internet data from the web pages. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features to divide the external shapes into several groups. Several design suggestion cases can be proposed, for example, a large model, a mid-size model, and a mini model, for designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each of the key geometric features of the design suggestion cases. The design suggestion case with the top ranking of the customer preference index can be selected as the final design of the product. In this paper, a notebook computer is presented as an example product. It shows that the external shape of a product can be used to drive customer preferences. The presented design for customer preferences model is useful for determining a suitable external shape of the product to increase customer preferences.
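
As a rough illustration of the second-stage grouping described above, the sketch below clusters hypothetical geometric feature values with k-means; the feature set, the data, and the simple centroid-based preference index are placeholders for illustration, not the study's text-mined dataset or its actual index.

```python
# Minimal sketch of the cluster-analysis stage: grouping external shapes
# by key geometric features (values here are hypothetical, for illustration only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one product; columns are assumed key geometric features
# (e.g., length, width, thickness in mm) collected by the text miner.
features = np.array([
    [160.0, 75.0, 8.1],
    [146.0, 70.5, 7.4],
    [131.0, 64.0, 7.6],
    [165.5, 76.8, 8.9],
    [148.2, 71.1, 7.9],
    [128.0, 63.3, 7.2],
])

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Divide the external shapes into three groups, e.g. large / mid-size / mini.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster labels:", kmeans.labels_)

# A simple (hypothetical) preference index: rank design suggestion cases
# by the mean standardized value of their cluster centroid.
preference_index = kmeans.cluster_centers_.mean(axis=1)
print("preference index per cluster:", preference_index)
```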

Keywords: Cluster analysis, customer preferences, design evaluation, design for customer preferences, product design.

385 A National Survey of Clinical Psychology Graduate Student Attitudes toward Psychotherapy Treatment Manuals: A Replication Study

Authors: B. Bergström, A. Ladd, A. Jones, L. Rosso, P. Michael

Abstract:

Attitudes toward treatment manuals serve as a meaningful predictor of general attitudes toward evidence-based practice. Despite demonstrating high effectiveness in treating many mental disorders, manualized treatments have been underutilized by practitioners. Thus, one can assess the state of the field regarding the adoption of evidence-based practices by surveying practitioner attitudes towards manualized treatments. This study is an adapted replication that assesses psychology graduate student attitudes towards manualized treatments, as a general marker for attitudes towards evidence-based practice. Training programs provide future clinicians with the foundation for critical skills in clinical practice. Research demonstrates that post-graduate continuing education has little to no effect on clinical practice; thus, graduate programs serve as the primary, and often final, platform for all future practice. However, there are few empirical data identifying the attitudes and training of graduate students in utilizing manualized treatments. The empirical analysis of this study indicates an increase in positive attitudes among graduate students towards manualized treatments (within the United States) when compared to past surveys of professional psychologists. Findings from this study may inform graduate programs of barriers for students in developing positive attitudes toward manualized treatments and evidence-based practice. This study also serves as a preliminary predictor of the state of the field with regard to professional psychologists' attitudes towards evidence-based practice, if attitudes remain stable. This study indicates that attitudes toward utilizing evidence-based practices, such as treatment manuals, have become more positive since the year 2000.

Keywords: Evidence based treatment, Future of clinical science, Manualized treatment, Student attitudes towards evidence based treatments.

384 Simulation and Workspace Analysis of a Tripod Parallel Manipulator

Authors: A. Arockia Selvakumar, R. Sivaramakrishnan, Srinivasa Karthik T. V., Valluri Siva Ramakrishna, B. Vinodh

Abstract:

Industrial robots play a vital role in automation; however, little effort has been made to apply robots to machining work such as grinding, cutting, milling, drilling, polishing, etc. Parallel robot manipulators have high stiffness, rigidity and accuracy, which cannot be provided by conventional serial robot manipulators. The aim of this paper is to perform the modeling and workspace analysis of a 3 DOF Parallel Manipulator (3 DOF PM). The 3 DOF PM was modeled and simulated using ADAMS. The concept involved is based on the transformation of motion from a screw joint to a spherical joint through a connecting link. This work models the Parallel Manipulator (PM) using screw joints for very accurate positioning. A workspace analysis was done to determine the work volume of the 3 DOF PM. The positions of the spherical joints connected to the moving platform and the circumferential points of the moving platform were considered for finding the workspace. After the simulation, the positions of the joints of the moving platform were noted with respect to simulation time, and these points were given as input to MATLAB to obtain the work envelope. AUTOCAD was then used to determine the work volume. The obtained values were compared with an analytical approach using the Pappus-Guldinus theorem. The analysis considered two parameters: the link length and the radius of the moving platform. From the results, it is found that the radius of the moving platform is directly proportional to the work volume for a constant link length, and the link length is likewise directly proportional to the work volume at a constant radius of the moving platform.
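
For reference, the analytical check mentioned above is normally based on the second Pappus-Guldinus theorem, which gives the volume swept when a plane area is revolved about an axis that does not intersect it; the symbols below are generic, not the paper's notation:

$$ V = 2\pi \, \bar{R} \, A $$

where $A$ is the area of the generating plane section (here, the cross-section of the work envelope) and $\bar{R}$ is the distance from the axis of revolution to the centroid of that area.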

Keywords: Three Degrees of Freedom Parallel Manipulator (3 DOF PM), ADAMS, work volume, MATLAB, AUTOCAD, Pappus-Guldinus theorem.

383 A Study of Shear Stress Intensity Factor of PP and HDPE by a Modified Experimental Method together with FEM

Authors: Md. Shafiqul Islam, Abdullah Khan, Sharon Kao-Walter, Li Jian

Abstract:

Shear testing is one of the most complex testing areas, where the available methods and specimen geometries differ from each other. Therefore, a modified shear test specimen (MSTS) combining the simple uniaxial test with a zone of interest (ZOI) is tested, which gives an almost pure shear state. In this study, material parameters of polypropylene (PP) and high-density polyethylene (HDPE) are first measured by tensile tests with a dogbone-shaped specimen. These parameters are then used as input for the finite element analysis. Secondly, a specially designed specimen (MSTS) is used to perform the shear stress tests in a tensile testing machine to obtain results in terms of forces, extension, crack initiation, etc. Scanning Electron Microscopy (SEM) is also performed on the shear fracture surface to examine material behavior. These experiments are then simulated by the finite element method and compared with the experimental results in order to confirm the simulation model. The shear stress state is inspected to assess the usability of the proposed shear specimen. Finally, a geometry correction factor can be established for these two materials for this specific loading and notched geometry using Linear Elastic Fracture Mechanics (LEFM). From these results, the strain energy of shear failure and the stress intensity factor (SIF) in shear of these two polymers are discussed for the specific application of screw cap opening of medical or food packages with a tamper-evident safety solution.
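
As a point of reference, in LEFM the mode II (shear) stress intensity factor of a notched specimen is commonly written with a dimensionless geometry correction factor of the form

$$ K_{II} = Y \, \tau \sqrt{\pi a} $$

where $\tau$ is the nominal applied shear stress, $a$ is the notch (crack) length, and $Y$ is the geometry correction factor; the specific value of $Y$ for the MSTS geometry is what the paper establishes, so $Y$ here is only a placeholder.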

Keywords: Shear test specimen, Stress intensity factor, Finite Element simulation, Scanning electron microscopy, Screw cap opening.

382 Olive Leaves Extract Restored the Antioxidant Perturbations in Red Blood Cell Hemolysate in Streptozotocin-Induced Diabetic Rats

Authors: Ismail I. Abo Ghanema, Kadry M. Sadek

Abstract:

Oxidative stress and overwhelming free radicals associated with diabetes mellitus are likely to be linked with the development of certain complications such as retinopathy, nephropathy and neuropathy. Treatment of diabetic subjects with antioxidants may be of advantage in attenuating these complications. Olive leaf (Olea europaea) has been endowed with many beneficial and health-promoting properties, mostly linked to its antioxidant activity. This study aimed to evaluate the significance of supplementation with olive leaves extract (OLE) in reducing oxidative stress, hyperglycemia and hyperlipidemia in streptozotocin (STZ)-induced diabetic rats. After induction of diabetes, a significant rise in plasma glucose, lipid profiles except high-density lipoprotein cholesterol (HDL-c), and malondialdehyde (MDA), a significant decrease in plasma insulin, HDL-c and plasma reduced glutathione (GSH), and alterations in enzymatic antioxidants were observed in all diabetic animals. During treatment of diabetic rats with 0.5 g/kg body weight of OLE, the levels of plasma MDA, GSH, insulin and lipid profiles, along with blood glucose and erythrocyte enzymatic antioxidant enzymes, were significantly restored to values that were not different from those of normal control rats. Untreated diabetic rats, on the other hand, demonstrated persistent alterations in the oxidative stress marker (MDA), blood glucose, insulin, lipid profiles and the antioxidant parameters. These results demonstrate that OLE may be of advantage in inhibiting the hyperglycemia, hyperlipidemia and oxidative stress induced by diabetes, and suggest that administration of OLE may be helpful in the prevention, or at least reduction, of diabetic complications associated with oxidative stress.

Keywords: Diabetes mellitus, olive leaves, oxidative stress, red blood cells

381 CT-Based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program to calculate dose distributions with the Monte Carlo method in complex heterogeneous systems such as organs or tissues in proton therapy. This interface program was developed under the MATLAB software and includes a friendly graphical user interface with several tools such as image property adjustment and results display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for dose calculations of the proton beam. The result of this technique is a set of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells with the same material. The validation of this method has been done in two ways: first, in comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, in comparison with data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. This program can read the output file of the Monte Carlo code while the region of interest is selected manually, and it gives a plot of the proton beam dose distribution superimposed onto the CT images.
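
A minimal sketch of the quadtree idea described above, assuming a plain 2D NumPy array as the CT slice: each block is split recursively into four quadrants until its intensity range falls below a threshold, so fine cells appear only in and near heterogeneous regions. This is illustrative only and is not the MATLAB interface program itself.

```python
# Illustrative quadtree decomposition of a CT-like 2D array:
# split a block into four quadrants until its intensity range is small enough.
import numpy as np

def quadtree(img, x, y, size, threshold, min_size, leaves):
    block = img[y:y + size, x:x + size]
    # Homogeneous enough (or smallest allowed cell): keep as a single leaf.
    if size <= min_size or (block.max() - block.min()) <= threshold:
        leaves.append((x, y, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree(img, x + dx, y + dy, half, threshold, min_size, leaves)

# Hypothetical 64x64 "CT slice" with one dense, heterogeneous region.
rng = np.random.default_rng(0)
image = np.full((64, 64), 50.0)
image[20:40, 20:40] += rng.normal(0.0, 30.0, size=(20, 20))

cells = []
quadtree(image, 0, 0, 64, threshold=10.0, min_size=4, leaves=cells)
print(f"{len(cells)} non-overlapping square cells generated")
```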

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

380 The Role of Fluid Catalytic Cracking in Process Optimisation for Petroleum Refineries

Authors: Chinwendu R. Nnabalu, Gioia Falcone, Imma Bortone

Abstract:

Petroleum refining is a chemical process in which the raw material (crude oil) is converted to finished commercial products for end users. The fluid catalytic cracking (FCC) unit is a key asset in refineries, requiring optimised processes in the context of engineering design. Following the first stage of separation of crude oil in a distillation tower, an additional 40 per cent quantity is attainable in the gasoline pool with further conversion of the downgraded product of crude oil (residue from the distillation tower) using a catalyst in the FCC process. Effective removal of sulphur oxides, nitrogen oxides, carbon and heavy metals from FCC gasoline requires greater separation efficiency and has enormous environmental significance. The FCC unit is primarily a reactor and regeneration system which employs cyclone systems for separation. Catalyst losses in FCC cyclones lead to high particulate matter emission on the regenerator side and fines carryover into the product on the reactor side. This paper aims at demonstrating the importance of FCC unit design criteria in terms of technical performance and compliance with environmental legislation. A systematic review of state-of-the-art FCC technology was carried out, identifying its key technical challenges and sources of emissions. Case studies of petroleum refineries in Nigeria were assessed against selected global case studies. The review highlights the need for further modelling investigations to help improve FCC design to more effectively meet product specification requirements while complying with stricter environmental legislation.

Keywords: Design, emissions, fluid catalytic cracking, petroleum refineries.

379 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

A pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the town of Haridwar, Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121 and PB-1509) and on the emission of greenhouse gases (CO2, CH4 and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which enumerates the various types of water footprints of an agricultural entity from its production to its processing stages. Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint value of 2474.12 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed a reduction in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml) and potassium (9.78 mg/ml), thus rejuvenating the soil quality and not requiring any external nutritional supplements. The percentage increases in greenhouse gas emissions under irrigation with treated municipal wastewater, as compared to the control plots, were 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The variety Sharbati displayed maximum production (5.5 ton/ha) and emerged as the most resistant variety against pests and insects. The emission values of CH4, CO2 and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d and 400.21 mg/m2/d under water-stagnant conditions. This study highlighted the possibility of successfully reusing wastewater for non-potable purposes, offering the potential for exploiting this resource to replace or reduce the existing use of fresh water sources in the agriculture sector.
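
For context, green water footprints of the kind quoted above are normally computed following the Water Footprint Network convention, dividing the green (rainfall-derived) crop water use by the crop yield; the symbols are generic and the factor 10 converts mm of evapotranspiration over one hectare into m3/ha:

$$ WF_{green} = \frac{CWU_{green}}{Y} = \frac{10 \sum_{d} ET_{green,d}}{Y} \quad \left[\mathrm{m^3/ton}\right] $$

where $ET_{green,d}$ is the daily green evapotranspiration in mm over the growing period and $Y$ is the crop yield in ton/ha.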

Keywords: Greenhouse gases, nutrients, water footprint, wastewater irrigation.

378 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier due to their ability to propagate over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. Experimental results are compared based on the average bit error percentage. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
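
A minimal sketch of the coherent demodulation step for the BPSK case described above: each bit interval of the received signal is correlated with a synchronized reference carrier and the sign of the correlation decides the bit. The carrier frequency, sample rate, noise level and cycles-per-bit values below are placeholders, not the experimental settings.

```python
# Illustrative BPSK modulation + coherent demodulation over one noisy channel.
import numpy as np

fs = 1_000_000          # sample rate (Hz), placeholder
fc = 100_000            # carrier frequency (Hz), placeholder
cycles_per_bit = 20     # number of carrier cycles per bit
samples_per_bit = int(fs * cycles_per_bit / fc)

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=200)

t_bit = np.arange(samples_per_bit) / fs
carrier = np.cos(2 * np.pi * fc * t_bit)

# BPSK: bit 1 -> +carrier, bit 0 -> -carrier (phase shift of pi).
tx = np.concatenate([(2 * b - 1) * carrier for b in bits])
rx = tx + rng.normal(0.0, 0.5, size=tx.size)   # additive noise

# Coherent demodulation: correlate each bit interval with the reference carrier.
rx_bits = []
for k in range(bits.size):
    segment = rx[k * samples_per_bit:(k + 1) * samples_per_bit]
    correlation = np.dot(segment, carrier)
    rx_bits.append(1 if correlation > 0 else 0)

errors = np.count_nonzero(np.array(rx_bits) != bits)
print(f"bit error percentage: {100 * errors / bits.size:.2f}%")
```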

Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.

377 Guidelines for Sustainable Urban Mobility in Historic Districts from International Experiences

Authors: Tamer ElSerafi

Abstract:

In recent approaches to heritage conservation, the whole context of historic areas becomes as important as the single historic building. This makes the provision of infrastructure and a mobility network an effective element in urban conservation. Sustainable urban conservation projects consider the high density of activities, the need for good-quality access to the transit system, and the importance of the configuration of the mobility network, identifying the best way to connect the different districts of the urban area through a unique, complex system that helps synergic development achieve a sustainable mobility system. Sustainable urban mobility is a key factor in maintaining the integrity between socio-cultural and functional aspects. The first part of this paper illustrates the mobility aspects, mobility problems in historic districts, and the needs of mobility systems. The second part is a practical analysis of different mobility plans. It is challenging to find innovative and creative conservation solutions fitting modern uses and needs without risking the loss of inherited built resources. Urban mobility management is becoming an essential and challenging issue in urban conservation projects. Based on a literature review and practical analysis, this paper tries to define and clarify guidelines for mobility management in historic districts as a key element in the sustainability of urban conservation and development projects. Such rules and principles could control the conflict between socio-cultural and economic activities and the different needs for mobility in these districts in a sustainable way. The practical analysis includes a comparison between mobility plans which have been implemented in different cities: Freiburg in Germany, Zurich in Switzerland and Bray Town in Ireland. This paper concludes with a matrix of guidelines that considers both principles of sustainability and livability factors in urban historic districts.

Keywords: Sustainable mobility, urban mobility, mobility management, historic districts.

376 Biodegradation of Malathion by Acinetobacter baumannii Strain AFA Isolated from Domestic Sewage in Egypt

Authors: Ahmed F. Azmy, Amal E. Saafan, Tamer M. Essam, Magdy A. Amin, Shaban H. Ahmed

Abstract:

Bacterial strains capable of degrading malathion were isolated from domestic sewage by an enrichment culture technique. Three bacterial strains were screened and identified as Acinetobacter baumannii (AFA), Pseudomonas aeruginosa (PS1), and Pseudomonas mendocina (PS2) based on morphological and biochemical identification and 16S rRNA sequence analysis. Acinetobacter baumannii AFA was the most efficient malathion-degrading bacterium and was therefore used for the further biodegradation study. AFA was able to grow in mineral salt medium (MSM) supplemented with malathion (100 mg/l) as a sole carbon source, and within 14 days, 84% of the initial dose was degraded by the isolate, as measured by high-performance liquid chromatography. Strain AFA could also degrade other organophosphorus compounds including diazinon, chlorpyrifos and fenitrothion. The effects of different culture conditions on the degradation of malathion, such as inoculum density, other carbon or nitrogen sources, temperature and shaking, were examined. Degradation of malathion and bacterial cell growth were accelerated when the culture media were supplemented with yeast extract, glucose and citrate. The optimum conditions for malathion degradation by strain AFA were an inoculum density of 1.5x10^12 CFU/ml at 30°C with shaking. Specific polymerase chain reaction primers were designed manually using a multiple sequence alignment of the corresponding carboxylesterase enzymes of Acinetobacter species. The sequencing result of the amplified PCR product and phylogenetic analysis showed a low degree of homology with the other carboxylesterase enzymes of Acinetobacter strains, so we suggest that this enzyme is a novel esterase. The isolated bacterial strains may have a potential role in the bioremediation of malathion contamination.

Keywords: Acinetobacter baumannii, biodegradation, Malathion, organophosphate pesticides.

375 A Case Study on the Efficacy of Technical Laboratory Safety in Polytechnic

Authors: Zulhisyam Salleh, Erita M. Mazlan, Saiful A. Mazlan, Norzainariah A. Hassan, Fizatul A. Patakor

Abstract:

Technical laboratories are typically considered highly hazardous places in polytechnic institutions when addressing the problems of high incidence and fatality rates. In conjunction with several topics covered in the technical curriculum, safety and health precautions should be highlighted in order to connect to a few key ideas of being safe. Therefore, an assessment of awareness of safety and health hazards and risks at laboratories is needed and has to be incorporated into technical education and other training programmes. The purpose of this study was to determine the efficacy of technical laboratory safety in one of the polytechnics in the northern region. The study examined three related issues: the availability of safety material and equipment, the safety practices adopted by technical teachers, and administrators' attitudes in enforcing safety among students. A model of technical laboratory efficacy was developed to test the linear relationships between existing safety material and equipment, teachers' safety practices, and administrators' attitudes in enforcing safety, and to identify which of these technical laboratory safety issues was the most pertinent factor in realizing safety in the technical laboratory. This was done by analyzing survey-based data sets, particularly those obtained from a sample of 210 students in the polytechnic. The Pearson correlation was used to measure the association between the variables and to test the research hypotheses. The results of the study show that there was a significant correlation between existing safety material and equipment, the safety practices adopted by teachers, and administrators' attitudes. There was also a significant relationship between technical laboratory safety and the safety practices adopted by teachers, and between technical laboratory safety and administrators' attitudes. Hence, the safety practices adopted by teachers and administrators' attitudes are vital in realizing technical laboratory safety.
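
A minimal sketch of the correlation-based hypothesis testing described above, assuming each survey construct has already been reduced to a per-respondent score; the variable names and synthetic scores are hypothetical, and only the use of the Pearson correlation mirrors the study.

```python
# Illustrative Pearson correlations between survey constructs
# (scores are hypothetical per-respondent means, not the study's data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 210  # sample size reported in the study

safety_equipment = rng.uniform(2.0, 5.0, n)                            # availability of safety material/equipment
teacher_practice = 0.6 * safety_equipment + rng.normal(0, 0.5, n)      # teachers' safety practice
lab_safety = 0.5 * teacher_practice + 0.3 * safety_equipment + rng.normal(0, 0.5, n)

pairs = [("equipment vs practice", safety_equipment, teacher_practice),
         ("practice vs lab safety", teacher_practice, lab_safety)]
for name, x, y in pairs:
    r, p = pearsonr(x, y)
    print(f"{name}: r = {r:.3f}, p = {p:.4f}")
```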

Keywords: Polytechnic, Safety attitudes, Safety practices, Technical laboratory

374 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions

Authors: Guneet Saini, Shahrukh, Sunil Sharma

Abstract:

Traffic congestion is the most critical issue faced by those in the transportation profession today. Over the past few years, roundabouts have been recognized as a measure to promote efficiency at intersections globally. In developing countries like India, this type of intersection still faces a lot of issues, such as bottleneck situations, long queues and increased waiting times, due to increasing traffic, which in turn affects the performance of the entire urban network. This research is a case study of a non-conventional roundabout, in terms of geometric design, in a small town in India. These types of roundabouts should be analyzed for their functionality in the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool to analyze traffic conditions and estimate various measures of the operational performance of intersections, such as capacity, vehicle delay, queue length and Level of Service (LOS) of an urban roadway network. This study involves the analysis of an unsymmetrical, non-circular, 6-legged roundabout known as "Kala Aam Chauraha" in the small town of Bulandshahr in Uttar Pradesh, India, using the VISSIM simulation package, which is the most widely used software for microscopic traffic simulation. For coding in VISSIM, data are collected from the site during the morning and evening peak hours of a weekday and then analyzed for base model building. The model is calibrated on driving behavior and vehicle parameters, and an optimal set of calibrated parameters is obtained, followed by validation of the model to obtain the base model which can replicate the real field conditions. This calibrated and validated model is then used to analyze the prevailing operational traffic performance of the roundabout, which is then compared with a proposed alternative to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. The study results show that the proposed alternative is an improvement over the present roundabout, as it considerably reduces congestion, vehicle delay and queue length and hence successfully improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in the case of developing countries. From this study, it can be concluded that there is a need to improve the current geometry of such roundabouts to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection, and hence this proposal may be considered a best fit.

Keywords: Operational performance, roundabout, simulation, VISSIM, traffic.

373 LAYMOD: A Layered and Modular Platform for CAx Collaboration Management and Product Data Integration Based on the STEP Standard

Authors: Omid F. Valilai, Mahmoud Houshmand

Abstract:

Nowadays, companies strive to survive in a competitive global environment. To speed up product development and modifications, it is suggested to adopt a collaborative product development approach. However, despite the advantages of new IT improvements, many CAx systems still work separately and locally. Collaborative design and manufacture require a product information model that supports the related CAx product data models. To solve this problem, many solutions have been proposed, the most successful of which is adopting the STEP standard as a product data model to develop a collaborative CAx platform. However, several factors usually slow down the implementation of the STEP standard in collaborative data exchange, management and integration and should be considered: the evolution of the STEP Application Protocols (APs) over time, the huge number of STEP APs and CCs, the high costs of implementation, the costly process of converting older CAx software files to the STEP neutral file format, and the lack of STEP knowledge. In this paper, the requirements for a successful collaborative CAx system are discussed. The STEP standard's capability for product data integration and its shortcomings, as well as the dominant platforms for supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a collaborative CAx environment and to integrate product data. The platform is a layered platform that enables global collaboration among different CAx software packages and developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages as well as to overcome the STEP standard's limitations. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding contradictions in product data integration are introduced.

Keywords: CAx, Collaboration management, STEP application modules, STEP standard, XML data structures

372 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning

Authors: Mayowa A. Sofowora, Seraphim D. Eyono Obono

Abstract:

The use of information and communication technologies such as computers, mobile phones and the Internet is becoming prevalent in today's world, and it is facilitating access to a vast amount of data, services and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when damaged or lost; this problem serves as a motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives unfolding this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objectives is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the ILembe and UMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations, after checking the reliability and validity of the questionnaires. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on one hand, and their perceptions on the reliability of cellphones on the other hand, as suggested by existing literature; except that attribution identities are considered in this study under three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.

Keywords: Attribution, Cellphones, E-learning, Reliability

371 Evaluation of the Discoloration of Methyl Orange Using Black Sand as Semiconductor through Photocatalytic Oxidation and Reduction

Authors: P. Acosta-Santamaría, A. Ibatá-Soto, A. López-Vásquez

Abstract:

Organic compounds in wastewater from the textile and pharmaceutical industries generate multiple harmful effects on the environment and human health. One of them is methyl orange (MeO), an azo dye considered to be a recalcitrant compound. Heterogeneous photocatalysis emerges as an alternative for treating this type of hazardous compound through the generation of OH radicals using radiation and a semiconductor oxide. According to the authors' knowledge, catalysts such as TiO2 doped with metals show high efficiency in degrading MeO; however, this presents economic limitations on an industrial scale. Black sand can be considered a naturally doped catalyst because it is common to find in its structure compounds such as titanium, iron and aluminum oxides, as well as elements such as zircon, cadmium, manganese, etc. This study reports the photocatalytic activity of the mineral black sand used as a semiconductor in the discoloration of MeO by oxidation and reduction photocatalytic techniques. For this, magnetic composites from the mineral were prepared (RM, M1, M2 and NM) and their activity was tested through MeO discoloration, while TiO2 was used as a reference. Chemical, morphological and structural characterizations of the fractions were performed using Scanning Electron Microscopy with Energy Dispersive X-Ray (SEM-EDX), X-Ray Diffraction (XRD) and X-Ray Fluorescence (XRF) analysis. The M2 fraction showed the highest MeO discoloration (93%) under oxidation conditions at pH 2, which could be due to the presence of ferric oxides. However, the best result for the reduction process was obtained with the M1 fraction (20%) at pH 2, which contains a higher titanium percentage. In the first process, hydrogen peroxide (H2O2) was used as an electron donor agent. According to the results, black sand mineral can be used as a natural semiconductor in photocatalytic processes. It could be considered a photocatalyst precursor in such processes due to its low cost and easy availability.

Keywords: Black sand mineral, methyl orange, oxidation, photocatalysis, reduction.

370 Evaluating the Validity of Computational Fluid Dynamics Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidence from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the mean geometric bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
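
For reference, the three statistical performance measures named above are commonly defined as follows for observed (Co) and predicted (Cp) concentrations; this is a generic sketch of the conventional dispersion-model validation metrics, not the authors' code.

```python
# Conventional definitions of the dispersion-model performance measures
# mentioned above (co = observed, cp = predicted concentrations).
import numpy as np

def fractional_bias(co, cp):
    return 2.0 * (np.mean(co) - np.mean(cp)) / (np.mean(co) + np.mean(cp))

def mean_geometric_bias(co, cp):
    # Requires strictly positive concentrations.
    return np.exp(np.mean(np.log(co)) - np.mean(np.log(cp)))

def normalized_mean_square_error(co, cp):
    return np.mean((co - cp) ** 2) / (np.mean(co) * np.mean(cp))

# Hypothetical paired samples, for illustration only.
co = np.array([1.2, 0.8, 2.5, 3.1, 0.9])
cp = np.array([1.0, 1.1, 2.2, 2.8, 1.2])
print("FB   =", round(fractional_bias(co, cp), 3))
print("MG   =", round(mean_geometric_bias(co, cp), 3))
print("NMSE =", round(normalized_mean_square_error(co, cp), 3))
```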

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow.

369 Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool

Authors: Florin Pop

Abstract:

Simulation is a very powerful method for high-performance and high-quality design in distributed systems, and may now be the only one, considering the heterogeneity, complexity and cost of distributed systems. In Grid environments, for example, it is hard and even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time consuming. Scalability, reliability and fault tolerance become important requirements for distributed systems in order to support distributed computation. A distributed system with such characteristics is called dependable. Large environments, like the Cloud, offer unique advantages, such as low cost and dependability, and satisfy QoS for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed in order to satisfy user constraints while at the same time considering the independent capabilities of resources. This analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation. The simulator is MONARC, a powerful tool for the simulation of large-scale distributed systems. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and support empirically based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various actual situations.
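
As a concrete example of the kind of heuristic such an evaluation covers, the sketch below implements the classic min-min list-scheduling heuristic on an expected-time-to-compute matrix; it is illustrative only, independent of the MONARC simulator, and the ETC values are hypothetical.

```python
# Illustrative min-min scheduling heuristic: repeatedly pick the task whose
# earliest possible completion time (over all resources) is smallest.
def min_min(etc):
    """etc[t][r] = expected execution time of task t on resource r."""
    n_tasks, n_res = len(etc), len(etc[0])
    ready = [0.0] * n_res          # time each resource becomes free
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        best = None                # (completion_time, task, resource)
        for t in unscheduled:
            for r in range(n_res):
                finish = ready[r] + etc[t][r]
                if best is None or finish < best[0]:
                    best = (finish, t, r)
        finish, t, r = best
        ready[r] = finish
        schedule[t] = r
        unscheduled.remove(t)
    return schedule, max(ready)    # task-to-resource mapping and makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17]]  # hypothetical ETC matrix
mapping, makespan = min_min(etc)
print("task -> resource:", mapping, "makespan:", makespan)
```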

Keywords: Scheduling, Simulation, Performance Evaluation, QoS, Distributed Systems, MONARC

368 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical in pace with the value-added multimedia services growing increasingly in two-tier cellular networks. Spectrum allocation is one of the effective methods in interference mitigation technology. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aiming at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aiming at suppressing co-channel interference in the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing the spectrum allocation in a distributed fashion. Numerical results show that the signal-to-interference-and-noise ratio can be noticeably improved through the spectrum allocation scheme and that the users' downlink quality of service can be satisfied. Moreover, the simulation results show that the average spectrum efficiency in the cellular network can be significantly improved.
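
A minimal sketch of the non-cooperative game described above: femto base stations are players, channels are strategies, and each station iteratively best-responds to a distance-weighted, negative-logarithm co-channel cost. The positions, channel count and cost expression are schematic stand-ins, not the paper's exact utility function.

```python
# Illustrative best-response dynamics for downlink channel selection by femtocells.
# Cost of a channel = sum over co-channel stations of -log(normalized distance):
# a schematic stand-in for the interference function described in the abstract.
import math
import random

positions = [(0, 0), (30, 10), (15, 40), (45, 35), (60, 5)]   # hypothetical femto BS positions (m)
n_channels = 3
d_max = 100.0                                                  # normalization distance

def cost(i, channel, assignment):
    c = 0.0
    for j, ch in enumerate(assignment):
        if j != i and ch == channel:
            d = math.dist(positions[i], positions[j])
            c += -math.log(d / d_max)      # closer co-channel neighbor -> higher cost
    return c

random.seed(0)
assignment = [random.randrange(n_channels) for _ in positions]

changed = True
while changed:                              # iterate best responses until no station wants to switch
    changed = False
    for i in range(len(positions)):
        best = min(range(n_channels), key=lambda ch: cost(i, ch, assignment))
        if best != assignment[i]:
            assignment[i] = best
            changed = True

print("channel assignment at equilibrium:", assignment)
```

Because the pairwise co-channel cost is symmetric, this toy game is a potential game, so the best-response loop is guaranteed to terminate at a pure-strategy equilibrium.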

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

367 Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms

Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias

Abstract:

Annotation of a protein sequence is pivotal for the understanding of its function. The accuracy of manual annotation provided by curators is still questionable, owing to weaker evidence strength, and it remains a hard and time-consuming task. A number of computational methods, including tools, have been developed to tackle this challenging task. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive and blind sequence similarity searches like the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. This method is fully based on Gene Ontology data and annotations. Two problems had to be addressed to achieve this method. The first problem relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easier to assess and process. These files can then be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second problem involves searching for a set of Gene Ontology terms semantically similar to a given query. The details of the macro and micro problems involved and their solutions, including the objective of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and a Gene Ontology-based protein sequence annotation tool, namely extended UTMGO, are presented. Furthermore, its basic version, which is a Gene Ontology browser based on semantic similarity search, is also introduced.
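
A minimal sketch of the second sub-problem (retrieving semantically similar GO terms): each term is represented by its set of ancestors in the is_a hierarchy and similarity is scored by ancestor overlap (a simple Jaccard-style measure). The toy hierarchy and term IDs below are placeholders, not real GO data and not the extended UTMGO implementation.

```python
# Illustrative semantic similarity between GO terms via shared-ancestor overlap.
# The toy "is_a" hierarchy and term IDs are placeholders, not real GO content.
parents = {
    "GO:5": ["GO:3"], "GO:4": ["GO:2"], "GO:3": ["GO:1"],
    "GO:2": ["GO:1"], "GO:1": [],
}

def ancestors(term):
    """All ancestors of a term (including itself) in the is_a hierarchy."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, []))
    return seen

def similarity(a, b):
    """Jaccard-style overlap of ancestor sets."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

query = "GO:5"
candidates = ["GO:4", "GO:3", "GO:2"]
ranked = sorted(candidates, key=lambda t: similarity(query, t), reverse=True)
print([(t, round(similarity(query, t), 2)) for t in ranked])
```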

Keywords: automatic clustering, bioinformatics tool, gene ontology, protein sequence annotation, semantic similarity search

366 Qualitative Analysis of Current Child Custody Evaluation Practices

Authors: Carolyn J. Ortega, Stephen E. Berger

Abstract:

The role of the custody evaluator is perhaps one of the most controversial and risky endeavors in clinical practice. Complaints filed with licensing boards regarding a child-custody evaluation constitute the second most common reason for such filings. Although the evaluator is expected to answer for the family-law court what is in the "best interest of the child," there is a lack of clarity on how to establish this in any empirically validated manner. Hence, practitioners must contend with a nebulous framework in formulating their methodological procedures, which inherently places them at risk in an already litigious context. This study sought to qualitatively investigate patterns of practice among doctoral practitioners conducting child custody evaluations in the area of Southern California. Ten psychologists were interviewed who devoted between 25% and 100% of their California private practice to custody work. All held Ph.D. degrees, with a range of eight to 36 years of experience in custody work. Semi-structured interviews were used to investigate assessment practices, adherence to guidelines, risk management, and qualities of evaluators. Forty-three Specific Themes were identified using Interpretive Phenomenological Analysis (IPA). Seven Higher Order Themes clustered on salient factors such as use of Ethics, Law, and Guidelines; Parent Variables; Child Variables; Psychologist Variables; Testing; Literature; and Trends. Evaluators were aware of the ever-present reality of a licensure complaint and thus presented idiosyncratic descriptions of risk management considerations. Ambiguity about quantifying and validly tapping parenting abilities was also reviewed. Findings from this study suggest a high reliance on unstructured and observational methods in child custody practices.

Keywords: Forensic psychology, psychological testing, assessment methodology, child custody.

365 Fundamental Natural Frequency of Chromite Composite Floor System

Authors: Farhad Abbas Gandomkar, Mona Danesh

Abstract:

This paper aims to determine the Fundamental Natural Frequency (FNF) of a structural composite floor system known as Chromite. To achieve this purpose, the FNFs of the studied panels are determined by developing Finite Element Models (FEMs) in the ABAQUS program. The American Institute of Steel Construction (AISC) Steel Design Guide Series 11 presents a fundamental formula to calculate the FNF of a steel-framed floor system, and this formula has been used to verify the results of the FEMs. The variability of the FNF of the studied system is determined under various parameters such as the dimensions of the floor, the boundary conditions, the rigidity of the main and secondary beams around the floor, the thickness of the concrete slab, the height of the composite joists, the distance between the composite joists, the thickness of the top and bottom flanges of the open web steel joists, and the addition of a tie beam perpendicular to the composite joists. The results show that changes in the dimensions of the system, its boundary conditions, the rigidity of the main beam, and the addition of a tie beam significantly change the FNF of the system, by up to 452.9%, 50.8%, -52.2% and 52.6%, respectively. In addition, increasing the thickness of the concrete slab increases the FNF of the system by up to 10.8%. Furthermore, the results demonstrate that variations in the rigidity of the secondary beam, the height of the composite joists, the distance between the composite joists, and the thickness of the top and bottom flanges of the open web steel joists change the FNF of the studied system insignificantly, by up to -0.02%, -3%, -6.1% and 0.96%, respectively. Finally, the results of this study help designers predict the occurrence of resonance, the comfortableness, and the design criteria of the studied system.
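
For reference, the AISC Design Guide 11 verification formula mentioned above estimates the fundamental natural frequency of a floor member or panel from its midspan deflection under the weight it supports:

$$ f_n = 0.18 \sqrt{\frac{g}{\Delta}} $$

where $g$ is the gravitational acceleration and $\Delta$ is the midspan deflection under the supported weight (for a panel, the combined beam and joist deflections are used). The symbols follow the design guide's usual notation.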

Keywords: Fundamental natural frequency, chromite composite floor system, finite element method, low and high frequency floors, comfortableness, resonance.

364 Optimization of the Characteristic Straight Line Method by a "Best Estimate" of Observed, Normal Orthometric Elevation Differences

Authors: Mahmoud M. S. Albattah

Abstract:

In this paper, to optimize the "Characteristic Straight Line Method", which is used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available. Observational procedures have also been designed to provide the most effective method to acquire data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip which results from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, whereby all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method". It permits evaluating a displacement of very small magnitude, even when the displacement is of an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from reference point O to the "Characteristic Straight Line". Its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.

Keywords: Characteristic straight line method, dynamic height, landslides, orthometric height, systematic errors.

363 Antioxidant Properties and Nutritive Values of Raw and Cooked Pool Barb (Puntius sophore) of Eastern Himalayas

Authors: Ch. Sarojnalini, Wahengbam Sarjubala Devi

Abstract:

The antioxidant properties and nutritive values of raw and cooked Pool barb, Puntius sophore (Hamilton-Buchanan), of the Eastern Himalayas, India, were determined. The antioxidant activity of the methanol extract of the raw, steamed, fried and curried Pool barb was evaluated by using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) scavenging assay. In the DPPH scavenging assay, the IC50 values of the raw, steamed, fried and curried Pool barb were 1.66, 16.09, 8.99 and 0.59 micrograms/ml, respectively, whereas the IC50 of the reference ascorbic acid was 46.66 micrograms/ml. These results showed that the fish has high antioxidant activity. Protein content was found to be highest in the raw (20.50±0.08%) and lowest in the curried (18.66±0.13%) samples. The moisture content in the raw, fried and curried samples was 76.35±0.09, 46.27±0.14 and 57.46±0.24%, respectively. Lipid content was 2.46±0.14% in the raw and 21.76±0.10% in the curried samples. Ash content varied from 12.57±0.11 to 22.53±0.07%. The total amino acids varied between 36.79±0.02 and 288.43±0.12 mg/100 g. Eleven essential mineral elements were found to be abundant in all the samples. The samples had considerable amounts of Fe, ranging from 152.17 to 320.39 mg/100 g, Ca from 902.06 to 1356.02 mg/100 g, Zn from 91.07 to 138.14 mg/100 g, K from 193.25 to 261.56 mg/100 g, and Mg from 225.06 to 229.10 mg/100 g. Ni was not detected in the curried fish. The Mg and K contents were significantly decreased by the frying method; however, the Fe, Cu, Ca, Co and Mn contents were increased significantly in all the cooked samples. The Mg and Na contents were significantly increased in the curried sample, and the Cr content was decreased significantly (p<0.05) in all the cooked samples.
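
A minimal sketch of how DPPH results of this kind are typically reduced to an IC50: percent inhibition is computed from control and sample absorbances, and the concentration giving 50% inhibition is interpolated. The absorbance values and concentrations below are hypothetical, not the study's data.

```python
# Illustrative DPPH assay reduction: percent inhibition and IC50 by interpolation.
import numpy as np

def percent_inhibition(a_control, a_sample):
    """% scavenging of the DPPH radical relative to the control absorbance."""
    return 100.0 * (a_control - a_sample) / a_control

# Hypothetical dose-response data (extract concentration vs. absorbance at 517 nm).
concentrations = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # micrograms/ml
a_control = 0.90
a_samples = np.array([0.80, 0.68, 0.52, 0.33, 0.15])

inhibition = percent_inhibition(a_control, a_samples)

# IC50: concentration at which inhibition crosses 50%, by linear interpolation.
ic50 = np.interp(50.0, inhibition, concentrations)
print("inhibition (%):", np.round(inhibition, 1))
print(f"IC50 = {ic50:.2f} micrograms/ml")
```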

Keywords: Antioxidant property, Pool barb, minerals, amino acids, proximate composition, cooking methods.

362 Use of Waste Tire Rubber Alkali-Activated-Based Mortars in Repair of Concrete Structures

Authors: Mohammad Ebrahim Kianifar, Ehsan Ahmadi

Abstract:

Reinforced concrete structures experience local defects such as cracks over their lifetime under various environmental loadings. Consequently, they are repaired with mortars to avoid detrimental effects such as corrosion of the reinforcement, which in the long term may lead to strength loss of a member or collapse of the structure. However, repaired structures may need multiple repairs due to changes in load distribution and the resulting lack of compatibility between the mortar and the substrate concrete. On the other hand, waste tire rubber alkali-activated (WTRAA)-based materials have very high potential to be used as repair mortars because of their ductility and flexibility, which may delay failure of the repair mortar and thus provide sufficient compatibility. Hence, this work presents an experimental study on the suitability of WTRAA-based materials as mortars for the repair of concrete structures. To this end, WTRAA mortars with 15% aggregate replacement, alkali-activated (AA) mortars, and ordinary mortars were made to repair a number of concrete beams. The WTRAA mortars are composed of slag as the base material, sodium hydroxide as the alkaline activator, and different gradations of waste tire rubber (fine and coarse). Flexural tests were conducted on the concrete beams repaired with the ordinary, AA, and WTRAA mortars. It is found that, despite having lower compressive strength and modulus of elasticity, the WTRAA and AA mortars increase the flexural strength of the repaired beams, give compatible failure modes, and provide sufficient mortar-concrete interface bonding. The ordinary mortars, however, show incompatible failure modes. This study demonstrates the promising application of WTRAA mortars in practical repairs of concrete structures.
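
As a minimal sketch of how flexural strength is commonly reported from a bending test (the abstract does not state which test configuration was used), the snippet below applies the standard three-point bending relation; the load and specimen dimensions are hypothetical.

    def flexural_strength_3pt(load_N, span_mm, width_mm, depth_mm):
        # sigma_f = 3*F*L / (2*b*d^2), returned in MPa (N/mm^2)
        return 3.0 * load_N * span_mm / (2.0 * width_mm * depth_mm ** 2)

    # Hypothetical repaired-beam test: 12 kN failure load, 400 mm span, 100 x 100 mm cross-section
    print(round(flexural_strength_3pt(12000.0, 400.0, 100.0, 100.0), 2), "MPa")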

Keywords: Alkali-activated mortars, concrete repair, mortar compatibility, flexural strength, waste tire rubber.

361 Amelioration of Cardiac Arrhythmia Classification Performance Using Artificial Neural Network, Adaptive Neuro-Fuzzy and Fuzzy Inference Systems Classifiers

Authors: Alexandre Boum, Salomon Madinatou

Abstract:

This paper aims to make a scientific contribution to cardiac arrhythmia biomedical diagnosis systems, more precisely to the study of the amelioration of cardiac arrhythmia classification performance using artificial neural network, adaptive neuro-fuzzy and fuzzy inference system classifiers. The purpose of this amelioration is to enable cardiologists to make reliable diagnoses through automatic cardiac arrhythmia analyses and classifications based on high-confidence classifiers. In this study, six classes of the most commonly encountered arrhythmias are considered: Right Bundle Branch Block, Left Bundle Branch Block, Ventricular Extrasystole, Auricular Extrasystole, Atrial Fibrillation and the normal cardiac beat. From the parameters extracted from the electrocardiogram (ECG), we constructed a 360x360 matrix serving as the input data sample for the classifiers based on neural networks and a 1x6 matrix for the classifier based on fuzzy logic. By varying three parameters (the quality of the neural network learning, the data size and the quality of the input parameters), the automatic classification yielded the following performances: in terms of correct classification rate, 83.6% was obtained with the fuzzy-logic-based classifier, 99.7% with the neural-network-based classifier and 99.8% with the adaptive neuro-fuzzy-based classifier. These results are based on signals containing at least 360 cardiac cycles. Based on the comparative analysis of the aforementioned three arrhythmia classifiers, the classifiers based on neural networks exhibit better performance.
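
To make the classification set-up concrete, the sketch below trains a small feed-forward neural network on per-cycle feature vectors for six rhythm classes; the random stand-in data, the scikit-learn classifier and the (64, 32) architecture are illustrative assumptions and do not reproduce the study's networks or its actual 360x360 input matrix.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(360, 360))      # one synthetic feature vector per cardiac cycle
    y = rng.integers(0, 6, size=360)     # six arrhythmia classes, labelled 0..5

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("correct classification rate:", clf.score(X_test, y_test))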

Keywords: Adaptive neuro-fuzzy, artificial neural network, cardiac arrhythmias, fuzzy inference systems.

360 The Flashnews as a Commercial Session of Political Marketing: The Content Analysis of the Embedded Political Narratives in Non-Political Media Products

Authors: Zsolt Szabolcsi

Abstract:

Political communication in Hungary has undergone a significant change in the 2010s. One element of the transformation is the Flashnews. This media product was launched in March 2015, and since then 40-50 blocks have been broadcast daily on 5 channels. Flashnews blocks are condensed news sessions containing a summary of political narratives. A block starts with the introduction of the narrator, then usually four news topics are presented, and finally the narrator concludes the block. The block lasts only one minute and therefore provides a brief glimpse of the main narratives of political communication at the time. Beyond its rapid pace, what makes its avoidance difficult is that these blocks always occupy the first position in the commercial break of a non-political media product. Although it is only one minute long, its significance is high. The content of the Flashnews reflects the main governmental narratives, and the Flashnews is therefore part of the agenda-setting capacity of political communication. It reaches media consumers who have limited knowledge of and interest in politics and whose use of media products is not politically related. For this audience, the Flashnews pops up in the same way as commercials. Due to its structure and appearance, the impact of the Flashnews seems to be similar to that of commercials embedded in the breaks of media products. It activates existing knowledge constructs, builds up associational links and maintains their presence without the recipient being aware of the phenomenon. The research aims to examine the extent to which the Flashnews and the main news narratives are identical in their content. This aim is realized through content analysis of the two news products, examining the Flashnews and the evening news during major sport events from 2016 to 2018. The initial hypothesis of the research is that the Flashnews contributes to the news management techniques used for the effective articulation of political narratives on public service media channels.
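
A minimal sketch, assuming the coded narratives of the two news products are available as simple label sets, of one way their overlap could be quantified; the topic labels and the Jaccard measure are illustrative assumptions, not the coding scheme of the study.

    def narrative_overlap(flashnews_topics, evening_news_topics):
        # Jaccard similarity of the coded topic sets (1.0 = identical content)
        a, b = set(flashnews_topics), set(evening_news_topics)
        return len(a & b) / len(a | b) if a | b else 0.0

    flashnews = ["migration", "economy", "sport_event", "government_programme"]
    evening_news = ["migration", "economy", "opposition", "government_programme"]
    print(round(narrative_overlap(flashnews, evening_news), 2))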

Keywords: Flashnews, political communication, political marketing, news management.

359 Machinability Analysis in Drilling Flax Fiber-Reinforced Polylactic Acid Bio-Composite Laminates

Authors: Amirhossein Lotfi, Huaizhong Li, Dzung Viet Dao

Abstract:

Interest in natural fiber-reinforced composites (NFRC) is growing steadily, both in academic research and in industrial applications, thanks to their advantages such as low cost, biodegradability, eco-friendly nature and relatively good mechanical properties. However, their widespread use is still considered challenging because of their non-homogeneous structure and the limited knowledge of their machinability characteristics and of the parameter settings required to avoid defects associated with the machining process. The present work investigates the effect of cutting tool geometry and material on the drilling-induced delamination, thrust force and hole quality produced when drilling a fully biodegradable flax/poly(lactic acid) composite laminate. Three drills with different geometries and materials were used at different drilling conditions to evaluate the machinability of the fabricated composites. The experimental results indicated that the choice of cutting tool, in terms of material and geometry, has a noticeable influence on the cutting thrust force and, consequently, on the drilling-induced damage. A lower thrust force and better hole quality were observed with the high-speed steel (HSS) drill, whereas the carbide drill (with a point angle of 130°) resulted in the highest thrust force. The carbide drill presented higher wear resistance and a more stable thrust force over the number of holes drilled, while the HSS drill showed a lower thrust force during the drilling process. Finally, within the selected cutting range, the delamination damage increased noticeably with feed rate and moderately with spindle speed.
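
As a short illustration of how drilling-induced delamination is often quantified (the abstract does not state which metric was used), the sketch below computes the conventional delamination factor Fd = Dmax/D0; the hole diameters are hypothetical.

    def delamination_factor(d_max_mm, d_nominal_mm):
        # Conventional delamination factor: maximum damage diameter over nominal hole diameter
        return d_max_mm / d_nominal_mm

    # Hypothetical measurements for the two drill types discussed above (mm)
    holes = {"HSS": (6.35, 6.0), "carbide_130deg": (6.90, 6.0)}
    for drill, (d_max, d_nom) in holes.items():
        print(drill, round(delamination_factor(d_max, d_nom), 3))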

Keywords: Natural fiber-reinforced composites, machinability, thrust force, delamination.
