Search results for: Galerkin finite element method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9110

1190 Performance Evaluation of Energy Efficient Communication Protocol for Mobile Ad Hoc Networks

Authors: Toshihiko Sasama, Kentaro Kishida, Kazunori Sugahara, Hiroshi Masuyama

Abstract:

A mobile ad hoc network is a network of mobile nodes without any notion of centralized administration. In such a network, each mobile node behaves not only as a host that runs applications but also as a router that forwards packets on behalf of others. Clustering has been applied to routing protocols to achieve efficient communication. A CH network expresses the connectivity among cluster-heads. This paper discusses methods for constructing a CH network and produces the following results: (1) The running costs of 3 traditional methods for constructing a CH network do not differ much from each other, whether in the static or the dynamic circumstance, and their running costs in the static circumstance do not differ from their costs in the dynamic circumstance. Meanwhile, although the routing costs of these 3 methods are similar in the static circumstance, they differ considerably from each other in the dynamic circumstance. Their routing costs in the static circumstance are also very different from their costs in the dynamic circumstance, the former being one tenth of the latter; the routing cost in the dynamic circumstance is mostly the cost of re-routing. (2) On the strength of these results, we examine 2 new methods with respect to whether they are tolerable in the dynamic circumstance, that is, whether the number of re-routings is small. These new methods are revisions of the traditional methods. We recommend the method that produces the smallest routing cost in the dynamic circumstance and therefore the smallest total cost.
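
As an illustration of the kind of cluster-head election that underlies CH network construction, the Python sketch below applies the classical lowest-ID clustering heuristic to a random topology and then links cluster-heads whose members are within radio range. The abstract does not name the three traditional methods, so this heuristic, the radio range and the topology are illustrative assumptions, not the paper's algorithms.

import random

random.seed(1)
N, RANGE = 30, 0.25
nodes = {i: (random.random(), random.random()) for i in range(N)}

def neighbors(i):
    xi, yi = nodes[i]
    return [j for j, (xj, yj) in nodes.items()
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RANGE ** 2]

# Lowest-ID heuristic: a node becomes a cluster-head if it has the smallest
# ID among all nodes not yet assigned to a cluster in its neighborhood.
heads, member_of = [], {}
for i in sorted(nodes):                      # ascending ID order
    if i in member_of:
        continue
    heads.append(i)
    member_of[i] = i
    for j in neighbors(i):
        member_of.setdefault(j, i)

# Two cluster-heads are linked in the CH network if some member of one
# cluster is within radio range of some member of the other.
ch_links = {(a, b) for a in heads for b in heads if a < b and any(
    m2 in neighbors(m1)
    for m1, k1 in member_of.items() if k1 == a
    for m2, k2 in member_of.items() if k2 == b)}
print(len(heads), "cluster-heads,", len(ch_links), "CH-network links")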

Keywords: cluster, mobile ad hoc network, re-routing cost, simulation

1189 Network Reconfiguration for Load Balancing in Distribution System with Distributed Generation and Capacitor Placement

Authors: T. Lantharthong, N. Rugthaicharoencheep

Abstract:

This paper presents an efficient algorithm for optimizing radial distribution systems through network reconfiguration to balance feeder loads and eliminate overload conditions. A system load-balancing index is used to determine the loading conditions of the system and the maximum system loading capacity; the optimal network reconfiguration for load balancing minimizes this index. The Tabu search algorithm is employed to search for the optimal network reconfiguration. The basic idea behind the search is to move from a current solution to a neighboring one while effectively utilizing a memory structure to provide an efficient search for optimality; it requires low computational effort and is able to find good-quality configurations. Simulation results are presented for a radial 69-bus system with distributed generation and capacitor placement. The study results show that the optimal on/off patterns of the switches can be identified to give the best network reconfiguration, balancing the feeder loads while respecting all the constraints.
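
The Tabu search loop described above can be sketched as follows. The load-balancing index used here (peak feeder loading divided by average feeder loading) and the toy load data are illustrative assumptions, since the abstract gives neither the exact index formula nor the 69-bus data.

import random

random.seed(0)
FEEDERS = 3
LOADS = [random.uniform(0.2, 1.0) for _ in range(12)]   # 12 load points

def balance_index(assign):
    # hypothetical load-balancing index: peak feeder loading divided by
    # average feeder loading (1.0 means perfectly balanced feeders)
    totals = [0.0] * FEEDERS
    for load, f in zip(LOADS, assign):
        totals[f] += load
    return max(totals) / (sum(totals) / FEEDERS)

def tabu_search(iters=200, tenure=7):
    current = [random.randrange(FEEDERS) for _ in LOADS]
    best, best_val = current[:], balance_index(current)
    tabu = {}                                # move -> iteration it expires
    for it in range(iters):
        moves = [(i, f) for i in range(len(LOADS)) for f in range(FEEDERS)
                 if f != current[i] and tabu.get((i, f), -1) < it]
        i, f = min(moves, key=lambda m: balance_index(
            current[:m[0]] + [m[1]] + current[m[0] + 1:]))
        tabu[(i, current[i])] = it + tenure  # forbid undoing this move
        current[i] = f
        val = balance_index(current)
        if val < best_val:
            best, best_val = current[:], val
    return best, best_val

print("best load-balancing index: %.3f" % tabu_search()[1])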

Keywords: Network reconfiguration, distributed generation, capacitor placement, load balancing, optimization technique

1188 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value, and no assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is known only to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
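
A minimal sketch of the EM treatment of masked causes, assuming constant (exponential) cause-specific intensities instead of the paper's degradation-dependent TFR intensities: the E-step attributes each masked failure to cause k with posterior probability proportional to its rate, and the M-step re-estimates the rates.

import random

random.seed(3)
TRUE = [0.02, 0.05]                      # true cause-specific hazard rates
obs = []                                 # (failure time, cause or None)
for _ in range(3000):
    times = [random.expovariate(r) for r in TRUE]
    t = min(times)
    cause = times.index(t)
    obs.append((t, None if random.random() < 0.4 else cause))  # 40% masked

total_time = sum(t for t, _ in obs)      # each unit is at risk from both causes
rates = [0.01, 0.01]                     # initial guess
for _ in range(100):
    # E-step: a masked failure is attributed to cause k with posterior
    # probability rates[k] / sum(rates)
    counts = [0.0, 0.0]
    for t, cause in obs:
        if cause is None:
            s = sum(rates)
            for k in (0, 1):
                counts[k] += rates[k] / s
        else:
            counts[cause] += 1.0
    # M-step: exponential-rate MLE = expected failures / total time at risk
    rates = [c / total_time for c in counts]

print("estimated rates:", [round(r, 4) for r in rates], "true:", TRUE)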

Keywords: Expectation-maximization (EM) algorithm, cause of failure, intensity, linear degradation path, masked data, reliability function.

1187 Capacity of Anchors in Structural Connections

Authors: T. Cornelius, G. Secilmis

Abstract:

When dealing with safety in structures, the connections between structural components play an important role. Robustness of a structure as a whole depends both on the load-bearing capacity of the structural components and on the structure's capacity to resist total failure even though a local failure occurs in a component or in a connection between components. To avoid progressive collapse, it is necessary to be able to design the connections. In structures built with prefabricated components, a connection may be executed with anchors to withstand local failure of the connection. For the design of these anchors, a model is developed for connections in structures made of prefabricated autoclaved aerated concrete components. The design model takes into account the effect of anchors placed close to an edge, which may result in splitting failure. Further, the model considers the effect of reinforcement diameter and anchor depth. The model is analytical and theoretically derived assuming a static equilibrium stress distribution along the anchor. The theory is compared to laboratory tests covering the relevant parameters, and the model is refined and theoretically argued by analyzing the observed test results. The method presented can be used to improve safety in structures or even to optimize the design of the connections.

Keywords: Robustness, anchors, connections, aircrete, prefabricated components.

1186 Improved Modulo 2^n+1 Adder Design

Authors: Somayeh Timarchi, Keivan Navi

Abstract:

Efficient modulo 2^n+1 adders are important for several applications including residue number systems, digital signal processors and cryptographic algorithms. In this paper we present a novel modulo 2^n+1 addition algorithm for a recently presented number system. The proposed approach is introduced to reduce the power dissipated. In a conventional modulo 2^n+1 adder, all operands have (n+1)-bit length. To avoid using (n+1)-bit circuits, the diminished-1 and carry-save diminished-1 number systems can be used effectively in applications. In the paper, we also derive two new architectures for designing a modulo 2^n+1 adder based on an n-bit ripple-carry adder. The first architecture is a faster design, whereas the second one uses less hardware. In the proposed method, the special treatment required for zero operands in the diminished-1 number system is removed. The fastest modulo 2^n+1 adders in the normal binary system require 3-operand adders; this problem is also resolved in this paper. The proposed architectures are compared with some efficient adders based on the ripple-carry adder and a high-speed adder. It is shown that the hardware overhead and power consumption are reduced, and in some cases the power-delay product is reduced as well.
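
A behavioral sketch of diminished-1 modulo 2^n+1 addition, which an n-bit ripple-carry adder can realize with an inverted end-around carry. The exhaustive check below restricts itself to nonzero operands and results, since handling zero is exactly the special treatment the paper removes; this models the arithmetic only, not the proposed circuit architectures.

N = 8
M = (1 << N) + 1                         # modulus 2^n + 1

def dim1(x):                             # diminished-1 code of x in [1, 2^n]
    return x - 1

def dim1_add(a_star, b_star):
    # n-bit addition with an inverted end-around carry:
    # add 1 when there is no carry out, drop the carry otherwise
    s = a_star + b_star
    carry = s >> N
    return (s + (1 - carry)) & ((1 << N) - 1)

# exhaustive check against ordinary modular arithmetic; zero operands and
# zero results are excluded, as they need the special treatment the paper
# eliminates
for a in range(1, M):
    for b in range(1, M):
        if (a + b) % M == 0:
            continue
        assert dim1_add(dim1(a), dim1(b)) == dim1((a + b) % M)
print("diminished-1 modulo 2^n+1 addition verified for n =", N)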

Keywords: Modulo 2^n+1 arithmetic, residue number system, low power, ripple-carry adders.

1185 Design of Parity-Preserving Reversible Logic Signed Array Multipliers

Authors: Mojtaba Valinataj

Abstract:

Reversible logic, as a new and favorable design domain, can be used in various fields, especially in building quantum computers, because of its speed and very low power consumption. However, its susceptibility to a variety of environmental effects may yield incorrect results. In this paper, given the importance of the multiplication operation in various computing systems, novel reversible logic array multipliers are proposed with error detection capability by incorporating parity-preserving gates. The new designs are presented for the two main parts of array multipliers, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper reaches 12%, 24%, and 26% enhancements in the number of constant inputs, number of required gates, and quantum cost, respectively, compared to the previous design. Moreover, the best proposed design is generalized to n×n multipliers with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.

Keywords: Array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic.

1184 Anticancer Effect of Doxorubicin-Loaded Heparin-Based Superparamagnetic Iron Oxide Nanoparticles against Human Ovarian Cancer Cells

Authors: Amaneh Javid, Shahin Ahmadian, Ali A. Saboury, Saeed Rezaei-Zarchi

Abstract:

This study determines the effect of naked and heparin-based superparamagnetic iron oxide nanoparticles (SPIO-NPs) on the human ovarian cancer cell line A2780. Doxorubicin (DOX) was used as the anticancer drug, entrapped in the SPIO-NPs. The study aimed to decorate the nanoparticles with heparin, a molecular ligand for 'active' targeting of cancerous cells, and to apply the modified nanoparticles in cancer treatment. The nanoparticles containing the anticancer drug DOX were prepared by a solvent evaporation and emulsification cross-linking method. The physicochemical properties of the nanoparticles were characterized by various techniques, and uniform nanoparticles with an average particle size of 110±15 nm and high encapsulation efficiencies (EE) were obtained. Additionally, a sustained release of DOX from the SPIO-NPs was achieved. Cytotoxicity tests showed that the SPIO-DOX-HP had higher cell toxicity than HP alone, and confocal microscopy analysis confirmed excellent cellular uptake efficiency. These results indicate that HP-based SPIO-NPs have potential uses as anticancer drug carriers and also have an enhanced anticancer effect.

Keywords: Heparin, A2780 cells, ovarian cancer, nanoparticles, doxorubicin.

1183 From Vertigo to Verticality: An Example of Phenomenological Design in Architecture

Authors: E. Osorio Schmied

Abstract:

Architects commonly attempt a depiction of organic forms when their works are inspired by nature, regardless of the building site. Nevertheless, it is also possible to try matching structures with natural scenery by applying a phenomenological approach in terms of spatial operations, regarding perceptions from nature through architectural aspects such as protection, views, and orientation. This method acknowledges a relationship between place and space, where intentions towards tangible facts then become design statements. Although spaces resulting from such a process may present an effective response to the environment, they can also offer further outcomes beyond the realm of form. The hypothesis is that, in addition to recognising a bond between architecture and nature, it is also plausible to associate such perceptions with the inner ambience of buildings, by analysing features such as daylight. The case study of a single-family house in a rainforest near Valdivia, Chilean Patagonia, is presented, with the intention of addressing the above notions through a discussion of the actual effects of inhabiting a place, by way of a series of insights including a review of diagrams and photographs that assist in understanding the implications of this design practice. In addition, figures based on post-occupancy behaviour and daylighting performance relate both architectural and environmental issues to a decision-making process motivated by the observation of nature.

Keywords: Architecture, design statements, nature, perception.

1182 Effect of Fractional Flow Curves on the Heavy Oil and Light Oil Recoveries in Petroleum Reservoirs

Authors: Abdul Jamil Nazari, Shigeo Honma

Abstract:

This paper evaluates and compares the effect of fractional flow curves on heavy oil and light oil recoveries in a petroleum reservoir. Fingering of the injected water is one of the serious problems in oil displacement by water, and another problem is the estimation of the amount of recoverable oil in a petroleum reservoir. To address these problems, the fractional flow of heavy oil and light oil is investigated. The fractional flow approach treats the multiphase flow rate as a total mixed fluid and then describes the individual phases as fractions of the total flow. Laboratory experiments were performed on two different types of oil, a heavy oil and a light oil, to obtain relative permeability and fractional flow curves experimentally. Application of the light oil fractional flow curve, which exhibits a regular S-shape, to the waterflooding method showed that a large amount of mobile oil in the reservoir is displaced by water injection. In contrast, the fractional flow curve of heavy oil does not display an S-shape because of its high viscosity: although the advance of the injected waterfront is faster than in light oil reservoirs, a significant amount of mobile oil remains behind the waterfront.
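
The contrast between the two curve shapes can be reproduced with textbook Corey-type relative permeabilities; the endpoint saturations, Corey exponents and viscosities below are illustrative assumptions, not the paper's measured data.

import numpy as np

Sw = np.linspace(0.2, 0.8, 61)           # water saturation (Swc = Sor = 0.2)
Se = (Sw - 0.2) / 0.6                    # normalized saturation
krw = 0.4 * Se ** 3                      # Corey-type relative permeabilities
kro = (1 - Se) ** 2
mu_w = 1.0                               # water viscosity, cP

def fw(mu_o):
    # fractional flow of water: fw = 1 / (1 + (kro/krw) * (mu_w/mu_o))
    with np.errstate(divide="ignore"):
        return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))

for label, mu_o in [("light oil, 2 cP", 2.0), ("heavy oil, 200 cP", 200.0)]:
    curve = fw(mu_o)
    print("%s: fw = %.3f at Sw = 0.4, %.3f at Sw = 0.6"
          % (label, curve[20], curve[40]))

The heavy-oil curve rises steeply at low water saturation (the injected water flows preferentially), which is exactly the loss of the S-shape described above.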

Keywords: Fractional flow curve, oil recovery, relative permeability, water fingering.

1181 Study of Chest Pain and its Risk Factors in Over 30 Year-Old Individuals

Authors: S. Dabiran

Abstract:

Chest pain is one of the most prevalent complaints among adults and a frequent reason for attending medical centers. The aim was to determine the prevalence and risk factors of chest pain among people over 30 years old in Tehran. In this cross-sectional study, 787 adults took part from April 2005 until April 2006. The sampling method was random cluster sampling with 25 clusters. In each cluster, interviews were performed with 32 people over 30 years old living in the selected houses; in cases with chest pain, extra questions were asked. The prevalence of chest pain (CP) was 9% (71 cases). Of these, 21 cases (6.5%) were in the 41-60 year age range and the remainder were over 61 years old. Nineteen cases (26.8%) mentioned CP at rest, and all of the cases had exertion-onset CP. The CP duration was 10 minutes or less in all cases, and in most of them (84.5%) the pain was located in the left anterior part of the chest, the left anterior part of the sternum, or the left arm. There was a positive history of myocardial infarction in 12 cases (17%). There were significant relations between CP and age and sex, and between history of myocardial infarction and marital status. Our results are similar to those of other studies in most respects; however, it is necessary to perform supplementary tests and follow-up studies to differentiate exactly between cardiac and non-cardiac CP.

Keywords: Chest pain, myocardial infarction, risk factor, prevalence

1180 Survey of Access Controls in Cloud Computing

Authors: Monirah Alkathiry, Hanan Aljarwan

Abstract:

Cloud computing is one of the most significant technologies that the world deals with, in different sectors, with different purposes and capabilities. The cloud faces various challenges in securing data from unauthorized access or modification; consequently, security risks have greatly increased. Therefore, cloud service providers (CSPs) and users need secure mechanisms that ensure that data are kept secret and safe from any disclosure or exploit. For this reason, CSPs need a number of techniques and technologies to manage and secure access to cloud services in order to achieve security goals such as confidentiality, integrity, and identity and access management (IAM). This paper therefore reviews and explores various access controls implemented in a cloud environment that achieve different security purposes. The methodology followed in this survey was an assessment, evaluation, and comparison of those access control mechanisms and technologies based on different factors, such as the security goals achieved, usability, and cost-effectiveness. This assessment showed that the technology used in an access control affects the security goals it achieves, and that no single access control method achieves all security goals. Consequently, such a comparison helps decision-makers choose the access controls that properly meet their requirements.

Keywords: Access controls, cloud computing, confidentiality, identity and access management.

1179 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. Initially, the classification algorithm is evaluated in terms of resilience to random data loss using 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
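
A sketch of the first experiment's idea using scikit-learn: windows of synthetic 3-axis acceleration samples are reduced to mean/standard-deviation features, random samples are dropped from each test window to emulate packet loss, and SVM accuracy is re-measured. The feature set and loss model are simplifications assumed here, not the paper's exact setup.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
ACTIVITIES, WIN = 4, 64                  # activities; samples per window
means = rng.normal(0.0, 1.0, (ACTIVITIES, 3))
stds = np.abs(rng.normal(1.0, 0.3, (ACTIVITIES, 3)))

def make_windows(n_per_class, loss=0.0):
    X, y = [], []
    for a in range(ACTIVITIES):
        for _ in range(n_per_class):
            raw = rng.normal(means[a], stds[a], (WIN, 3))  # 3-axis window
            keep = rng.random(WIN) >= loss                 # random sample loss
            raw = raw[keep] if keep.any() else raw[:1]
            X.append(np.r_[raw.mean(0), raw.std(0)])       # mean/std features
            y.append(a)
    return np.array(X), np.array(y)

Xtr, ytr = make_windows(100)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
for loss in (0.0, 0.5, 0.9):
    Xte, yte = make_windows(50, loss)
    print("sample loss %.0f%%: accuracy %.3f" % (100 * loss, clf.score(Xte, yte)))

Because the window statistics degrade gracefully as samples are lost, accuracy stays high until the loss rate becomes extreme, mirroring the robustness reported above.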

Keywords: Activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss.

1178 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model

Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl

Abstract:

Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the parameters influencing process stability, instead of the removed volume as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
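
A single-direction dexel (height-field) sketch of the idea: material under a flat-end tool is removed and the engaged depth and width of cut are read off from the removed region. A true multi-dexel model uses three orthogonal dexel directions and an analytic tool shape; the grid resolution, tool radius and tool path here are illustrative.

import numpy as np

RES = 0.1                                 # dexel spacing, mm
x = np.arange(0.0, 40.0, RES)
y = np.arange(0.0, 40.0, RES)
X, Y = np.meshgrid(x, y, indexing="ij")
height = np.full(X.shape, 10.0)           # workpiece top surface, mm

def mill_step(cx, cy, radius, z_tool):
    # remove material under a flat-end tool centered at (cx, cy) and
    # return the engaged depth and width of cut
    inside = (X - cx) ** 2 + (Y - cy) ** 2 <= radius ** 2
    engaged = inside & (height > z_tool)
    if not engaged.any():
        return 0.0, 0.0
    depth = float((height[engaged] - z_tool).max())
    width = float(engaged.any(axis=0).sum()) * RES  # extent across the feed
    height[engaged] = z_tool                        # update the dexel field
    return depth, width

mill_step(10.0, 20.0, 5.0, 7.0)                     # previous pass cuts a pocket
d, w = mill_step(14.0, 20.0, 5.0, 7.0)              # overlapping next position
print("depth of cut %.2f mm, width of cut %.2f mm" % (d, w))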

Keywords: Dexel, process stability, material removal, milling.

1177 Sensitivity Comparison between Rapid Immuno-Chromatographic Device Test and ELISA in Detection and Sero-Prevalence of HBsAg and Anti-HCV antibodies in Apparently Healthy Blood Donors of Lahore, Pakistan

Authors: Natasha Hussain, Maleeha Aslam, Robina Farooq

Abstract:

Hepatitis B and hepatitis C are among the most significant hepatic infections around the world and may lead to hepatocellular carcinoma. This study was performed for the first time at the blood transfusion centre of Omar Hospital, Lahore. It aims to determine the sero-prevalence of these diseases by screening apparently healthy blood donors, who might be carriers of HBV or HCV and pose a high risk of transmission. It also compares the sensitivity of two diagnostic tests: the chromatographic immunoassay one-step test device and the Enzyme-Linked Immunosorbent Assay (ELISA). Blood serum of 855 apparently healthy blood donors was screened for hepatitis B surface antigen (HBsAg) and for anti-HCV antibodies. SPSS version 12.0 and the χ2 (chi-square) test were used for statistical analysis. The seroprevalence of HCV was 8.07% by the device method and 9.12% by ELISA, and that of HBV was 5.6% by the device and 6.43% by ELISA. The unavailability of vaccination against HCV makes it more prevalent. Comparing the two diagnostic methods, ELISA proved to be more sensitive.

Keywords: ELISA, Sensitivity comparison of diagnostic tests, seroprevalence of Hepatitis B and C

1176 Uncertainty Propagation and Sensitivity Analysis During Calibration of an Integrated Land Use and Transport Model

Authors: Parikshit Dutta, Mathieu Saujot, Elise Arnaud, Benoit Lefevre, Emmanuel Prados

Abstract:

In this work, the propagation of uncertainty during the calibration process of TRANUS, an integrated land use and transport model (ILUTM), has been investigated. It has also been examined, through a sensitivity analysis, which input parameters affect the variation of the outputs the most. Moreover, a probabilistic verification methodology for the calibration process, which equates the observed and calculated production, has been proposed. The model chosen as an application is that of the city of Grenoble, France. For sensitivity analysis and uncertainty propagation, the Monte Carlo method was employed, and a statistical hypothesis test was used for verification. The parameters of the induced demand function in TRANUS were assumed to be uncertain in the present case. It was found that if TRANUS converges during calibration, then with high probability the calibration process is verified. Moreover, a weak correlation was found between the inputs and the outputs of the calibration process. The total effect of the inputs on the outputs was investigated, and the output variation was found to be dictated by only a few input parameters.
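
A generic sketch of the procedure with a stand-in model: input parameters are sampled, the model is run for each sample, and sensitivities are ranked by the Pearson correlation between each input and the output. The toy model and parameter ranges are assumptions; they replace an actual TRANUS calibration run.

import numpy as np

rng = np.random.default_rng(42)

def model(a, b, c):
    # stand-in for one calibration run: maps uncertain induced-demand
    # parameters to a scalar output such as the calculated production
    return a * np.exp(-b) + 0.1 * c ** 2

N = 5000
samples = rng.uniform([0.5, 0.1, -1.0], [1.5, 0.9, 1.0], size=(N, 3))
outputs = np.array([model(*p) for p in samples])

print("output mean %.3f, std %.3f" % (outputs.mean(), outputs.std()))
for i, name in enumerate(("a", "b", "c")):
    r = np.corrcoef(samples[:, i], outputs)[0, 1]
    print("sensitivity to %s: Pearson r = %+.3f" % (name, r))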

Keywords: Uncertainty propagation, sensitivity analysis, calibration under uncertainty, hypothesis testing, integrated land use and transport models, TRANUS, Grenoble.

1175 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems

Authors: Jianhua Zhou, Yuwen Zhang

Abstract:

A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple-grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple-grid system is demonstrated by illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach yields accurate inverse solutions; when the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained even if both temperature and heat flux are measured on the back surface.

Keywords: Conduction, inverse problems, conjugated gradient method, laser.

1174 Study on Phytochemical Properties, Antibacterial Activity and Cytotoxicity of Aloe vera L.

Authors: K. Thu, Yin Y. Mon, Tin A. Khaing, Ohn M. Tun

Abstract:

The aim of the study was to investigate the phytochemical properties, antimicrobial activity and cytotoxicity of Aloe vera. The phytochemical screening of extracts of A. vera leaves revealed the presence of bioactive compounds such as alkaloids, tannins, flavonoids and phenolic compounds, with an absence of cyanogenic glycosides. Three different solvents, methanol, ethanol and dimethyl sulfoxide, were used to screen the antimicrobial activity of A. vera leaves against four human clinical pathogens by the agar well diffusion method. The maximum antibacterial activities were observed in the methanol extract, followed by ethanol and dimethyl sulfoxide. Remarkable antibacterial activities were also found with the methanolic and ethanolic extracts of A. vera compared with the standard antibiotic tetracycline, which was not active against E. coli and S. boydii; this supports the view that A. vera is a potent antimicrobial agent compared with conventional antibiotics. Moreover, the brine shrimp (Artemia salina) toxicity test exhibited an LC50 value of 569.52 ppm. The resulting data indicate that the A. vera plant has little toxic effect on brine shrimp. Hence, the results signify that Aloe vera plant extract is safe to be used as an antimicrobial agent.

Keywords: Aloe vera L., antimicrobial activity, brine shrimp, cytotoxicity, phytochemical properties.

1173 Clustering Mixed Data Using Non-normal Regression Tree for Process Monitoring

Authors: Youngji Yoo, Cheong-Sool Park, Jun Seok Kim, Young-Hak Lee, Sung-Shick Kim, Jun-Geol Baek

Abstract:

In the semiconductor manufacturing process, large amounts of data are collected from various sensors in multiple facilities. The collected sensor data have several different characteristics due to variables such as product type, former processes and recipes. In general, Statistical Quality Control (SQC) methods assume normality of the data to detect out-of-control states of processes. Although the collected data have different characteristics, using them directly as inputs to SQC will increase the variation of the data, require wide control limits, and decrease the performance of out-of-control detection. Therefore, it is necessary to separate similar data groups from the mixed data for more accurate process control. In this paper, we propose a regression tree using a split algorithm based on the Pearson distribution system to handle non-normal distributions in a parametric method. The regression tree finds similar properties of data across different variables. Experiments using real semiconductor manufacturing process data show improved fault detection performance.

Keywords: Semiconductor, non-normal mixed process data, clustering, Statistical Quality Control (SQC), regression tree, Pearson distribution system.

1172 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method with high speed, accuracy and localization capability. However, there are still some problems: detection requires manual real-time field judgment, and the methods of result storage and viewing remain primitive. An intelligent nondestructive detection system based on embedded Linux is put forward in this paper. The hardware part of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection in thermal images. The CLAHE algorithm and the Butterworth filter are used to process the thermal image, and then the Boa server and CGI (Common Gateway Interface) technology are used to transmit the test results to the display terminal through the network for real-time and remote monitoring. The system also reduces manual labor and removes the obstacle of manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial nondestructive testing.
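
A sketch of the two processing steps named above, using OpenCV's CLAHE and a frequency-domain Butterworth low-pass filter on a synthetic thermal image; camera acquisition, the defect detection logic and the Boa/CGI transmission are omitted, and the cutoff and order values are illustrative.

import cv2
import numpy as np

# synthetic 8-bit "thermal image": a warm defect spot plus noise
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
img = 60 + 80 * np.exp(-((xx - 160) ** 2 + (yy - 120) ** 2) / 800.0)
img += np.random.default_rng(0).normal(0, 10, (h, w))
img = np.clip(img, 0, 255).astype(np.uint8)

# step 1: contrast-limited adaptive histogram equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# step 2: Butterworth low-pass filter applied in the frequency domain
def butterworth_lowpass(image, cutoff=30.0, order=2):
    rows, cols = image.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from center
    H = 1.0 / (1.0 + (D / cutoff) ** (2 * order))   # Butterworth transfer
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.clip(out, 0, 255).astype(np.uint8)

filtered = butterworth_lowpass(enhanced)
print("mean intensity in defect region:", float(filtered[100:140, 140:180].mean()))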

Keywords: Remote monitoring, non-destructive testing, embedded Linux system, image processing.

1171 Objectivity, Reliability and Validity of the 90º Push-Ups Test Protocol Among Male and Female Students of Sports Science Program

Authors: Ahmad Hashim, Mohd Sani Madon

Abstract:

This study was conducted to determine the objectivity, reliability and validity of the 90º push-ups test protocol among male and female students of the Sports Science Program, Faculty of Sports Science and Coaching, Sultan Idris University of Education. A sample (n = 300) consisting of male (n = 168) and female (n = 132) students was randomly selected for this study. The researchers administered the 90º push-ups test twice in a test and re-test protocol, together with the bench press test. The Pearson product-moment correlation method was used to determine the objectivity, reliability and validity values. The findings showed that the 90º push-ups test protocol had high consistency between the two testers, with a value of r = .99. Likewise, the reliability values between test and re-test of the 90º push-ups test for the male (r = .93) and female (r = .93) students were also high. The results showed that the correlation between the 90º push-ups test and the bench press test was r = .64 for males and r = .28 for females. This finding indicates that using the 90º push-ups test to measure muscular strength and endurance of the upper body has higher validity for male than for female students.
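
The three correlation computations can be sketched as follows on made-up scores; the latent-ability model and noise levels are assumptions used only to show how objectivity, reliability and validity coefficients are obtained.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 50
ability = rng.normal(20, 6, n)                   # latent upper-body fitness

pushups_t1 = ability + rng.normal(0, 1.5, n)     # test
pushups_t2 = pushups_t1 + rng.normal(0, 1.0, n)  # re-test
tester2 = pushups_t1 + rng.normal(0, 0.3, n)     # second tester's count
bench = 0.8 * ability + rng.normal(0, 5, n)      # criterion measure

print("objectivity (tester 1 vs 2):  r = %.2f" % pearsonr(pushups_t1, tester2)[0])
print("reliability (test vs retest): r = %.2f" % pearsonr(pushups_t1, pushups_t2)[0])
print("validity (vs bench press):    r = %.2f" % pearsonr(pushups_t1, bench)[0])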

Keywords: Arm and shoulder girdle strength and endurance, 90º push-ups, bench press

1170 Optimizing Spatial Trend Detection By Artificial Immune Systems

Authors: M. Derakhshanfar, B. Minaei-Bidgoli

Abstract:

Spatial trends are among the valuable patterns in geo-databases. They play an important role in data analysis and knowledge discovery from spatial data. A spatial trend is a regular change of one or more non-spatial attributes when moving spatially away from a start object. Spatial trend detection is a graph search problem, so heuristic methods can be a good solution. The artificial immune system (AIS) is a particular method for searching and optimizing; it is a novel evolutionary paradigm inspired by the biological immune system. Models based on immune system principles, such as the clonal selection theory, the immune network model or the negative selection algorithm, have been finding increasing applications in science and engineering. In this paper, we develop a novel immunological algorithm based on the clonal selection algorithm (CSA) for spatial trend detection. We create a neighborhood graph and neighborhood paths, and then select spatial trends with high affinity as antibodies. In an evolutionary process with the artificial immune algorithm, the affinity of low-affinity trends is increased by mutation until the stop condition is satisfied.
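
A skeleton of the clonal selection loop: the best antibodies are cloned, clones are hypermutated (more strongly for lower-ranked antibodies), and the population is reselected by affinity. The scalar toy affinity function here stands in for the paper's spatial-trend affinity over neighborhood paths.

import random

random.seed(5)

def affinity(x):
    # toy affinity landscape standing in for a spatial-trend score
    return 20.0 - (x - 3.7) ** 2

def clonal_selection(pop_size=20, n_clones=5, generations=60):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)
        clones = []
        for rank, ab in enumerate(pop[:pop_size // 2]):
            sigma = 0.05 * (rank + 1)   # worse rank -> stronger hypermutation
            clones += [ab + random.gauss(0, sigma) for _ in range(n_clones)]
        pop = sorted(pop + clones, key=affinity, reverse=True)[:pop_size]
    return pop[0]

best = clonal_selection()
print("best antibody %.3f with affinity %.3f" % (best, affinity(best)))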

Keywords: Spatial Data Mining, Spatial Trend Detection, Heuristic Methods, Artificial Immune System, Clonal Selection Algorithm (CSA)

1169 Simplified 3R2C Building Thermal Network Model: A Case Study

Authors: S. M. Mahbobur Rahman

Abstract:

Whole-building energy simulation models are widely used for predicting future energy consumption, performance diagnosis and optimal control. The black-box approach to building energy modeling has been heavily studied in the past decade. The thermal response of a building can also be modeled using a network of interconnected resistors (R) and capacitors (C), called an R-C network. In this study, a model building, Case 600, as described in the "Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs", ASHRAE Standard 140, is studied along with a 3R2C thermal network model and the ASHRAE clear-sky solar radiation model. Although a building energy model involves two important parts of the building, i.e., the envelope and the internal mass, the effect of the building's internal mass is not considered in this study. All the characteristic parameters of the building envelope are evaluated as specified for Case 600. Finally, the monthly building energy consumption from the thermal network model is compared with a simple-box energy model and agrees within reasonable accuracy. From the results, a 0.6-9.4% variation in monthly energy consumption is observed because of the south-facing windows.
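
A forward-Euler sketch of a 3R2C envelope network (outdoor air, two capacitive wall nodes, fixed indoor node); the R, C and outdoor temperature values below are illustrative assumptions, not the Case 600 parameters.

import numpy as np

# 3R2C envelope: Tout --R1-- C1(T1) --R2-- C2(T2) --R3-- Tin
R1, R2, R3 = 0.05, 0.10, 0.05            # thermal resistances, K/W
C1, C2 = 5.0e5, 8.0e5                    # thermal capacitances, J/K
Tin = 20.0                               # fixed indoor temperature, deg C
dt = 60.0                                # time step, s

T1, T2 = 10.0, 15.0                      # initial wall-node temperatures
q_room = []
for step in range(24 * 60):              # one day in 1-minute steps
    hour = step * dt / 3600.0
    Tout = 5.0 + 8.0 * np.sin(2 * np.pi * (hour - 9) / 24)  # outdoor swing
    q1 = (Tout - T1) / R1                # node-to-node heat flows, W
    q2 = (T1 - T2) / R2
    q3 = (T2 - Tin) / R3
    T1 += dt * (q1 - q2) / C1            # forward-Euler update per capacitor
    T2 += dt * (q2 - q3) / C2
    q_room.append(q3)

print("mean envelope heat flow into the room: %.1f W" % np.mean(q_room))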

Keywords: ASHRAE case study, clear sky solar radiation model, energy modeling, thermal network model.

1168 An Implementation of MacMahon's Partition Analysis in Ordering the Lower Bound of Processing Elements for the Algorithm of LU Decomposition

Authors: Halil Snopce, Ilir Spahiu, Lavdrim Elmazi

Abstract:

Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound of the processing elements needed for this purpose. The so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. The Mathematica program DiophantineGF.m is then run; it calculates the generating function from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound for the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well.
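
The generating-function count can be cross-checked by brute-force enumeration of the lattice points for small sizes. The constraint system below is a hypothetical stand-in, since the abstract does not reproduce the actual Diophantine system arising from the LU decomposition algorithm.

from itertools import product

# hypothetical system of linear Diophantine constraints describing the
# domain of computation (a stand-in for the LU-decomposition dependence
# domain): x >= 1, y >= x, z >= 0, x + y + z <= n
def lattice_points(n):
    return [(x, y, z)
            for x, y, z in product(range(n + 1), repeat=3)
            if x >= 1 and y >= x and x + y + z <= n]

# this count is what the generating function produced by DiophantineGF.m
# would deliver in closed form
for n in range(3, 8):
    print("n = %d: %d lattice points" % (n, len(lattice_points(n))))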

Keywords: generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.

1167 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop, whereas the commonly used training with quantization is done on the full labeled dataset and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size and the small number of trainable parameters allow the network to be fine-tuned in several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
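
A sketch of per-filter symmetric uniform quantization with an adjustable threshold; here a small grid search picks each filter's threshold by reconstruction error, whereas the paper trains these scale factors by gradient descent during fine-tuning.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(16, 64))   # 16 filters of a conv layer

def quantize(w, threshold, bits=8):
    # symmetric uniform quantization: the scale maps [-threshold, threshold]
    # onto the signed integer grid, then the weights are dequantized back
    qmax = 2 ** (bits - 1) - 1
    scale = threshold / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

for i, w in enumerate(weights[:3]):
    # per-filter threshold chosen by grid search over reconstruction error
    candidates = np.abs(w).max() * np.linspace(0.3, 1.0, 15)
    best_t = min(candidates, key=lambda t: np.sum((w - quantize(w, t)) ** 2))
    err = np.sqrt(np.mean((w - quantize(w, best_t)) ** 2))
    print("filter %d: threshold %.4f, RMS error %.5f" % (i, best_t, err))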

Keywords: Distillation, machine learning, neural networks, quantization.

1166 Person-Environment Fit (PE Fit): Evidence from Brazil

Authors: Jucelia Appio, Danielle Deimling De Carli, Bruno Henrique Rocha Fernandes, Nelson Natalino Frizon

Abstract:

The purpose of this paper is to investigate whether there are positive and significant correlations among the dimensions of Person-Environment Fit (Person-Job, Person-Organization, Person-Group and Person-Supervisor) at the "Best Companies to Work For" in Brazil in 2017. A quantitative approach with a descriptive method was used, with the "150 Best Companies to Work For" defined as the research sample, according to a database collected in 2017 and provided by the Fundação Instituto de Administração (FIA) of the University of São Paulo (USP). Regarding the data analysis procedures, asymmetry and kurtosis, factor analysis, the Kaiser-Meyer-Olkin (KMO) test, Bartlett's sphericity test and Cronbach's alpha were used for the 69 research variables, and Pearson's correlation analysis was performed as the statistical technique for testing the hypothesis. As a main result, we highlight the positive and significant correlation among the dimensions of Person-Environment Fit, corroborating hypothesis H1 that there is a positive and significant correlation among Person-Job Fit, Person-Organization Fit, Person-Group Fit and Person-Supervisor Fit.

Keywords: Human resource management, person-environment fit, strategic people management, best companies to work for.

1165 Reliability Evaluation of Composite Electric Power System Based On Latin Hypercube Sampling

Authors: R. Ashok Bakkiyaraj, N. Kumarappan

Abstract:

This paper investigates the suitability of Latin hypercube sampling (LHS) for composite electric power system reliability analysis. Each sample generated in LHS is mapped into an equivalent system state and used for evaluating the annualized system and load point indices. A DC load-flow-based state evaluation model is solved for each sampled contingency state. The indices evaluated are loss of load probability, loss of load expectation, expected demand not served and expected energy not supplied. The application of LHS is illustrated through case studies carried out using the RBTS and IEEE-RTS test systems. The results obtained are compared with non-sequential Monte Carlo simulation and state enumeration analytical approaches. An error analysis is also carried out to check the LHS method's ability to capture the distributions of the reliability indices. It is found that the LHS approach estimates the indices closer to their actual values and gives tighter bounds than non-sequential Monte Carlo simulation.
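
The variance-reduction effect of LHS over plain Monte Carlo can be shown on a one-dimensional toy reliability problem, where the composite-system state evaluation is reduced to comparing a normally distributed load against a fixed capacity; all numbers are illustrative.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
CAPACITY = 320.0                         # available generation, MW

def lolp(u):
    load = norm.ppf(u, loc=250.0, scale=40.0)   # N(250, 40) system load
    return np.mean(load > CAPACITY)             # loss-of-load probability

def latin_hypercube(n):
    # one stratified draw per equal-probability interval, then shuffled
    u = (np.arange(n) + rng.random(n)) / n
    rng.shuffle(u)
    return u

n, runs = 200, 300
mc = [lolp(rng.random(n)) for _ in range(runs)]
lhs = [lolp(latin_hypercube(n)) for _ in range(runs)]
print("true LOLP: %.4f" % (1 - norm.cdf(CAPACITY, 250.0, 40.0)))
print("MC  estimate: mean %.4f, std %.4f" % (np.mean(mc), np.std(mc)))
print("LHS estimate: mean %.4f, std %.4f" % (np.mean(lhs), np.std(lhs)))

The stratification over equal-probability intervals is what gives LHS its tighter bounds: every region of the input distribution is sampled in every run.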

Keywords: Composite power system, Latin Hypercube sampling, Monte Carlo simulation, Reliability evaluation, Variance analysis.

1164 New Features for Specific JPEG Steganalysis

Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura

Abstract:

We present in this paper a new approach for specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding; this deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we point out new statistical features and combine them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step, which makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5), and according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
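
A sketch of the Fisher discriminant classifier on synthetic two-dimensional features standing in for the compressed-domain statistics: w = Sw^-1 (m1 - m0), with the decision threshold at the midpoint of the projected class means.

import numpy as np

rng = np.random.default_rng(2)
# synthetic 2-D feature vectors (e.g., entropy deviation after a second
# embedding plus a companion statistic) for cover and stego images
cover = rng.normal([1.2, 0.8], 0.3, size=(300, 2))
stego = rng.normal([0.7, 1.1], 0.3, size=(300, 2))

m0, m1 = cover.mean(axis=0), stego.mean(axis=0)
Sw = np.cov(cover.T) * (len(cover) - 1) + np.cov(stego.T) * (len(stego) - 1)
w = np.linalg.solve(Sw, m1 - m0)                 # Fisher direction
threshold = ((cover @ w).mean() + (stego @ w).mean()) / 2

scores = np.concatenate([cover @ w, stego @ w])
truth = np.r_[np.zeros(len(cover)), np.ones(len(stego))].astype(bool)
print("detection accuracy: %.3f" % np.mean((scores > threshold) == truth))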

Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.

1163 Motor Imagery Signal Classification for a Four State Brain Machine Interface

Authors: Hema C. R., Paulraj M. P., S. Yaacob, A. H. Adom, R. Nagarajan

Abstract:

Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMIs). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on an Elman recurrent neural network and a functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional back propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance it is observed that the BP algorithm has a higher average classification rate of 93.5%, while the PSO algorithm has better training time and maximum classification rate. The proposed methods promise to provide a useful alternative general procedure for motor imagery classification.

Keywords: Motor Imagery, Brain Machine Interfaces, Neural Networks, Particle Swarm Optimization, EEG signal processing.

1162 Arrival and Departure Scheduling at Hub Airports Considering Airline Level

Authors: A. Nourmohammadzadeh, R. Tavakkoli-Moghaddam

Abstract:

As air traffic increases at a hub airport, some flights cannot land or depart at their preferred target times. This happens because the airport runways become occupied to near their capacity. It results in extra costs for both passengers and airlines because of lost connecting flights, longer waiting, more fuel consumption, crew rescheduling, etc. Hence, devising an appropriate scheduling method that determines a suitable runway and time for each flight, in order to use the hub capacity efficiently and minimize the related costs, is of great importance. In this paper, we present a mixed-integer zero-one model for scheduling a set of mixed landing and departing flights (whereas most previous studies considered only landings). Since flight cost is strongly affected by the airline's level, we consider different airline categories in our model. The model has a single objective minimizing the total sum of three terms, namely 1) the weighted deviation from targets, 2) the scheduled time of the last flight (i.e., the makespan), and 3) the workload imbalance across runways. We solve 10 simulated instances of different sizes with up to 30 flights and 4 runways. Optimal solutions are obtained in reasonable time and are satisfactory in comparison with the traditional First-Come-First-Served (FCFS) rule, which is far from optimal in most cases.
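
A toy version of the objective on a brute-force-solvable instance: each flight gets a runway, is placed at the earliest feasible slot at or after its target, and the cost sums weighted deviation, makespan and runway imbalance. The instance, weights and separation time are invented; the paper's model is a mixed-integer zero-one program, not enumeration.

from itertools import product

# (target slot, airline weight) for a toy set of mixed arrivals/departures
flights = [(0, 3), (1, 1), (1, 2), (2, 3), (3, 1), (3, 2)]
RUNWAYS, SEP = 2, 2                      # number of runways; separation, slots

def schedule_cost(assign):
    # place each flight on its assigned runway at the earliest feasible slot
    # at or after its target; sum the three cost terms
    free = [0] * RUNWAYS
    counts = [0] * RUNWAYS
    dev = makespan = 0
    for (target, weight), r in zip(flights, assign):
        t = max(target, free[r])
        free[r] = t + SEP
        counts[r] += 1
        dev += weight * (t - target)     # weighted deviation from target
        makespan = max(makespan, t)
    return dev + makespan + (max(counts) - min(counts))

best = min(product(range(RUNWAYS), repeat=len(flights)), key=schedule_cost)
fcfs = tuple(i % RUNWAYS for i in range(len(flights)))  # naive FCFS rotation
print("optimal cost:", schedule_cost(best), "| FCFS cost:", schedule_cost(fcfs))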

Keywords: Arrival and departure scheduling, Airline level, Mixed-integer model

1161 In vitro Cytotoxic and Genotoxic Effects of Arsenic Trioxide on Human Keratinocytes

Authors: H. Bouaziz, M. Sefi, J. de Lapuente, M. Borras, N. Zeghal

Abstract:

Although arsenic trioxide has been the subject of toxicological research, in vitro cytotoxicity and genotoxicity studies using relevant cell models and uniform methodology are not well elucidated. Hence, the aim of the present study was to evaluate the cytotoxicity and genotoxicity induced by arsenic trioxide in human keratinocytes (HaCaT) using the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] and alkaline single cell gel electrophoresis (comet) assays, respectively. Human keratinocytes were treated with different doses of arsenic trioxide for 4 h prior to cytogenetic assessment. Data obtained from the MTT assay indicated that arsenic trioxide significantly reduced the viability of HaCaT cells in a dose-dependent manner, showing an IC50 value of 34.18 ± 0.6 μM. Data generated from the comet assay also indicated a significant dose-dependent increase in DNA damage in HaCaT cells associated with arsenic trioxide exposure. We observed a significant increase in comet tail length and tail moment, showing evidence of arsenic trioxide-induced genotoxic damage in HaCaT cells. This study confirms that the comet assay is a sensitive and effective method to detect DNA damage caused by arsenic.

Keywords: Arsenic trioxide, cytotoxicity, genotoxicity, HaCaT.
