Search results for: lattice boltzmann method
15415 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), and dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they can misrepresent the absolute magnitude of force generated by the muscle and affect the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effects of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb.
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r=0.86 to r=1.00 with p<0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Age*phase interactions in EMG amplitude between YOUNG and OLD resulted in different statistical interpretations between methods. EMGTS normalization characterized the fewest differences (four phases across all 5 muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, a measure of inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
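The bioelectric normalizations and 16-bin phase averaging described in this abstract can be sketched as follows. This is a minimal illustration on a synthetic envelope; the signal, sample count, and bin layout are assumptions for demonstration, not the study's recordings:

```python
import numpy as np

def normalize(emg, method="mean"):
    """Bioelectric normalization: scale the envelope by its task mean or peak."""
    ref = emg.mean() if method == "mean" else emg.max()
    return emg / ref

def phase_average(emg, n_bins=16):
    """Phase-average one gait cycle of rectified EMG into n_bins phases."""
    return np.array([b.mean() for b in np.array_split(emg, n_bins)])

rng = np.random.default_rng(0)
cycle = np.abs(np.sin(np.linspace(0.0, np.pi, 160))) + 0.05 * rng.random(160)

profile = phase_average(normalize(cycle, "peak"))   # 16-phase activation pattern
cv = profile.std() / profile.mean()                 # coefficient of variation
```

Computing `cv` across participants per phase (rather than per profile, as here) would correspond to the inter-individual variability measure the abstract compares between methods.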
Procedia PDF Downloads 93
15414 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data
Authors: Fanqiang Kong, Chending Bian
Abstract:
Sparse unmixing is a promising semisupervised approach which assumes that the observed pixels of a hyperspectral image can be expressed as a linear combination of only a few pure spectral signatures (endmembers) from an available spectral library. However, the sparse unmixing problem still remains a great challenge: finding the optimal subset of endmembers for the observed data from a large standard spectral library, without considering the spatial information. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, the non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for the set of similar image patches in the hyperspectral image. Then, the non-local means method, used as a regularizer for abundance estimation, exploits the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means
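The sparse-regression core of unmixing (selecting a few library endmembers per pixel) can be sketched with plain iterative soft-thresholding. This is a generic l1 solver on a synthetic library and pixel, not NLSSU itself; the library size, mixing vector, and regularization weight are assumptions:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """l1-regularized sparse regression y ~ A x solved by iterative
    soft-thresholding (gradient step, then shrink coefficients toward zero)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# toy spectral library of 5 signatures; the pixel mixes only signatures 0 and 2
rng = np.random.default_rng(1)
A = rng.random((20, 5))
x_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
y = A @ x_true

x_hat = ista(A, y)   # recovered abundances: large at indices 0 and 2, ~0 elsewhere
```

NLSSU replaces this per-pixel step with a simultaneous (patch-wise) sparse representation and adds the non-local means regularizer on the abundance maps.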
Procedia PDF Downloads 250
15413 A Pedagogical Case Study on Consumer Decision Making Models: A Selection of Smart Phone Apps
Authors: Yong Bum Shin
Abstract:
This case focuses on the weighted additive difference, conjunctive, disjunctive, and elimination-by-aspects methodologies in consumer decision-making models, and on the simple additive weighting (SAW) approach in the multi-criteria decision-making (MCDM) area. Most decision-making models illustrate that the rank reversal phenomenon is unpreventable. This paper shows that rank reversal occurs in popular managerial methods such as weighted additive difference (WAD), the conjunctive method, the disjunctive method, and elimination by aspects (EBA), as well as in MCDM methods such as simple additive weighting (SAW), and finally presents the unified commensurate multiple (UCM) model, which successfully addresses these rank reversal problems in the most popular MCDM methods in the decision-making area.
Keywords: multiple criteria decision making, rank inconsistency, unified commensurate multiple, analytic hierarchy process
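The rank reversal the abstract refers to is easy to reproduce with SAW itself. A minimal sketch with hypothetical alternatives and equal weights (all values are assumptions chosen to trigger the effect):

```python
import numpy as np

def saw_scores(X, w):
    """Simple Additive Weighting: max-normalize each (benefit) criterion,
    then take the weighted sum per alternative."""
    return (X / X.max(axis=0)) @ w

w = np.array([0.5, 0.5])                    # equal criterion weights
A, B, C = [8, 2], [4, 6], [2, 10]           # hypothetical alternatives

two = saw_scores(np.array([A, B]), w)       # decision set {A, B}
three = saw_scores(np.array([A, B, C]), w)  # the same set with C added
```

With only A and B, B wins (0.75 vs about 0.67); adding the unrelated alternative C changes the column maxima, and A now outranks B (0.60 vs 0.55). This is exactly the rank reversal that models such as UCM aim to prevent.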
Procedia PDF Downloads 84
15412 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy
Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang
Abstract:
The wireless communication network is developing rapidly, so wireless security becomes more and more important. Specific emitter identification (SEI) is a vital part of wireless communication security as a technique to identify unique transmitters. In this paper, a SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show that the total identification accuracy is 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which proves that MDE and RCMDE can describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
Keywords: cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device
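The single-scale dispersion entropy underlying MDE/RCMDE can be sketched as follows; the embedding dimension, class count, and test signals are illustrative assumptions (the multiscale variants additionally coarse-grain the series before this computation):

```python
import math
from collections import Counter

import numpy as np

def dispersion_entropy(x, m=2, c=6):
    """Normalized dispersion entropy: map samples to c classes through the
    normal CDF, count the length-m dispersion patterns, and return their
    Shannon entropy divided by log(c**m)."""
    mu, sd = x.mean(), x.std()
    ncdf = 0.5 * (1.0 + np.vectorize(math.erf)((x - mu) / (sd * math.sqrt(2.0))))
    z = np.clip(np.ceil(c * ncdf).astype(int), 1, c)        # classes 1..c
    counts = Counter(tuple(z[i:i + m]) for i in range(len(z) - m + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(c ** m))

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)                    # irregular series: high entropy
tone = np.sin(np.linspace(0.0, 40.0 * np.pi, 2000))  # regular series: low entropy

de_noise = dispersion_entropy(noise)
de_tone = dispersion_entropy(tone)
```

Entropy values computed at several scales form the feature vector that the CV-SVM classifier then separates by device.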
Procedia PDF Downloads 131
15411 Application of the Concept of Comonotonicity in Option Pricing
Authors: A. Chateauneuf, M. Mostoufi, D. Vyncke
Abstract:
Monte Carlo (MC) simulation is a technique that provides approximate solutions to a broad range of mathematical problems. A drawback of the method is its high computational cost, especially in a high-dimensional setting, such as estimating the Tail Value-at-Risk for large portfolios or pricing basket options and Asian options. For these types of problems, one can construct an upper bound in the convex order by replacing the copula by the comonotonic copula. This comonotonic upper bound can be computed very quickly, but it gives only a rough approximation. In this paper we introduce Comonotonic Monte Carlo (CoMC) simulation, which uses the comonotonic approximation as a control variate. CoMC is of broad applicability, and numerical results show a remarkable speed improvement. We illustrate the method for estimating Tail Value-at-Risk and pricing basket options and Asian options when the log-returns follow a Black-Scholes model or a variance gamma model.
Keywords: control variate Monte Carlo, comonotonicity, option pricing, scientific computing
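The control-variate mechanism that CoMC relies on can be sketched generically: subtract a correlated quantity whose expectation is known in closed form. Here a plain Gaussian stands in for the comonotonic approximation, purely as an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
z = rng.standard_normal(n)

f = np.exp(z)          # target payoff; E[exp(Z)] = exp(1/2) is known exactly
g = z                  # control variate with known expectation E[Z] = 0

beta = np.cov(f, g)[0, 1] / g.var()      # variance-minimizing coefficient
cv_est = f - beta * (g - 0.0)            # controlled estimator, same mean as f

plain_se = f.std() / np.sqrt(n)          # standard error without the control
ctrl_se = cv_est.std() / np.sqrt(n)      # standard error with the control
```

In CoMC the role of `g` is played by the comonotonic upper bound of the payoff, which is cheap to evaluate and highly correlated with the true payoff, so the variance reduction is far larger than in this toy case.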
Procedia PDF Downloads 517
15410 Application of Additive Manufacturing for Production of Optimum Topologies
Authors: Mahdi Mottahedi, Peter Zahn, Armin Lechler, Alexander Verl
Abstract:
Optimal topology of components leads to maximum stiffness with minimum material use. For the generation of these topologies, algorithms are normally employed which tackle manufacturing limitations at the cost of the optimal result. The global optimum result with penalty factor one, however, cannot be fabricated with conventional methods. In this article, an additive manufacturing method is introduced in order to enable the production of global topology optimization results. For a benchmark, topology optimizations with higher and lower penalty factors are performed. Different algorithms are employed to interpret the results of topology optimization with lower factors in many microstructure layers. These layers are then joined to form the final geometry. The algorithms' benefits are then compared experimentally and numerically to find the best interpretation. The findings demonstrate that, by implementation of the selected algorithm, the stiffness of the components produced with this method is higher than what could have been produced by conventional techniques.
Keywords: topology optimization, additive manufacturing, 3D-printer, laminated object manufacturing
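The "penalty factor" mentioned above is, in the standard SIMP formulation of topology optimization, the exponent applied to the element density. A minimal sketch (assuming SIMP is the scheme intended; the values are illustrative):

```python
import numpy as np

def simp_stiffness(x, p, e0=1.0):
    """SIMP interpolation: element stiffness e0 * x**p for a density x in [0, 1].
    p = 1 is the unpenalized (global-optimum) formulation; p > 1 makes
    intermediate densities structurally inefficient, driving them to 0 or 1."""
    return e0 * np.power(x, p)

x_mid = 0.5
k_p1 = simp_stiffness(x_mid, 1)   # p = 1: half the material gives half the stiffness
k_p3 = simp_stiffness(x_mid, 3)   # p = 3: half the material gives only 1/8 stiffness
```

With p = 1 the optimizer happily uses intermediate densities, which conventional processes cannot fabricate; the article's layered microstructures are one way to realize such intermediate-density regions physically.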
Procedia PDF Downloads 341
15409 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm
Authors: Suparman Suparman
Abstract:
The white noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise often does not follow a normal distribution. This paper aims to estimate the parameters of an AR model that has exponential white noise. A Bayesian method is adopted. A prior distribution for the parameters of the AR model is selected, and this prior distribution is combined with the likelihood function of the data to obtain a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the parameters of the AR model is derived. Because the order of the AR model is treated as a parameter, this Bayesian estimator cannot be calculated explicitly. To resolve this problem, the reversible jump Markov Chain Monte Carlo (MCMC) method is adopted. As a result, the parameters of the AR model can be estimated simultaneously.
Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov Chain Monte Carlo (MCMC)
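The Bayesian step can be sketched for a fixed order (RJMCMC additionally proposes jumps between orders, which is omitted here). A random-walk Metropolis sampler on the AR(1) coefficient under exponential noise, with a flat prior and toy data as assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# simulate an AR(1) series with exponential white noise:
# x_t = a * x_{t-1} + e_t,  e_t ~ Exp(1)
a_true, n = 0.5, 400
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + rng.exponential(1.0)

def log_post(a):
    """Log-posterior of a under a flat prior on [0, 1); the exponential
    likelihood is zero wherever any residual would be negative."""
    if not 0.0 <= a < 1.0:
        return -np.inf
    e = x[1:] - a * x[:-1]
    if (e < 0).any():
        return -np.inf
    return -e.sum()                    # Exp(1) log-likelihood up to a constant

# random-walk Metropolis on the AR coefficient (order fixed at 1)
a, lp, samples = 0.1, log_post(0.1), []
for _ in range(5000):
    prop = a + 0.02 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        a, lp = prop, lp_prop
    samples.append(a)

a_hat = float(np.mean(samples[1000:]))   # posterior mean after burn-in
```

Note the non-negativity constraint on the residuals, which the exponential noise imposes and which a Gaussian-noise sampler would not have.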
Procedia PDF Downloads 358
15408 Extension of the Simplified Theory of Plastic Zones for Analyzing Elastic Shakedown in a Multi-Dimensional Load Domain
Authors: Bastian Vollrath, Hartwig Hubel
Abstract:
In case of over-elastic and cyclic loading, strain may accumulate due to a ratcheting mechanism until the state of shakedown is possibly achieved. Load history dependent numerical investigations by a step-by-step analysis are rather costly in terms of engineering time and numerical effort. In the case of multi-parameter loading, where various independent loadings affect the final state of shakedown, the computational effort becomes an additional challenge. Therefore, direct methods like the Simplified Theory of Plastic Zones (STPZ) are developed to solve the problem with a few linear elastic analyses. Post-shakedown quantities such as strain ranges and cyclic accumulated strains are calculated approximately by disregarding the load history. The STPZ is based on estimates of a transformed internal variable, which can be used to perform modified elastic analyses, where the elastic material parameters are modified, and initial strains are applied as modified loading, resulting in residual stresses and strains. The STPZ already turned out to work well with respect to cyclic loading between two states of loading. Usually, few linear elastic analyses are sufficient to obtain a good approximation to the post-shakedown quantities. In a multi-dimensional load domain, the approximation of the transformed internal variable transforms from a plane problem into a hyperspace problem, where time-consuming approximation methods need to be applied. Therefore, a solution restricted to structures with four stress components was developed to estimate the transformed internal variable by means of three-dimensional vector algebra. This paper presents the extension to cyclic multi-parameter loading so that an unlimited number of load cases can be taken into account. The theoretical basis and basic presumptions of the Simplified Theory of Plastic Zones are outlined for the case of elastic shakedown. 
The extension of the method to many load cases is explained, and a workflow of the procedure is illustrated. An example, adopting the FE implementation of the method in ANSYS and considering multilinear hardening, is given which highlights the advantages of the method compared to incremental, step-by-step analysis.
Keywords: cyclic loading, direct method, elastic shakedown, multi-parameter loading, STPZ
Procedia PDF Downloads 164
15407 Chinese Sentence Level Lip Recognition
Authors: Peng Wang, Tigang Jiang
Abstract:
Computer-based lip reading methods for different languages cannot be universal. At present, research on Chinese lip reading, whether on data sets or on recognition algorithms, is far from mature. In this paper, we study a Chinese lip reading method based on machine learning and propose a Chinese sentence-level lip-reading network (CNLipNet) model which consists of a spatio-temporal convolutional neural network (CNN), a recurrent neural network (RNN) and the Connectionist Temporal Classification (CTC) loss function. This model can map variable-length sequences of video frames to Chinese Pinyin sequences and is trained end-to-end. Moreover, we create CNLRS, a Chinese lip reading dataset which contains 5948 samples and can be shared through GitHub. The evaluation of CNLipNet on this dataset yielded a 41% word correct rate and a 70.6% character correct rate. This result is far superior to that of professional human lip readers, indicating that CNLipNet performs well in lip reading.
Keywords: lipreading, machine learning, spatio-temporal, convolutional neural network, recurrent neural network
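The CTC decoding step that turns per-frame network outputs into a Pinyin sequence can be sketched with the greedy collapse rule. The vocabulary and frame posteriors below are toy assumptions, not CNLipNet's outputs:

```python
def ctc_greedy_decode(frame_probs, blank=0):
    """CTC collapse rule: take the per-frame argmax, merge consecutive
    repeats, then drop blanks, yielding the output token sequence."""
    path = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    out, prev = [], None
    for s in path:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out

# toy vocabulary: index 0 is the CTC blank, then two Pinyin syllables
vocab = ["<blank>", "ni3", "hao3"]
frames = [
    [0.10, 0.80, 0.10],   # "ni3"
    [0.20, 0.70, 0.10],   # "ni3" (repeat, merged)
    [0.90, 0.05, 0.05],   # blank separates tokens
    [0.10, 0.10, 0.80],   # "hao3"
    [0.10, 0.10, 0.80],   # "hao3" (repeat, merged)
]
decoded = [vocab[i] for i in ctc_greedy_decode(frames)]
```

The CTC loss trains the CNN/RNN stack so that such collapsed paths match the target Pinyin sequence without requiring frame-level alignment labels.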
Procedia PDF Downloads 130
15406 Vision Aided INS for Soft Landing
Authors: R. Sri Karthi Krishna, A. Saravana Kumar, Kesava Brahmaji, V. S. Vinoj
Abstract:
The lunar surface may contain rough and non-uniform terrain with dips and peaks. Soft landing is a method of landing the lander on the lunar surface without any damage to the vehicle. This project focuses on finding a safe landing site for the vehicle by developing a method for the lateral velocity determination of the lunar lander. This is done by processing real-time images obtained by means of an on-board vision sensor. The hazard avoidance phase of the soft landing starts when the vehicle is about 200 m above the lunar surface. Here, the lander has a very low velocity of about 10 cm/s (vertical) and 5 m/s (horizontal). On the detection of a hazard, the lander is navigated by controlling the vertical and lateral velocity. In order to find an appropriate landing site and to navigate accordingly, image processing is performed continuously. Images are taken continuously until the landing site is determined and the lander safely lands on the lunar surface. By integrating this vision-based navigation with the INS, better accuracy for the soft landing of the lunar lander can be obtained.
Keywords: vision aided INS, image processing, lateral velocity estimation, materials engineering
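One common way to estimate lateral velocity from successive camera frames is to measure the image shift between them and scale it by the ground resolution and frame interval. A minimal sketch using FFT cross-correlation on synthetic terrain (the resolution and frame timing are assumed values, not the project's):

```python
import numpy as np

def frame_shift(img0, img1):
    """Integer pixel shift of img0 relative to img1 via FFT cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(img0) * np.conj(np.fft.fft2(img1))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    h, w = img0.shape                      # wrap large shifts to negative values
    return (dy if dy <= h // 2 else dy - h), (dx if dx <= w // 2 else dx - w)

rng = np.random.default_rng(3)
frame0 = rng.random((64, 64))                            # synthetic terrain view
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))     # view after camera motion

dy, dx = frame_shift(frame1, frame0)

ground_res, dt = 0.05, 0.1       # assumed m/pixel and seconds between frames
v_lat = dx * ground_res / dt     # lateral velocity estimate, m/s
```

Feeding such image-derived velocity estimates to the INS filter is what "vision aided INS" refers to: the drift-free visual measurement corrects the inertially propagated state.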
Procedia PDF Downloads 470
15405 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications
Authors: Arijit Saha, Hassan Kassem, Leo Hoening
Abstract:
Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be applied almost everywhere in the world. To reduce the costs of wind turbines and to make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine on a vast open area experiences turbulence generated by the atmosphere, so it was of utmost interest from this research point of view to generate that turbulence in the computational simulation domain through various inlet turbulence generation methods such as precursor cyclic and Kaimal Spectrum Exponential Coherence (KSEC). To be able to validate computational fluid dynamics simulations of wind turbines with experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. This work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for large eddy simulation within the context of wind energy applications. For the generation of the turbulent box through the KSEC method, constrained data were first collected from an auxiliary channel flow, and processing was later performed with the open-source tool PyconTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through various statistical properties, such as variance and turbulent intensity, with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup for its applicability to a real-field CFD simulation.
Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb
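The statistical checks mentioned above (variance, turbulent intensity) reduce to simple moments of the inflow velocity record. A minimal sketch on a synthetic record; the mean wind speed and fluctuation level are assumed values, not results from the study:

```python
import numpy as np

def turbulence_intensity(u):
    """TI = standard deviation of the velocity fluctuations / mean velocity."""
    return u.std() / u.mean()

# synthetic inflow record: mean 10 m/s with 1.5 m/s Gaussian fluctuations
rng = np.random.default_rng(5)
u = 10.0 + 1.5 * rng.standard_normal(50_000)

ti = turbulence_intensity(u)          # ~0.15 for this record
var_u = u.var()                       # variance of the inflow velocity
```

Comparing such statistics between the KSEC-generated turbulent box, the precursor simulation, and the DNS reference is how the feasibility of each inflow strategy is judged.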
Procedia PDF Downloads 99
15404 The Impact of Supply Chain Relationship Quality on Cooperative Strategy and Visibility
Authors: Jung-Hsuan Hsu
Abstract:
Due to intense competition within the industry, companies have increasingly recognized the value of partnerships with other companies. In addition, with outsourcing and globalization of the supply chain, companies increasingly rely on external resources. Consequently, the supply chain network becomes complex, which reduces the visibility of the manufacturing process. Therefore, this study focuses on the impact of supply chain relationship quality (SCRQ) on cooperative strategy and visibility. A questionnaire survey will be conducted as the research method, using the organic food industry as the research subject, with random sampling as the sampling method. Finally, the data analysis will use SPSS statistical software and AMOS software to analyze the data and verify the hypotheses. The expected result of this study is an evaluation of the supply chain relationship quality between Taiwan's food manufacturers and their suppliers, regarding whether it has a positive impact on the persistence, frequency and diversity of cooperative strategy, and whether the dimensions of supply chain relationship quality have a positive effect on visibility.
Keywords: supply chain relationship quality (SCRQ), cooperative strategy, visibility, competition
Procedia PDF Downloads 453
15403 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium
Authors: Tahisa M. Pedroso, Hérida R. N. Salgado
Abstract:
The microbiological turbidimetric assay allows the determination of the potency of a drug by measuring the turbidity (absorbance) caused by the inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, shows action against Gram-negative, Gram-positive, aerobic and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but this method is not described for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise and accurate microbiological assay by turbidimetry to quantify ertapenem sodium injectable as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to choose the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851, 8% inoculum, BHI culture medium, and aqueous solution of ertapenem sodium. 10.0 mL of sterile BHI culture medium was distributed into 20 tubes. 0.2 mL of the standard and test solutions was added to tubes S1, S2 and S3, and T1, T2 and T3, respectively, and 0.8 mL of inoculated culture medium was transferred to each tube, according to the 3 x 3 parallel-line design. The tubes were incubated in a Marconi MA 420 shaker at a temperature of 35.0 °C ± 2.0 °C for 4 hours. After this period, the growth of microorganisms was inhibited by the addition of 0.5 mL of 12% formaldehyde solution to each tube. The absorbance was determined in a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were assessed by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. The precision was checked by testing the determination of ertapenem sodium on three days. The accuracy was determined by a recovery test.
The robustness was determined by comparing the results obtained by varying the wavelength, the brand of culture medium and the volume of culture medium in the tubes. Statistical analysis showed that there is no deviation from linearity in the analytical curves of the standard and test samples. The correlation coefficients were 0.9996 and 0.9998 for the standard and test samples, respectively. The specificity was confirmed by comparing the absorbance of the reference substance and the test samples. The values obtained for intraday, interday and between-analyst precision were 1.25%, 0.26% and 0.15%, respectively. The amount of ertapenem sodium present in the samples analyzed, 99.87%, is consistent. The accuracy was proven by the recovery test, with a value of 98.20%. The parameters varied did not affect the analysis of ertapenem sodium, confirming the robustness of this method. The turbidimetric assay is more versatile, faster and easier to apply than the agar diffusion assay. The method is simple, rapid and accurate and can be used in the routine analysis of quality control of formulations containing ertapenem sodium.
Keywords: ertapenem sodium, turbidimetric assay, quality control, validation
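The least-squares calibration and recovery calculation used in such a validation can be sketched as follows. The concentration-absorbance pairs below are hypothetical, not the study's measurements:

```python
import numpy as np

# hypothetical calibration points: concentration (ug/mL) vs absorbance
conc = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
absb = np.array([0.11, 0.20, 0.41, 0.59, 0.80])

slope, intercept = np.polyfit(conc, absb, 1)    # least-squares line
r = np.corrcoef(conc, absb)[0, 1]               # correlation coefficient

def concentration(a):
    """Interpolate a sample's concentration from its absorbance."""
    return (a - intercept) / slope

# recovery test: measured absorbance 0.50 for a nominal 50 ug/mL spike
recovery_pct = 100.0 * concentration(0.50) / 50.0
```

A recovery close to 100% and a correlation coefficient near 1 are the kinds of acceptance criteria the abstract reports (98.20% recovery, r of 0.9996 and 0.9998).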
Procedia PDF Downloads 394
15402 Sensing of Cancer DNA Using Resonance Frequency
Authors: Sungsoo Na, Chanho Park
Abstract:
Lung cancer is one of the most common severe diseases leading to human death. Lung cancer can be divided into small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLC. Therefore, EGFR inhibitor drugs such as gefitinib and erlotinib have been used as lung cancer treatments. However, these treatments showed low response rates (10~20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients with resistance to EGFR inhibitor drugs are usually positive for KRAS mutations. Therefore, assessment of EGFR and KRAS mutations is essential for targeted therapies of NSCLC patients. In order to overcome the limitations of conventional therapies, overall EGFR and KRAS mutations have to be monitored. In this work, only the detection of EGFR will be presented. A variety of techniques has been presented for the detection of EGFR mutations. The standard detection method for EGFR mutations in ctDNA relies on real-time polymerase chain reaction (PCR). The real-time PCR method provides highly sensitive detection performance. However, as amplification steps increase, cost and complexity increase as well. Other technologies such as BEAMing, next-generation sequencing (NGS), electrochemical sensors and silicon nanowire field-effect transistors have been presented. However, those technologies have limitations of low sensitivity, high cost and complex data analysis. In this report, we propose a label-free and highly sensitive detection method for lung cancer using a quartz crystal microbalance based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.
Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer
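A quartz crystal microbalance converts adsorbed mass (such as hybridized target DNA) into a resonance frequency shift. The standard Sauerbrey relation, which is textbook QCM theory rather than this paper's specific model, can be sketched as:

```python
import math

def sauerbrey_dm(df_hz, f0=5.0e6, area_cm2=1.0):
    """Mass change (grams) from a QCM frequency shift via the Sauerbrey
    equation, delta_f = -2 f0^2 delta_m / (A sqrt(rho_q * mu_q)), using
    textbook quartz constants in CGS units."""
    rho_q = 2.648        # quartz density, g/cm^3
    mu_q = 2.947e11      # quartz shear modulus, g/(cm*s^2)
    return -df_hz * area_cm2 * math.sqrt(rho_q * mu_q) / (2.0 * f0 ** 2)

# a 5 MHz crystal shifts by about -56.6 Hz per microgram/cm^2 of added mass,
# so a -56.6 Hz shift corresponds to roughly 1 microgram on a 1 cm^2 electrode
dm_g = sauerbrey_dm(-56.6)
```

The negative sign captures the sensing principle: bound mass lowers the resonance frequency, so a downward frequency shift signals target capture.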
Procedia PDF Downloads 235
15401 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted values from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes in genomic prediction efficiently, optimal ways to define haplotypes need to be found. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average number of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented for testing the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, using haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were not many differences in reliability between the different haplotype defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
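The marker-based BLUP underlying both the conventional and the haplotype variants can be sketched as a ridge regression on (here individual-SNP) predictors; haplotype GBLUP swaps the SNP columns for haplotype-allele indicator columns. Population size, marker count, effect sizes and the shrinkage parameter below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)
n, m = 200, 500                         # animals and markers (toy sizes)

Z = rng.integers(0, 3, size=(n, m)).astype(float)   # SNP dosages 0/1/2
Z -= Z.mean(axis=0)                                  # center genotypes

u_true = 0.05 * rng.standard_normal(m)               # true marker effects
g_true = Z @ u_true                                  # true breeding values
y = g_true + rng.standard_normal(n)                  # phenotype = genetics + noise

# ridge / SNP-BLUP solution, equivalent to GBLUP with a genomic relationship
# matrix built from Z:  u_hat = (Z'Z + lambda * I)^(-1) Z'y
lam = float(m)
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
gebv = Z @ u_hat                        # genomic estimated breeding values

r = np.corrcoef(gebv, g_true)[0, 1]     # correlation with true breeding values
```

The "reliability" compared in the abstract is essentially this correlation (evaluated on validation animals rather than the training set used in this toy sketch).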
Procedia PDF Downloads 142
15400 Biodiversity of Pathogenic and Toxigenic Fungi Associated with Maize Grains Sampled across Egypt
Authors: Yasser Shabana, Khaled Ghoneem, Nehal Arafat, Younes Rashad, Dalia Aseel, Bruce Fitt, Aiming Qi, Benjamine Richard
Abstract:
Providing food for more than 100 million people is one of the main challenges facing Egypt's development. The overall goal is to formulate strategies to enhance food security in light of population growth. Two hundred samples of maize grains from 25 governorates were collected. For the detection of seed-borne fungi, the deep-freezing blotter method (DFB) and the washing method (ISTA 1999) were used. A total of 41 fungal species was recovered from the maize seed samples. Weather data from 30 stations scattered all over Egypt and covering the major maize growing areas were obtained. Canonical correspondence analysis of the data for the obtained fungal genera with temperature, relative humidity, precipitation, wind speed and solar radiation revealed that relative humidity, temperature and wind speed were the most influential weather variables.
Keywords: biodiversity, climate change, maize, seed-borne fungi
Procedia PDF Downloads 165
15399 A Multi-criteria Decision Support System for Migrating Legacies into Open Systems
Authors: Nasser Almonawer
Abstract:
Timely reaction to an evolving global business environment and volatile market conditions necessitates system and process flexibility, which in turn demands agile and adaptable architecture and a steady infusion of affordable new technologies. On the contrary, a large number of organizations utilize systems characterized by inflexible and obsolete legacy architectures. To respond effectively to dynamic contemporary business environments, such architectures must be migrated to robust and modular open architectures. To this end, this paper proposes an integrated decision support system for a seamless migration to open systems. The proposed decision support system (DSS) integrates three well-established quantitative and qualitative decision-making models, namely the Delphi method, the Analytic Hierarchy Process (AHP) and Goal Programming (GP), to (1) assess risks and establish evaluation criteria; (2) formulate a migration strategy and rank candidate systems; and (3) allocate resources among the selected systems.
Keywords: decision support systems, open systems architecture, analytic hierarchy process (AHP), goal programming (GP), delphi method
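The AHP step of such a DSS derives criterion weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch with a hypothetical matrix for three migration criteria (the criteria and judgments are assumptions, not the paper's):

```python
import numpy as np

# hypothetical pairwise comparisons for (cost, risk, flexibility) on
# Saaty's 1-9 scale; the matrix is reciprocal by construction
A = np.array([[1.0, 3.0, 5.0],
              [1.0 / 3.0, 1.0, 3.0],
              [1.0 / 5.0, 1.0 / 3.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()                        # AHP priority weights (principal eigenvector)

lam_max = vals.real[k]
ci = (lam_max - 3.0) / (3.0 - 1.0)  # consistency index
cr = ci / 0.58                      # consistency ratio (Saaty's RI for n = 3)
```

A consistency ratio below 0.1 is the usual acceptance threshold for the judgments; the resulting weights would then feed the goal-programming stage as objective priorities.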
Procedia PDF Downloads 50
15398 The Foucaultian Relationship between Power and Knowledge: Genealogy as a Method for Epistemic Resistance
Authors: Jana Soler Libran
Abstract:
The primary aim of this paper is to analyze the relationship between power and knowledge suggested in Michel Foucault's theory. Taking into consideration the role of power in knowledge production, the goal is to evaluate to what extent genealogy can be presented as a practical method for epistemic resistance. To do so, the methodology used consists of a revision of Foucault's literature concerning the topic discussed. In this sense, conceptual analysis is applied in order to understand the effect of the double dimension of power on knowledge production. In its negative dimension, power is conceived as an organ of repression, vetoing certain instances of knowledge considered deceitful. In opposition, in its positive dimension, power works as an organ of the production of truth by means of institutionalized discourses. This double declination of power leads to the first main findings of the present analysis: no truth or knowledge can lie outside power's action, and power is constituted through accepted forms of knowledge. To support these statements, Foucaultian discourse formations are evaluated, presenting external exclusion procedures as paradigmatic practices to demonstrate how power creates and shapes the validity of certain epistemes. Thus, taking into consideration power's mechanisms to produce and reproduce institutionalized truths, this paper accounts for the Foucaultian praxis of genealogy as a method to reveal power's intention, instruments, and effects in the production of knowledge. In this sense, it is suggested to consider genealogy as a practice which, firstly, reveals what instances of knowledge are subjugated to power and, secondly, promotes the aforementioned peripheral discourses as a form of epistemic resistance. In order to counterbalance these main theses, objections to Foucault's work from Nancy Fraser, Linda Nicholson, Charles Taylor, Richard Rorty, Alvin Goldman, or Karen Barad are discussed.
In essence, the understanding of the Foucaultian relationship between power and knowledge is essential to analyze how contemporary discourses are produced by both traditional institutions and new forms of institutionalized power, such as mass media or social networks. Therefore, Michel Foucault's practice of genealogy is relevant, not only for its philosophical contribution as a method to uncover the effects of power in knowledge production but also because it constitutes a valuable theoretical framework for political theory and sociological studies concerning the formation of societies and individuals in the contemporary world.
Keywords: epistemic resistance, Foucault’s genealogy, knowledge, power, truth
Procedia PDF Downloads 128
15397 Influence of the Quality Differences in the Same Type of Bitumen and Dosage Rate of Reclaimed Asphalt on Lifetime
Authors: Pahirangan Sivapatham, Esser Barbara
Abstract:
The impacts of the asphalt mix design, the properties of the aggregates, quality differences within the same type of bitumen, and the dosage rate of reclaimed asphalt on the relevant material parameters of the analytical pavement design method are not known. Therefore, in this study, the influence of the above-mentioned characteristics on the relevant material parameters has been determined and analyzed by means of the analytical pavement design method. Material parameters for several asphalt mixes for asphalt wearing course, asphalt binder course, and asphalt base course have been determined, using several bitumens of the same type from different producers. In addition, asphalt base course materials with three different dosages of reclaimed asphalt have been produced and tested. As material parameters according to the German analytical pavement design guide (RDO Asphalt), the stiffnesses at different temperatures and the fatigue behavior have been determined. The findings for asphalt base course materials produced with several pen-graded bitumens from different producers and different dosages of reclaimed asphalt indicate a distinct impact on fatigue behavior and mechanical properties. The analytical pavement design calculations, carried out with the actual material parameters, show significant differences in the lifetimes: the calculated lifetimes of the asphalt base course materials differ by a factor of 3.2. The test results of the bitumen characteristics meet the requirements of the German standards, but further investigations of the bitumens in different aging conditions show significant differences in their quality. The fatigue behavior and stiffness of the asphalt pavement improve with increasing dosage of reclaimed asphalt.
Furthermore, the type of aggregates used shows no significant influence.
Keywords: reclaimed asphalt pavement, quality differences in the bitumen, lifetime calculation, asphalt mix with RAP
Procedia PDF Downloads 190
15396 Functionally Graded MEMS Piezoelectric Energy Harvester with Magnetic Tip Mass
Authors: M. Derayatifar, M. Packirisamy, R.B. Bhat
Abstract:
Piezoelectric energy harvesters have gained interest for supplying power to micro devices such as health monitoring sensors. In this study, in order to enhance piezoelectric energy harvesting over a broader range of excitation and to improve the mechanical and electrical responses, a bimorph piezoelectric energy harvester beam with a magnetic mass attached at the end is presented. To overcome the brittleness of piezo-ceramics, functionally graded piezoelectric layers comprising both piezo-ceramic and piezo-polymer are employed. The nonlinear equations of motion are derived using the energy method and then solved analytically using a perturbation scheme. The frequency responses of the forced vibration case are obtained for the near-resonance case. The nonlinear dynamic responses of the MEMS-scale functionally graded piezoelectric energy harvester presented in this paper may be utilized in different design scenarios to increase the efficiency of the harvester.
Keywords: energy harvesting, functionally graded piezoelectric material, magnetic force, MEMS (micro-electro-mechanical systems) piezoelectric, perturbation method
Procedia PDF Downloads 192
15395 Study and Solving Partial Differential Equation of Danel Equation in the Vibration Shells
Authors: Hesamoddin Abdollahpour, Roghayeh Abdollahpour, Elham Rahgozar
Abstract:
In this paper, we present an analysis of the free vibrations governed by the Danell partial differential equation for shells. The problem considered represents the governing equation of the nonlinear, large-amplitude free vibrations of a hinged shell. A new implementation of the method is presented to obtain the natural frequency and the corresponding displacement of the shell. Our purpose is to enhance the ability to solve the mentioned complicated partial differential equation (PDE) with a simple and innovative approach. The results reveal that this new method for solving the Danell equation is effective and simple and can be applied to other nonlinear partial differential equations. It is worth mentioning that this way of solving nonlinear differential equations has some valuable advantages, and most sets of partial differential equations can be handled in this manner, whereas other methods have not yielded acceptable solutions so far. Furthermore, there is no need to resort to similarity solutions, which make the solution procedure a time-consuming task.
Keywords: large amplitude, free vibrations, analytical solution, Danell equation, diagram of phase plane
Procedia PDF Downloads 324
15394 A New PWM Command for Cascaded H-Bridge Multilevel Increasing the Quality and Reducing Harmonics
Authors: Youssef Babkrani, S. Hiyani, A. Naddami, K. Choukri, M. Hilal
Abstract:
Power quality has been a problem ever since electrical power was invented, and in recent years it has become a main focus of researchers seeking ways to reduce its negative influence on electrical devices. In this paper, we aim to improve the output power quality of the H-bridge multilevel inverter used with solar photovoltaic (PV) panels. We propose a new switching technique based on pulse width modulation (PWM) that aims to reduce the harmonics. The method compares a sinusoidal reference wave with modified trapezoidal carriers used to generate the pulses. This new trapezoidal carrier waveform is implemented with different sinusoidal PWM dispositions, such as phase disposition (PWM PD), phase opposition disposition (PWM POD), and alternative phase opposition disposition (PWM APOD), and compared with the conventional ones. Using Matlab Simulink R2014a, the line voltage and total harmonic distortion (THD) are simulated, and the quality is increased in spite of the variations of the DC input.
Keywords: carrier waveform, phase disposition (PD), phase opposition disposition (POD), alternative phase opposition disposition (APOD), total harmonic distortion (THD)
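The carrier dispositions named in this abstract can be sketched numerically. The following is a minimal sketch (not the paper's Simulink model): it builds level-shifted triangular carriers — the paper's modified trapezoidal carrier shape is not specified here, so plain triangles are an assumption — and counts how many carriers a sinusoidal reference exceeds to obtain the output level of a five-level inverter.

```python
import numpy as np

def carriers(t, f_c, levels, disposition="PD"):
    """Level-shifted triangular carriers for an m-level inverter.

    Returns shape (levels-1, len(t)); carriers are stacked in contiguous
    bands of height 2/(levels-1) spanning [-1, 1].
    """
    n = levels - 1
    band = 2.0 / n
    # unit triangle wave in [0, 1]
    tri = 2.0 * np.abs(f_c * t - np.floor(f_c * t + 0.5))
    out = np.empty((n, len(t)))
    for i in range(n):
        c = tri.copy()
        if disposition == "POD" and i < n // 2:
            c = 1.0 - c          # carriers below zero phase-shifted 180 deg
        elif disposition == "APOD" and i % 2 == 1:
            c = 1.0 - c          # every other carrier phase-shifted 180 deg
        out[i] = -1.0 + band * i + band * c
    return out

def pwm_level(t, f_ref, f_c, levels, disposition="PD"):
    """Output level index: number of carriers the sine reference exceeds."""
    ref = np.sin(2 * np.pi * f_ref * t)
    return (ref > carriers(t, f_c, levels, disposition)).sum(axis=0)

t = np.linspace(0, 0.02, 2000, endpoint=False)   # one 50 Hz cycle
level = pwm_level(t, f_ref=50, f_c=2000, levels=5, disposition="PD")
```

A THD comparison across the three dispositions would then follow from an FFT of the synthesized line voltage.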
Procedia PDF Downloads 288
15393 A Mixed Method Systematic Review of the Experience of Communication in the Care of Children with Palliative Care Needs
Authors: Maha Atout, Pippa Hemingway, Jane Seymour
Abstract:
Background: A mixed-method systematic review was undertaken to explore the experiences of health care providers and parents in the care of children with palliative care needs. The aims of this systematic review were to identify existing evidence about the experiences of communication in the care of children with palliative care needs, to appraise the research conducted in this area, and to identify gaps in the literature in order to make recommendations for future studies. Method: A mixed-method systematic review of research on the experience of communication in the care of children with palliative care needs, conducted with parents and health professionals, was undertaken. The electronic databases CINAHL, Cochrane, PubMed, OVID, Social Care Online, Web of Science, Scopus, and ProQuest were searched for the period 2000-2016. Inclusion was limited to studies of the experience of communication in the care of children with palliative care needs. Results: Thirty-eight studies were found. The studies were conducted in a variety of countries: Uganda, Jordan, USA, UK, Taiwan, Turkey, Ireland, Poland, Brazil, Australia, Switzerland, Sweden, the Netherlands, Lebanon, Spain, Greece, and China. The current review shows that parents tend to protect their children when discussing their illnesses with them, particularly where they have a life-threatening or life-limiting condition. The approach of parents towards the discussion of sensitive issues concerning death with their children is significantly affected by the cultural background of the families. Conservative cultures encourage collusion behaviours, which tend to keep children unaware of the incurable nature of the disease. The major communication challenges reported by health professionals are difficulties in judging how much information should be given to parents, responding to difficult questions, conflicts with families, and inadequate skills to support grieving families.
Conclusion: It would be valuable for future studies to consider the change in parent-child communication over time, in order to understand how parents change their interaction styles with their children at the different stages of the disease. Moreover, further studies are required to investigate the experience of communication of parents of children with non-malignant life-threatening and life-limiting illnesses.
Keywords: children with life-threatening or life-limiting illnesses, end of life, experience of communication, healthcare providers, paediatric palliative care
Procedia PDF Downloads 299
15392 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls,” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrap method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
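The bootstrap precision comparison described in this abstract can be sketched as follows. In place of a full conditional logistic regression, this sketch uses the conditional MLE odds ratio for a single binary exposure in a 1:1 matched design (the ratio of discordant pairs); the data, pair counts, and exposure rates are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matched-pair data: one row per case-control pair, with
# binary exposure flags for the case and for its matched control.
n_pairs = 200
case_exposed = rng.random(n_pairs) < 0.55
control_exposed = rng.random(n_pairs) < 0.35

def discordant_or(case_exp, ctrl_exp):
    """Conditional MLE odds ratio for a binary exposure in a 1:1 matched
    design: the ratio of the two kinds of discordant pairs."""
    b = np.sum(case_exp & ~ctrl_exp)   # case exposed, control not
    c = np.sum(~case_exp & ctrl_exp)   # control exposed, case not
    return b / c

def bootstrap_se(case_exp, ctrl_exp, n_boot=1000):
    """Resample pairs (the matching unit) with replacement and return the
    standard error of the log odds ratio across resamples."""
    n = len(case_exp)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        stats[i] = np.log(discordant_or(case_exp[idx], ctrl_exp[idx]))
    return stats.std(ddof=1)

or_full = discordant_or(case_exposed, control_exposed)
se = bootstrap_se(case_exposed, control_exposed)
```

Resampling whole pairs rather than individuals preserves the matching, which is the point of the design.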
Procedia PDF Downloads 495
15391 Safety Effect of Smart Right-Turn Design at Intersections
Authors: Upal Barua
Abstract:
The risk of severe crashes at high-speed right turns at intersections is a major safety concern. The application of smart right-turns at intersections is increasing day by day to address this issue. The ‘smart right-turn’ design consists of a narrow channelization angle of approximately 70°. This design increases the cone of vision of right-turning drivers towards crossing pedestrians as well as traffic on the cross-road. As part of the Safety Improvement Program of the Austin Transportation Department, several smart right-turns were constructed at high-crash intersections where high-speed right turns were found to be a contributing factor. This paper features the state-of-the-art techniques applied in planning, engineering, designing, and constructing these smart right-turns, the key factors driving their success, and lessons learned in the process. This paper also presents the significant crash reductions achieved by the smart right-turn design, estimated using the Empirical Bayes method. The results showed that smart right-turns can reduce overall right-turn crashes by 43% and severe right-turn crashes by 70%.
Keywords: smart right-turn, intersection, cone of vision, Empirical Bayes method
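The Empirical Bayes estimate behind crash reduction figures like those above blends the observed count at a site with a model-predicted count, so that sites picked for high crash counts do not overstate the treatment effect (regression to the mean). A minimal sketch; the overdispersion parameter and all counts below are hypothetical, not values from this study.

```python
def empirical_bayes_estimate(observed, predicted, overdispersion):
    """Empirical Bayes expected crash frequency: a weighted blend of the
    model-predicted count and the observed count, with weight
    w = 1 / (1 + k * predicted) for overdispersion parameter k."""
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

# Hypothetical before-period counts at one treated right-turn:
eb_before = empirical_bayes_estimate(observed=12, predicted=8, overdispersion=0.5)

# Hypothetical after-period observed count; percent reduction relative to
# the EB expectation (numbers chosen to illustrate a ~43% reduction):
reduction = 100.0 * (1.0 - 6.4 / eb_before)
```

A small overdispersion parameter trusts the prediction more; a large one trusts the site's own history.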
Procedia PDF Downloads 269
15390 Clinical Study of the Prunus dulcis (Almond) Shell Extract on Tinea capitis Infection
Authors: Nasreen Thebo, W. Shaikh, A. J. Laghari, P. Nangni
Abstract:
Prunus dulcis (almond) shell extract is demonstrated for its biomedical applications. The shell extract was prepared by the Soxhlet method and further characterized by UV-visible spectrophotometry, atomic absorption spectrophotometry (AAS), FTIR, and GC-MS. In this study, the antifungal activity of the almond shell extract was observed against clinically isolated pathogenic fungi by the strip method. The antioxidant potential of the crude shell extract was evaluated using the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging system. The short-term therapy lasted only 20 days. The total antioxidant activity varied from 94.38 to 95.49%, and the total phenolic content was found to be 4.455 mg/g in the almond shell extract. Finally, the results indicate a great therapeutic potential against Tinea capitis infection of the scalp. This study of the shell extract shows scientific evidence for clinical efficacy; the extract was also found to be useful in the treatment of dermatologic disorders, and it can be recommended for patenting.
Keywords: Tinea capitis, DPPH, FTIR, GC-MS, therapeutic treatment
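The DPPH scavenging activity reported above is conventionally computed from control and sample absorbance readings. A minimal sketch; the absorbance values are hypothetical, chosen only so the result falls inside the reported 94.38-95.49% band.

```python
def dpph_scavenging(a_control, a_sample):
    """Percent DPPH radical scavenging from absorbance readings
    (commonly taken at 517 nm): 100 * (A_control - A_sample) / A_control."""
    return 100.0 * (a_control - a_sample) / a_control

# Hypothetical absorbances giving an activity inside the reported range:
activity = dpph_scavenging(a_control=0.890, a_sample=0.048)
```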
Procedia PDF Downloads 382
15389 Delimitation of the Perimeters of Protection of the Wellfield in the City of Adrar, Sahara of Algeria, Using Wyssling’s Method
Authors: Ferhati Ahmed, Fillali Ahmed, Oulhadj Younsi
Abstract:
This study concerns the delimitation of the perimeters of protection in the catchment area of the city of Adrar, which are established around the sites for the collection of water intended for human consumption, with the objective of preserving the resource and reducing the risks of point and accidental pollution (the Continental Intercalaire groundwater of the Northern Sahara of Algeria). This wellfield is located in the northeast of the city of Adrar; it covers an area of 132.56 km2 with 21 drinking water supply (DWS) wells, pumping a total flow of approximately 13 hm3/year. The choice of this wellfield is based on its favorable hydrodynamic characteristics and its location in relation to the agglomeration. The vulnerability to pollution of this aquifer is very high because the aquifer is unconfined and lacks a protective layer. In recent years, several factors have appeared around the wellfield that can affect the quality of this precious resource, including a large domestic waste center and agricultural and industrial activities. Thus, its sustainability requires the implementation of protection perimeters. The objective of this study is to set up three protection perimeters: immediate, close, and remote. The application of the Wyssling method makes it possible to calculate the transfer time (t) of a drop of groundwater located at any point in the aquifer to the abstraction point and thus to define isochrones, which in turn delimit each type of perimeter: 40 days for the closer one and 100 days for the farther one. Special restrictions are imposed on all activities depending on the distance from the catchment. The application of this method to the Adrar wellfield showed that the close and remote protection perimeters occupy areas of 51.14 km2 and 92.9 km2, respectively. The perimeters are delimited by geolocated markers, 40 and 46 markers respectively.
These results show that the areas defined as the "near protection perimeter" are free from activities likely to present a risk to the quality of the water used. On the other hand, within the areas defined as the "remote protection perimeter," there are some agricultural and industrial activities that may present an imminent risk. Rigorous control of these activities and restriction of the types of products applied in industry and agriculture are imperative.
Keywords: Continental Intercalaire, drinking water supply, groundwater, perimeter of protection, Wyssling method
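The isochrone calculation at the heart of Wyssling's method can be sketched as follows. The expressions below are the forms commonly attributed to Wyssling (1979) for the up- and downstream extent of a travel-time isochrone around a pumping well in a uniform regional flow field; all aquifer parameters in the example are illustrative assumptions, not values from this study.

```python
import math

def wyssling_isochrone(Q, b, k, i, n_e, t_days):
    """Up- and downstream extent [m] of the t-day isochrone around a well.

    Q   pumping rate [m3/s]        b    saturated thickness [m]
    k   hydraulic conductivity [m/s]  i  regional hydraulic gradient [-]
    n_e effective porosity [-]     t_days  travel time [days]
    """
    v = k * i / n_e                       # effective (seepage) velocity
    x0 = Q / (2.0 * math.pi * b * k * i)  # stagnation-point distance
    l = v * t_days * 86400.0              # distance travelled in t days
    up = (l + math.sqrt(l * (l + 8.0 * x0))) / 2.0
    down = (-l + math.sqrt(l * (l + 8.0 * x0))) / 2.0
    return up, down

# e.g. the 40-day isochrone for illustrative aquifer parameters
up40, down40 = wyssling_isochrone(Q=0.02, b=50.0, k=1e-4, i=0.002,
                                  n_e=0.15, t_days=40)
```

By construction the upstream and downstream extents differ by exactly the distance a water particle travels in t days.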
Procedia PDF Downloads 100
15388 CE Method for Development of Japan's Stochastic Earthquake Catalogue
Authors: Babak Kamrani, Nozar Kishi
Abstract:
A stochastic catalog represents the events module of an earthquake loss estimation model. It includes a series of events with different magnitudes and corresponding frequencies/probabilities. For the development of a stochastic catalog, random or uniform sampling methods are used to sample the events from the seismicity model. To cover the whole magnitude-frequency distribution (MFD), these methods must generate a huge number of events. The Characteristic Event (CE) method instead chooses the events based on the interest of the insurance industry. We divide the MFD of each source into bins, chosen based on the probabilities of interest to the insurance industry. First, we collected the information for the available seismic sources. Sources are divided into fault sources, subduction sources, and events without a specific fault source. We developed the MFD for each individual and areal source based on the seismicity of the sources. Afterward, we calculated the CE magnitudes based on the desired probabilities. To develop the stochastic catalog, we also introduced uncertainty in the location of the events.
Keywords: stochastic catalogue, earthquake loss, uncertainty, characteristic event
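The binning step described above can be sketched with a Gutenberg-Richter MFD. The a and b values and the choice of bin midpoints as characteristic magnitudes are illustrative assumptions; the study itself selects the bins from probabilities of interest to the insurance industry.

```python
import numpy as np

def gr_rate(m, a=4.0, b=1.0):
    """Gutenberg-Richter annual rate of events with magnitude >= m."""
    return 10.0 ** (a - b * m)

def characteristic_events(m_min=5.0, m_max=8.0, n_bins=6, a=4.0, b=1.0):
    """Split the MFD into bins and represent each bin by one characteristic
    event: a magnitude (the bin midpoint here) and an annual occurrence rate
    equal to the rate captured by the bin."""
    edges = np.linspace(m_min, m_max, n_bins + 1)
    rates = gr_rate(edges[:-1], a, b) - gr_rate(edges[1:], a, b)
    mags = 0.5 * (edges[:-1] + edges[1:])
    return mags, rates

mags, rates = characteristic_events()
```

A handful of characteristic events per source then stands in for the very large number of sampled events a random-sampling catalog would need.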
Procedia PDF Downloads 302
15387 Study on Robot Trajectory Planning by Robot End-Effector Using Dual Curvature Theory of the Ruled Surface
Authors: Y. S. Oh, P. Abhishesh, B. S. Ryuh
Abstract:
This paper presents a method of trajectory planning for the robot end-effector that achieves a more accurate and smoother differential geometry of the ruled surface generated by the tool line fixed to the end-effector, based on the curvature theory of the ruled surface and the dual curvature theory, and focuses on the underlying relation that unites them to enhance the efficiency of trajectory planning. Robot motion can be represented by the motion properties of the ruled surface generated by the trajectory of the tool center point (TCP). The linear and angular properties of the six-degree-of-freedom motion of the end-effector are computed using explicit formulas and functions from the curvature theory and the dual curvature theory. This paper explains the complete dualization of the ruled surface and shows that the linear and angular motion computed using the dual curvature theory is more accurate and less complex.
Keywords: dual curvature theory, robot end-effector, ruled surface, TCP (tool center point)
Procedia PDF Downloads 367
15386 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract:
The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as local binary patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results inapplicable to real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG), with occlusion detection in the face image, using a multi-class SVM for action unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and highlighted occlusion detection as a key step to handling expressions in the wild.
Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection
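The LBP half of an LBP-HOG descriptor can be sketched in a few lines of NumPy. This is a minimal radius-1, 8-neighbour variant without interpolation or uniform-pattern mapping, not the authors' exact implementation; the image patch is a random stand-in for a face region.

```python
import numpy as np

def lbp_image(img):
    """Radius-1, 8-neighbour LBP codes for the interior pixels of a 2-D
    grayscale image (no bilinear interpolation)."""
    c = img[1:-1, 1:-1]
    # neighbours enumerated clockwise starting at the top-left pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy : img.shape[0] - 1 + dy,
                 1 + dx : img.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the texture half of the
    descriptor, to be concatenated with HOG features before the SVM."""
    h, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return h / h.sum()

rng = np.random.default_rng(1)
face_patch = rng.integers(0, 256, size=(32, 32))  # stand-in for a face region
feature = lbp_histogram(face_patch)
```

Per-region histograms like this, computed only over non-occluded regions flagged by the occlusion detector, are what the multi-class SVM would consume.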
Procedia PDF Downloads 173