Search results for: non-linear measure

3854 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on predetermined criteria, experts’ preferences are elicited in order to rank and choose the most desirable suppliers. In the real business environment, attitudes and choices are made in uncertain and indecisive situations and cannot be expressed in a crisp framework. Intuitionistic fuzzy sets (IFSs) handle such situations well. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which is used to determine the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; we therefore extend the intuitionistic fuzzy (IF) VIKOR method to solve the vendor selection problem in an IF GDM environment. The present study develops an IF VIKOR method for a GDM situation. A model is presented to calculate the criterion weights based on an entropy measure. Then, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the aggregated decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
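
As a point of reference for the extension described above, the sketch below computes the classical (crisp) VIKOR ranking index Q from the group utility S and individual regret R for a small decision matrix; the vendor scores, weights and the compromise parameter v are purely hypothetical and are not taken from the study.

```python
import numpy as np

def vikor_rank(X, w, v=0.5):
    """Classical (crisp) VIKOR: rank alternatives (rows of X) on benefit criteria."""
    f_star = X.max(axis=0)   # ideal (best) value per criterion
    f_minus = X.min(axis=0)  # anti-ideal (worst) value per criterion
    # Weighted, normalized distance to the ideal solution
    d = w * (f_star - X) / (f_star - f_minus)
    S = d.sum(axis=1)        # group utility
    R = d.max(axis=1)        # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q           # lower Q = better compromise ranking

# Hypothetical vendor scores on three benefit criteria and entropy-style weights
X = np.array([[7.0, 8.0, 6.5],
              [6.0, 9.0, 7.0],
              [8.0, 6.5, 7.5]])
w = np.array([0.4, 0.35, 0.25])
S, R, Q = vikor_rank(X, w)
print("Q values:", Q, "best vendor:", int(Q.argmin()))
```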

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 137
3853 Dissipation Capacity of Steel Building with Friction Pendulum Base-Isolation System

Authors: A. Ras, I. Nait Zerrad, N. Benmouna, N. Boumechra

Abstract:

The use of base isolators in the seismic design of structures has attracted considerable attention in recent years. The major concern in the design of these structures is to have enough lateral stability to resist wind and seismic forces. Different systems provide such isolation; among them are friction-pendulum base isolation systems (FPS), which are now rather widely applied owing to both their affordable cost and the high fundamental periods they provide. These devices are characterised by stiff resistance to wind loads and flexibility under seismic tremors, which makes them suitable for different situations. In this paper, a 3D numerical investigation of the seismic response of a twelve-storey steel building retrofitted with an FPS is carried out. Fast nonlinear time-history analysis (FNA) of the Boumerdes earthquake (Algeria, May 2003) is performed using SAP2000 software. Comparisons between fixed-base, base-isolated, and braced structures are presented in tabular and graphical formats. The results of the various alternatives, studied to compare the structural response with and without this energy-dissipation device, are discussed, and the conclusions show the interesting potential of the FPS isolator. This system can improve the dissipative capacity of the structure without significantly increasing its rigidity, which contributes to optimizing the quantity of steel necessary for its general stability.

Keywords: energy dissipation, friction-pendulum system, nonlinear analysis, steel structure

Procedia PDF Downloads 189
3852 Investigating Jacket-Type Offshore Structures Failure Probability by Applying the Reliability Analyses Methods

Authors: Majid Samiee Zonoozian

Abstract:

For constructions as important as jacket-type platforms, scrupulous attention to the analysis, design and calculation processes is needed. Reliability assessment has become an extensively used method for the safety evaluation of jacket platforms. In the present study, a methodology for the reliability calculation of an offshore jacket platform against extreme wave loading is presented. Sensitivity analyses are applied to obtain the nonlinear response of jacket-type platforms to extreme waves. The jacket structure is modelled with a nonlinear finite-element model that accounts for the behaviour of the tubular members. The probability of a member’s failure under extreme wave loading is computed with a finite-element reliability code. The FORM and SORM approaches are applied to calculate the safety and reliability indices. A case study of a fixed jacket-type structure located in the Persian Gulf is analysed using the proposed method. Furthermore, the failure criteria are defined using the equations suggested by the 21st edition of API RP 2A-WSD for the design of tubular members of jacket-type structures under combined axial compression and bending. Finally, the effect of wave loads on the reliability index is examined.
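
As a minimal illustration of how a first-order reliability estimate relates the reliability index to the failure probability, the sketch below evaluates a simple resistance-minus-load limit state with independent normal variables; the capacity and wave-load statistics are illustrative only and do not come from the platform study.

```python
from scipy.stats import norm

def form_linear(mu_R, sig_R, mu_S, sig_S):
    """FORM for the linear limit state g = R - S with independent normal R (resistance)
    and S (wave-load effect): beta is exact here, and Pf = Phi(-beta)."""
    beta = (mu_R - mu_S) / (sig_R**2 + sig_S**2) ** 0.5
    return beta, norm.cdf(-beta)

# Illustrative member capacity and extreme wave-load effect (units arbitrary)
beta, pf = form_linear(mu_R=1200.0, sig_R=120.0, mu_S=800.0, sig_S=160.0)
print(f"reliability index beta = {beta:.2f}, failure probability = {pf:.2e}")
```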

Keywords: jacket-type structure, reliability, failure probability, tubular members

Procedia PDF Downloads 156
3851 The Influence of Shear Wall Position on Seismic Performance in Buildings

Authors: Akram Khelaifia, Nesreddine Djafar Henni

Abstract:

Reinforced concrete shear walls are essential components in protecting buildings from seismic forces by providing both strength and stiffness. This study focuses on optimizing the placement of shear walls in a high seismic zone. Through nonlinear analyses conducted on an eight-story building, various scenarios of shear wall positions are investigated to evaluate their impact on seismic performance. Employing a performance-based seismic design (PBSD) approach, the study aims to meet acceptance criteria related to inter-story drift ratio and damage levels. The findings emphasize the importance of concentrating shear walls in the central area of the building during the design phase. This strategic placement proves more effective compared to peripheral distributions, resulting in reduced inter-story drift and mitigated potential damage during seismic events. Additionally, the research explores the use of shear walls that completely infill the frame, forming compound shapes like Box configurations. It is discovered that incorporating such complete shear walls significantly enhances the structure's reliability concerning inter-story drift. Conversely, the absence of complete shear walls within the frame leads to reduced stiffness and the potential deterioration of short beams.
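
For reference, the inter-story drift ratio used as the acceptance criterion above can be obtained from floor displacements as in the short sketch below; the displacement values are hypothetical.

```python
def interstory_drift_ratios(displacements, story_heights):
    """Inter-story drift ratio per story: (d_i - d_{i-1}) / h_i."""
    drifts = []
    prev = 0.0  # displacement at ground level
    for d, h in zip(displacements, story_heights):
        drifts.append((d - prev) / h)
        prev = d
    return drifts

# Hypothetical lateral floor displacements (m) for an eight-story frame with 3 m stories
disp = [0.010, 0.022, 0.035, 0.047, 0.058, 0.067, 0.074, 0.079]
print([round(r, 4) for r in interstory_drift_ratios(disp, [3.0] * 8)])
```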

Keywords: performance level, pushover analysis, shear wall, plastic hinge, nonlinear analyses

Procedia PDF Downloads 34
3850 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It consists of several methods that assign fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures. These measures are the fractal dimension, Jeffrey’s measure and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether the signal is fractal or not. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of the software, a set of EEG signals was given as input, and the results were computed and plotted. The software is useful not only for fundamental fractal analysis of signals but also for other purposes. For instance, by analysing the Hurst exponent plot of an EEG signal from a patient with epilepsy, the onset of a seizure can be predicted by noticing sudden changes in the plot.
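
A minimal sketch of the dynamic (sliding-window) Hurst-exponent analysis described above is given below, using a crude single-window rescaled-range (R/S) estimate; the window fraction follows the 10% rule mentioned in the abstract, while the synthetic input signal is only a stand-in for real EEG data.

```python
import numpy as np

def rs_hurst(x):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent for one window."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())          # mean-adjusted cumulative deviations
    r = y.max() - y.min()                # range of the cumulative series
    s = x.std(ddof=1)                    # standard deviation of the window
    return np.log(r / s) / np.log(len(x))

def sliding_hurst(signal, window_frac=0.10):
    """Slide a window of ~10% of the samples one entry at a time, as described above."""
    n = len(signal)
    w = max(int(window_frac * n), 16)
    return [rs_hurst(signal[i:i + w]) for i in range(n - w + 1)]

# Illustrative use on synthetic uncorrelated noise (Hurst exponent expected near 0.5)
rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal(1000)
print(sliding_hurst(fake_eeg)[:5])
```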

Keywords: EEG signals, fractal analysis, fractal dimension, Hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 451
3849 Globalisation, Growth and Sustainability in Sub-Saharan Africa

Authors: Ourvashi Bissoon

Abstract:

Sub-Saharan Africa, in addition to being resource-rich, is increasingly seen as having huge growth potential and, as a result, is attracting more and more MNEs to its soil. To empirically assess the effectiveness of GDP in tracking sustainable resource use and the role played by MNEs in Sub-Saharan Africa, a panel data analysis has been undertaken for 32 countries over thirty-five years. The time horizon spans the period 1980-2014 to reflect the evolution from before the publication of the pioneering Brundtland report on sustainable development to date. Multinationals’ presence is proxied by the level of FDI stocks. The empirical investigation first focuses on the impact of trade openness and MNE presence on the traditional measure of economic growth, namely the GDP growth rate; then on the genuine savings (GS) rate, a measure of weak sustainability developed by the World Bank which assumes substitutability between different forms of capital; and finally on the adjusted net national income (aNNI), a measure of green growth which caters for the depletion of natural resources. For countries with significant exhaustible natural resources and an important foreign investor presence, the aNNI can be a better indicator of economic performance than GDP growth (World Bank, 2010). The issues of potential endogeneity and reverse causality are also addressed, in addition to robustness tests. The findings indicate that FDI and openness contribute significantly and positively to the GDP growth of the countries in the sample; however, there is a threshold level of institutional quality below which FDI has a negative impact on growth. When the GS rate is substituted for the GDP growth rate, a natural resource curse becomes evident: the rents being generated from the exploitation of natural resources are not being re-invested into other forms of capital, namely human and physical capital. FDI and trade patterns may be setting the economies in the sample on an unsustainable path of resource depletion. The resource curse is confirmed when utilising the aNNI as well, implying that the GDP growth measure may not be reliable for capturing sustainable development.
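
One way to operationalise the panel analysis sketched above is a two-way fixed-effects regression of the genuine savings rate on FDI stocks, openness and institutional quality; the sketch below is a hypothetical specification (the file name, column names and the interaction term used to capture the institutional-quality threshold are assumptions, not the authors' exact model).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per country-year
df = pd.read_csv("ssa_panel.csv")  # assumed columns: country, year, gs_rate, fdi_stock, openness, inst_quality

# Two-way (country and year) fixed-effects specification for the genuine savings rate;
# the fdi_stock:inst_quality interaction is one way to capture the institutional threshold
model = smf.ols(
    "gs_rate ~ fdi_stock + openness + inst_quality + fdi_stock:inst_quality "
    "+ C(country) + C(year)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.summary())
```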

Keywords: FDI, sustainable development, genuine savings, sub-Saharan Africa

Procedia PDF Downloads 200
3848 Analysis of Nonlinear Dynamic Systems Excited by Combined Colored and White Noise Excitations

Authors: Siu-Siu Guo, Qingxuan Shi

Abstract:

In this paper, single-degree-of-freedom (SDOF) systems subjected to white noise and colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables becomes four-dimensional (4-D), and the solution procedure and computer programme become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. In addition, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) of the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions. Moreover, the statistical parameter of the mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis.
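
The state-augmentation idea described above can be illustrated with the Monte Carlo side of the study: the sketch below simulates a Duffing-type SDOF oscillator driven by colored noise generated as second-order filtered white noise, giving the four state variables of the 4-D FPK setting; all parameter values are illustrative.

```python
import numpy as np

def simulate(T=200.0, dt=1e-3, seed=0):
    """Euler-Maruyama sample path of a Duffing-type SDOF oscillator driven by colored
    noise modelled as second-order filtered white noise. State augmentation gives
    (x, xdot, z, zdot), i.e. the four state variables of the 4-D FPK setting."""
    rng = np.random.default_rng(seed)
    w0, zeta, eps = 1.0, 0.05, 0.5       # oscillator parameters (illustrative)
    wf, zf, s0 = 2.0, 0.3, 1.0           # filter parameters / white-noise intensity (illustrative)
    n = int(T / dt)
    x = xd = z = zd = 0.0
    out = np.empty(n)
    for i in range(n):
        dW = rng.standard_normal() * np.sqrt(dt)
        zdd = -2 * zf * wf * zd - wf**2 * z           # second-order filter -> colored excitation z
        xdd = -2 * zeta * w0 * xd - w0**2 * x - eps * x**3 + z   # nonlinear oscillator driven by z
        x, xd = x + xd * dt, xd + xdd * dt
        z, zd = z + zd * dt, zd + zdd * dt + s0 * dW
        out[i] = x
    return out

path = simulate()
print("stationary std of displacement ~", path[len(path) // 2:].std())
```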

Keywords: filtered noise, narrow-banded noise, nonlinear dynamic, random vibration

Procedia PDF Downloads 213
3847 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but a number of areas remain where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake input data for use in nonlinear analysis and the method of analysis are still challenging issues. Realistic artificial ground motion input data must therefore be developed to ensure that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results due to the misinterpretation of the required input data and an incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the nonlinear inelastic behaviour of the structure and the soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial geometry of the structure due to the residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire with its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario is more complicated if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes, which neglect the importance of PEF and SSI, can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software, with the effects of SSI included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model that does not include the effects of soil-structure interaction is also created. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is found in the case where SSI is included in the scenario. The results are validated against the literature.

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 112
3846 Dynamics of Mach Zehnder Modulator in Open and Closed Loop Bias Condition

Authors: Ramonika Sengupta, Stuti Kachhwaha, Asha Adhiya, K. Satya Raja Sekhar, Rajwinder Kaur

Abstract:

Numerous efforts have been made in the past decade to develop methods of secure communication that are free from interception and eavesdropping. In fiber optic communication, chaotic optical carrier signals are used for data encryption in secure data transmission. Mach-Zehnder modulators (MZMs) are the key components for generating the chaotic signals to be used as optical carriers. This paper presents the dynamics of a lithium niobate MZM under various biasing conditions. The chaotic fluctuations of the intensity of a laser diode have been generated using the electro-optic MZM operating in a highly nonlinear regime. The modulator is driven in closed loop by its own output at an earlier time. When used as an electro-optic oscillator employing delayed feedback, the MZM displays a wide range of output waveforms of varying complexity. The dynamical behavior of the system ranges from periodic to nonlinear oscillations. The nonlinearity displayed by the system is reproducible and easily controllable. In this paper, we demonstrate the wide variety of optical signals generated by the MZM using easily controllable device parameters in both open- and closed-loop bias conditions.
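
A common minimal model for an MZM electro-optic oscillator with delayed feedback is an Ikeda-type map in which the sin² term plays the role of the modulator intensity transfer function; the sketch below is such a toy model with hypothetical gain, bias and delay values, not the parameters of the device studied here.

```python
import numpy as np

def mzm_delay_map(beta=3.0, phi=0.25 * np.pi, delay=20, n_steps=5000):
    """Discrete Ikeda-type sketch of an MZM electro-optic oscillator with delayed feedback:
    the sin^2 term mimics the MZM intensity transfer function, beta the feedback gain,
    phi the bias-dependent operating point (all values illustrative)."""
    x = np.zeros(n_steps + delay)
    x[:delay] = 0.1  # initial history
    for n in range(delay, n_steps + delay):
        x[n] = beta * np.sin(x[n - delay] + phi) ** 2
    return x[delay:]

# Small feedback gain -> the output settles to a steady/periodic value;
# larger gain -> irregular, chaos-like fluctuations
for gain in (1.0, 3.5):
    tail = mzm_delay_map(beta=gain)[-200:]
    spread = tail.max() - tail.min()
    print(f"beta={gain}: output spread over last 200 steps = {spread:.3f}")
```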

Keywords: chaotic carrier, fiber optic communication, Mach-Zehnder modulator, secure data transmission

Procedia PDF Downloads 254
3845 A Study on an Evacuation Test to Measure Delay Time in Using an Evacuation Elevator

Authors: Kyungsuk Cho, Seungun Chae, Jihun Choi

Abstract:

Elevators are being examined as one of the evacuation methods for super-tall buildings. However, data on the use of elevators for evacuation during a fire are extremely scarce. Therefore, a test was conducted to measure the delay time in using an evacuation elevator. In the test, the time taken to get on and off an elevator was measured, and the case in which people gave up boarding when the capacity of the elevator was exceeded was also taken into consideration. 170 men and women participated in the test, 130 of whom were young people (20-50 years old) and 40 senior citizens (over 60 years old). The capacity of the elevator was 25 people, and it travelled between the 2nd and 4th floors. A video recording device was used to analyse the test. An elevator in an ordinary building, not a super-tall building, was used to measure the delay time in getting on and off. In order to minimise interference from other elements, the elevator platforms on the 2nd and 4th floors were partitioned off. The elevator travelled between the 2nd and 4th floors, where people got on and off. If fewer than 20 people got on an empty elevator, the data were excluded. If the elevator carrying 10 passengers stopped and fewer than 10 new passengers got on, the data were also excluded. Boarding of an empty elevator was observed 49 times. The average number of passengers was 23.7, it took 14.98 seconds for the passengers to get on the empty elevator, and the load factor was 1.67 N/s. It took the passengers, whose average number was 23.7, 10.84 seconds to get off the elevator, and the unload factor was 2.33 N/s. When an elevator’s capacity is exceeded, the excess passengers have to get off; the time taken for this and the probability of the case were measured in the test. In 37% of the boardings the capacity was exceeded. As the number of people who gave up boarding increased, the load factor of the ride decreased. When 1 person gave up boarding, the load factor was 1.55 N/s; this case was observed 10 times, which was 12.7% of the total. When 2 people gave up boarding, the load factor was 1.15 N/s; this case was observed 7 times, which was 8.9% of the total. When 3 people gave up boarding, the load factor was 1.26 N/s; this case was observed 4 times, which was 5.1% of the total. When 4 people gave up boarding, the load factor was 1.03 N/s; this case was observed 5 times, which was 6.3% of the total. Getting-on and getting-off time data for people who can walk freely were obtained from the test. In addition, quantitative results were obtained on the relation between the number of people giving up boarding and the time taken to get on. This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-16-02-KICT).

Keywords: evacuation elevator, super tall buildings, evacuees, delay time

Procedia PDF Downloads 161
3844 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, based on the notion of observability: any feature smaller than the actual resolution (physical or numerical), i.e., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this regularization technique have been investigated and show the capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on the numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 482
3843 Resume Ranking Using Custom Word2vec and Rule-Based Natural Language Processing Techniques

Authors: Subodh Chandra Shakya, Rajendra Sapkota, Aakash Tamang, Shushant Pudasaini, Sujan Adhikari, Sajjan Adhikari

Abstract:

Many efforts have been made to measure the semantic similarity between the text corpora in documents, and techniques have evolved to measure the similarity of two documents. One such state-of-the-art technique in the field of Natural Language Processing (NLP) is the word-to-vector (word2vec) model, which converts words into word embeddings and measures the similarity between the vectors. We found this to be quite useful for the task of resume ranking. This research paper therefore implements a word2vec model along with other Natural Language Processing techniques in order to rank resumes for a particular job description and thereby automate part of the hiring process. The paper describes the proposed system and the findings that were made while building it.
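
The core ranking step, measuring similarity between a job description and each resume through word embeddings, can be sketched as below; the toy embedding table stands in for the custom word2vec model, and the rule-based extraction steps of the proposed system are not shown.

```python
import numpy as np

def doc_vector(tokens, embeddings, dim=100):
    """Average the word2vec embeddings of the tokens present in the vocabulary."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_resumes(job_tokens, resumes_tokens, embeddings):
    """Rank resumes by cosine similarity between the averaged embeddings of
    the job description and each resume (higher score = better match)."""
    jd = doc_vector(job_tokens, embeddings)
    scores = [(i, cosine(jd, doc_vector(r, embeddings))) for i, r in enumerate(resumes_tokens)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# `embeddings` would come from the trained custom word2vec model; here a toy stand-in
rng = np.random.default_rng(1)
embeddings = {w: rng.standard_normal(100) for w in ["python", "nlp", "sql", "java", "ml"]}
ranking = rank_resumes(["python", "nlp", "ml"],
                       [["java", "sql"], ["python", "ml"], ["nlp", "python", "sql"]],
                       embeddings)
print(ranking)
```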

Keywords: chunking, document similarity, information extraction, natural language processing, word2vec, word embedding

Procedia PDF Downloads 137
3842 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. In addition, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers seeking the best method for detecting normal signals from abnormal ones. The data come from both genders, the recording time varies from several seconds to several minutes, and all data are labelled normal or abnormal. Because of the limited positional accuracy and duration of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analysing heart rate variability over time to evaluate the activity of the heart and to differentiate different types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, the R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea is presented: in addition to using the statistical characteristics of the signal, a return map is created and nonlinear characteristics of the HRV signal are extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying patients' signals yields better performance. Today, research aims to quantitatively analyse the linear and non-linear, or descriptive and random, nature of the heart rate variability signal, because it has been shown that these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behaviour and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some information in this signal remains hidden from the physician's viewpoint, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centres.
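
The return-map (Poincaré plot) features mentioned above are commonly summarised by the SD1/SD2 descriptors of the R-R interval series; the sketch below shows this computation on a short, hypothetical R-R sequence.

```python
import numpy as np

def poincare_sd1_sd2(rr_ms):
    """Nonlinear return-map (Poincare plot) descriptors of an R-R interval series:
    SD1 = short-term variability (spread perpendicular to the identity line),
    SD2 = long-term variability (spread along the identity line)."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]                 # successive R-R pairs (the return map)
    sd1 = np.sqrt(np.var(x - y, ddof=1) / 2.0)
    sd2 = np.sqrt(np.var(x + y, ddof=1) / 2.0)
    return sd1, sd2

# Illustrative R-R intervals (ms), e.g. as produced after R-peak detection
rr = [812, 790, 805, 821, 798, 810, 835, 802, 795, 818]
sd1, sd2 = poincare_sd1_sd2(rr)
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1/sd2:.2f}")
```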

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 244
3841 Relation between Chronic Mechanical Low Back Pain and Hip Rotation

Authors: Mohamed M. Diab, Koura G. Mohamed, A. Balbaa, Radwan Sh. Ahamed

Abstract:

Background: Chronic mechanical low back pain (CMLBP) is the most common complaint of the working-age population. Mechanical low back pain is often a chronic, dull, aching pain of varying intensity that affects the lower spine. In the current proposal, the hip rotation-CMLBP relationship is based on the premise that limited hip motion is compensated by motion in the lumbopelvic region, and this increased force is transferred to the lumbar spine. The purpose of this study was to investigate whether there is a relationship between chronic mechanical low back pain (CMLBP) and hip medial and lateral rotation (peak torque and range of motion (ROM)) in patients with CMLBP. Methods: Sixty patients with CMLBP diagnosed by an orthopedist participated in the current study after signing a consent form. Their mean age was 23.76±2.39 years, mean weight 71.8±12.7 kg, mean height 169.65±7.49 cm and mean BMI 25.5±3.86 kg/m². A Visual Analogue Scale (VAS) was used to assess pain. A fluid-filled inclinometer was used to measure hip rotation ROM (medial and lateral). An isokinetic dynamometer was used to measure the peak torque of the hip rotator muscles (medial and lateral); concentric peak torque at two isokinetic speeds (60°/s and 180°/s) was selected. Results: The results of this study demonstrated that there is a poor relationship between pain and hip external rotation ROM and between pain and hip internal rotation ROM. There is also a poor relationship between pain and the peak torque of the hip internal and external rotators at both speeds. Conclusion: Based on the current study, it is not recommended to give particular importance to hip rotation in treating chronic mechanical low back pain.

Keywords: hip rotation ROM, hip rotators strength, chronic mechanical low back pain

Procedia PDF Downloads 289
3840 Numerical Solution of Space Fractional Order Linear/Nonlinear Reaction-Advection Diffusion Equation Using Jacobi Polynomial

Authors: Shubham Jaiswal

Abstract:

Fractional calculus plays an important role in the modelling of many physical problems and engineering processes, which are well described by fractional differential equations (FDEs). A reliable and efficient technique to solve such FDEs is therefore needed. In this article, a numerical solution of a class of fractional differential equations, namely space fractional order reaction-advection dispersion equations subject to initial and boundary conditions, is derived. In the proposed approach, shifted Jacobi polynomials are used to approximate the solutions, together with the shifted Jacobi operational matrix of fractional order and the spectral collocation method. The main advantage of this approach is that it converts such problems into systems of algebraic equations, which are easier to solve. The proposed approach is effective for linear as well as nonlinear FDEs. To show the reliability, validity and high accuracy of the proposed approach, numerical results for some illustrative examples are reported and compared with existing analytical results from the literature. The error analysis for each case, exhibited through graphs and tables, confirms the exponential convergence rate of the proposed method.

Keywords: space fractional order linear/nonlinear reaction-advection diffusion equation, shifted Jacobi polynomials, operational matrix, collocation method, Caputo derivative

Procedia PDF Downloads 430
3839 Empirical Exploration of Correlations between Software Design Measures: A Replication Study

Authors: Jehad Al Dallal

Abstract:

Software engineers apply different measures to quantify the quality of software design. These measures consider artifacts developed at the low- or high-level software design phases, and the results are used to point to design weaknesses and to indicate design points that have to be restructured. Understanding the relationships among the quality measures, and among the design quality aspects these measures consider, is important for interpreting the impact of a measure of one quality aspect on other potentially related aspects. In addition, exploring the relationships between quality measures helps to explain the impact of different quality measures on external quality aspects, such as reliability and maintainability. In this paper, we report a replication study that empirically explores the correlation between six well-known and commonly applied design quality measures. These measures consider several quality aspects, including complexity, cohesion, coupling, and inheritance. The results indicate that the inheritance measures are weakly correlated with the other measures, whereas the complexity, coupling, and cohesion measures are mostly strongly correlated.
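
A sketch of the correlation analysis is given below: given a per-class table of the six design measures, pairwise Spearman rank correlations can be computed as follows (the file and column names are assumptions, not the study's actual dataset).

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-class measurement table: one row per class, one column per design measure
df = pd.read_csv("design_measures.csv")  # assumed columns: LCOM, CBO, WMC, DIT, NOC, RFC

rho, pval = spearmanr(df)  # pairwise Spearman rank correlations between all measures
corr = pd.DataFrame(rho, index=df.columns, columns=df.columns)
print(corr.round(2))       # e.g. check whether inheritance measures correlate weakly with the rest
```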

Keywords: quality attribute, quality measure, software design quality, Spearman correlation

Procedia PDF Downloads 281
3838 Portable, Noninvasive and Wireless Near Infrared Spectroscopy Device to Monitor Skeletal Muscle Metabolism during Exercise

Authors: Adkham Paiziev, Fikrat Kerimov

Abstract:

Near infrared spectroscopy (NIRS) is one of the biophotonic techniques that can be used to monitor oxygenation and hemodynamics in a variety of human tissues, including skeletal muscle. In the present work, we offer a tissue oximeter (OxyPrem) to measure hemodynamic parameters of skeletal muscles at rest and during exercise. Purpose: (i) to develop a new wireless, portable, noninvasive, wearable NIRS device to measure skeletal muscle oxygenation during exercise; (ii) to test this device on the brachioradialis muscle of wrestler volunteers using a combined method of arterial occlusion (AO) and NIRS (AO+NIRS). Methods: The OxyPrem NIRS device was used together with the AO test. The AO test and isometric brachioradialis muscle contraction experiments were performed on one group of wrestler volunteers. An 'Accu-Measure' caliper (USA) was used to measure skinfold thickness (SFT). Results: The elaborated device consists of a power supply box, a sensor head and the 'Tubis' software for data acquisition, which computes deoxyhemoglobin ([HHb]), oxyhemoglobin ([O2Hb]), tissue oxygenation (StO2) and muscle tissue oxygen consumption (mVO2). The sensor head consists of four light sources, with three light-emitting diodes at nominal wavelengths of 760 nm, 805 nm and 870 nm, and two detectors. AO and isometric voluntary forearm muscle contraction (IVFMC) were measured on five healthy male subjects (23.2±0.84 years of age, SFT 0.43±0.05 cm) and four female subjects (22.0±1.0 years of age, SFT 0.24±0.04 cm). mVO2 for the control group was calculated as -0.65±0.07 %/s for the male and -0.69±0.19 %/s for the female subjects. The tissue oxygenation index for the wrestlers averaged about 75%, whereas for the control group StO2 = 63%. The second experiment concerned monitoring the quality of muscle activity during IVFMC at 10%, 30% and 50% of MVC. It was shown that the concentration changes of [O2Hb] and [HHb] correlated positively with the contraction intensity. Conclusion: We have presented a portable multi-channel wireless NIRS device for real-time monitoring of muscle activity. The miniaturized NIRS sensor and the use of wireless communication make the whole device compact, so it can be used for muscle monitoring.
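
Concentration changes such as [O2Hb] and [HHb] are typically recovered from optical-density changes via the modified Beer-Lambert law; the two-wavelength sketch below illustrates the algebra with made-up extinction coefficients and path-length factors, not the device's calibration values.

```python
import numpy as np

def mbll_two_wavelength(d_od, eps, pathlength_cm, dpf):
    """Modified Beer-Lambert law: recover concentration changes (dO2Hb, dHHb)
    from optical-density changes at two wavelengths.
    d_od: length-2 array of delta-OD; eps: 2x2 extinction coefficients
    [[eps_O2Hb(l1), eps_HHb(l1)], [eps_O2Hb(l2), eps_HHb(l2)]]."""
    A = np.asarray(eps) * pathlength_cm * dpf
    return np.linalg.solve(A, np.asarray(d_od))  # (dO2Hb, dHHb)

# Illustrative numbers only -- real extinction coefficients and DPF are wavelength-specific
eps = [[1.2, 3.2],   # ~760 nm: HHb absorbs more strongly
       [2.8, 1.8]]   # ~870 nm: O2Hb absorbs more strongly
d_o2hb, d_hhb = mbll_two_wavelength([0.012, 0.018], eps, pathlength_cm=3.0, dpf=4.0)
print(f"delta[O2Hb] = {d_o2hb:.4f}, delta[HHb] = {d_hhb:.4f} (arbitrary concentration units)")
```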

Keywords: skeletal muscle, oxygenation, instrumentation, near infrared spectroscopy

Procedia PDF Downloads 262
3837 Q-Test of Undergraduate Epistemology and Scientific Thought: Development and Testing of an Assessment of Scientific Epistemology

Authors: Matthew J. Zagumny

Abstract:

The QUEST is an assessment of scientific epistemic beliefs that was developed to measure students’ intellectual development with regard to beliefs about knowledge and knowing. The QUEST utilizes Q-sort methodology, which requires participants to rate the degree to which statements describe them personally. As a measure of personal theories of knowledge, the QUEST instrument is described, and the Q-sort distribution and scoring are explained. A preliminary demonstration of the QUEST assessment is described, in which two samples of undergraduate students (novice/lower-division compared to advanced/upper-division students) were assessed and their average QUEST scores compared. The usefulness of an assessment of epistemology is discussed in terms of the principle that assessment tends to drive educational practice and university mission. The critical need for universities and academic programs to focus on the development of students’ scientific epistemology is briefly discussed.

Keywords: scientific epistemology, critical thinking, Q-sort method, STEM undergraduates

Procedia PDF Downloads 364
3836 Bi-Axial Stress Effects on Barkhausen-Noise

Authors: G. Balogh, I. A. Szabó, P.Z. Kovács

Abstract:

Mechanical stress has a strong effect on the magnitude of the Barkhausen-noise in structural steels. Because the measurements are performed at the surface of the material, for a sample sheet the full effect can be described by a biaxial stress field. The measured Barkhausen-noise depends on the orientation of the exciting magnetic field relative to the axes of the stress tensor. Sample inhomogeneities, including the residual stress, also modify the angular dependence of the measured Barkhausen-noise. We have developed a laboratory device with a cross-like specimen for bi-axial bending. The measuring head allows excitation in two orthogonal directions. We could excite the two directions independently or simultaneously with different amplitudes, and the simultaneous excitation of the two coils could be performed in phase or with a 90-degree phase shift. In principle, this allows measuring the Barkhausen-noise in an arbitrary direction without moving the head, or measuring the Barkhausen-noise induced by a rotating magnetic field, if a linear superposition of the two fields can be assumed.
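
The linear-superposition argument above can be illustrated numerically: the sketch below combines two orthogonal coil excitations, showing that in-phase drive fixes the field direction while equal amplitudes with a 90-degree shift give a rotating field of constant magnitude (amplitudes and frequency are arbitrary).

```python
import numpy as np

def excitation_field(a_x, a_y, phase_shift_deg, freq_hz, t):
    """Linear superposition of two orthogonal excitation coils:
    in-phase excitation gives a fixed field direction set by atan2(a_y, a_x),
    while equal amplitudes with a 90-degree shift give a rotating field."""
    w = 2 * np.pi * freq_hz
    hx = a_x * np.sin(w * t)
    hy = a_y * np.sin(w * t + np.deg2rad(phase_shift_deg))
    return hx, hy

t = np.linspace(0, 0.1, 1000)
# In-phase drive: the field oscillates along a line at ~33.7 degrees to the x-axis
hx, hy = excitation_field(1.5, 1.0, 0.0, 50.0, t)
print("fixed direction (deg):", round(np.degrees(np.arctan2(1.0, 1.5)), 1))
# Equal amplitudes with a 90-degree shift: rotating field of constant magnitude
hx, hy = excitation_field(1.0, 1.0, 90.0, 50.0, t)
mag2 = hx**2 + hy**2
print("field magnitude range:", round(mag2.min(), 3), "-", round(mag2.max(), 3))
```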

Keywords: Barkhausen-noise, bi-axial stress, stress measuring, stress dependency

Procedia PDF Downloads 279
3835 Bienzymatic Nanocomposites Biosensors Complexed with Gold Nanoparticles, Polyaniline, Recombinant MN Peroxidase from Corn, and Glucose Oxidase to Measure Glucose

Authors: Anahita Izadyar

Abstract:

Using a recombinant enzyme derived from corn and a simple modification, we are fabricating a facile, fast, and cost-effective novel biosensor to measure glucose. We apply plant-produced Mn peroxidase (PPMP), glucose oxidase (GOx), polyaniline (PANI) as a conductive polymer, and gold nanoparticles (AuNPs) on a gold electrode, using the electrochemical response to detect glucose. We applied the entrapment method of enzyme immobilization, which is generally used to immobilize a conductive polymer and facilitate electron transfer from the enzyme oxidation-reduction center to the sample solution. In this work, the oxidation of glucose on the modified gold electrode was quantified with linear sweep voltammetry (LSV). We expect that the modified biosensor has the potential for monitoring various biofluids.

Keywords: plant-produced manganese peroxidase, enzyme-based biosensors, glucose, modified gold nanoparticles electrode, polyaniline

Procedia PDF Downloads 179
3834 Lean Impact Analysis Assessment Models: Development of a Lean Measurement Structural Model

Authors: Catherine Maware, Olufemi Adetunji

Abstract:

The paper is aimed at developing a model to measure the impact of Lean manufacturing deployment on organizational performance. The model will help industry practitioners assess the impact of implementing Lean constructs on organizational performance. It will also harmonize the measurement models of Lean performance with the house of Lean that seems to have become the industry standard. The sheer number of measurement models for impact assessment of Lean implementation makes it difficult for new adopters to select an appropriate assessment model or deployment methodology. A literature review is conducted to classify the Lean performance models, and Pareto analysis is used to select the Lean constructs for the development of the model. The model is further formalized through the use of Structural Equation Modeling (SEM) in defining the underlying latent structure of a Lean system. The impact assessment measurement model developed can be used to measure Lean performance and can be adopted by different industries.

Keywords: impact measurement model, lean bundles, lean manufacturing, organizational performance

Procedia PDF Downloads 465
3833 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are signal denoising, data compression and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first class is based on the multiplicative inverse in a finite field, and in the second construction the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 476
3832 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the bag-of-words approach, and the score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset; the remaining 80%, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
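
A minimal token-level Smith-Waterman scorer of the kind used for feature extraction is sketched below; the scoring parameters and the two example sentences are illustrative and not taken from the annotated corpus.

```python
def smith_waterman(tokens_a, tokens_b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences.
    The best local score can be used as a document-similarity feature."""
    rows, cols = len(tokens_a) + 1, len(tokens_b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if tokens_a[i - 1] == tokens_b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # align / substitute
                          H[i - 1][j] + gap,     # gap in sequence b
                          H[i][j - 1] + gap)     # gap in sequence a
            best = max(best, H[i][j])
    return best

doc1 = "patient has a history of morbid obesity and diabetes".split()
doc2 = "history of morbid obesity noted in the patient record".split()
print("local alignment score:", smith_waterman(doc1, doc2))
```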

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 281
3831 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
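
As an illustration of the 1-hot encoding mentioned above, the sketch below builds the QUBO penalty that forces exactly one binary variable to be active for an integer decision variable; the value range and penalty weight are hypothetical and independent of the D-Wave embedding details.

```python
import itertools

def one_hot_qubo(values, penalty=10.0):
    """QUBO dictionary encoding an integer variable via 1-hot binary bits b_k
    (b_k = 1 means the variable takes values[k]); the penalty enforces sum_k b_k = 1.
    penalty * (sum_k b_k - 1)^2 = penalty * (-sum_k b_k + 2*sum_{k<l} b_k b_l) + const."""
    Q = {}
    for k in range(len(values)):
        Q[(k, k)] = -penalty                 # linear part (diagonal of the QUBO matrix)
    for k, l in itertools.combinations(range(len(values)), 2):
        Q[(k, l)] = 2.0 * penalty            # pairwise penalty for selecting two bits at once
    return Q

def decode(bits, values):
    """Map a 1-hot bit assignment back to the integer value (None if infeasible)."""
    on = [k for k, b in enumerate(bits) if b == 1]
    return values[on[0]] if len(on) == 1 else None

Q = one_hot_qubo(values=[0, 1, 2, 3])
print(Q)
print(decode([0, 0, 1, 0], values=[0, 1, 2, 3]))  # -> 2
```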

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 506
3830 The Effects of Prosthetic Leg Stiffness on Gait, Comfort, and Satisfaction: A Review of Mechanical Engineering Approaches

Authors: Kourosh Fatehi, Niloofar Hanafi

Abstract:

One of the challenges in providing optimal prosthetic legs for lower limb amputees is to select the appropriate foot stiffness that suits their individual needs and preferences. Foot stiffness affects various aspects of walking, such as stability, comfort, and energy expenditure. However, the current prescription process is largely based on trial-and-error, manufacturer recommendations, or clinician judgment, which may not reflect the prosthesis user’s subjective experience or psychophysical sensitivity. Therefore, there is a need for more scientific and technological tools to measure and understand how prosthesis users perceive and prefer different foot stiffness levels, and how this preference relates to clinical outcomes. This review covers how to measure and design lower leg prostheses based on user preference and foot stiffness. It also explores how these factors affect walking outcomes and quality of life, and identifies the current challenges and gaps in this field from a mechanical engineering standpoint.

Keywords: perception, preference, prosthetics, stiffness

Procedia PDF Downloads 60
3829 Research Attitude: Its Factor Structure and Determinants in the Graduate Level

Authors: Janet Lynn S. Montemayor

Abstract:

Falling survival rates and rising drop-out rates in graduate school are attributed to the demands that come with research-related requirements. Graduate students tend to withdraw from their studies when confronted with such requirements, and this act of succumbing to the challenge is primarily due to a negative mindset. An understanding of students’ views of research is essential for teachers facilitating research activities in the graduate school. This study aimed to develop a tool that accurately measures attitude towards research. The psychometric properties of the Research Attitude Inventory (RAIn) were assessed. A pool of items (k=50) was initially constructed and administered to a development sample composed of master's and doctoral students (n=159). Results show that the RAIn is a reliable measure of research attitude (k=41, αmax = 0.894). Principal component analysis using orthogonal rotation with Kaiser normalization identified four underlying factors of research attitude, namely predisposition, purpose, perspective, and preparation. Research attitude among the respondents was then analyzed using this measure.
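
The reliability and factor-structure analysis reported above can be sketched as below: Cronbach's alpha for the retained items and a principal component decomposition (the orthogonal rotation with Kaiser normalization used in the study is omitted); the simulated Likert responses are only a stand-in for the actual development sample.

```python
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical 5-point Likert responses: 159 respondents x 41 retained items
rng = np.random.default_rng(0)
latent = rng.normal(size=(159, 1))
X = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(159, 41))), 1, 5)

print("alpha =", round(cronbach_alpha(X), 3))
pca = PCA(n_components=4).fit(X - X.mean(axis=0))   # four components, cf. the four factors
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
```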

Keywords: graduate education, principal component analysis, research attitude, scale development

Procedia PDF Downloads 182
3828 Probability Fuzzy Aggregation Operators in Vehicle Routing Problem

Authors: Anna Sikharulidze, Gia Sirbiladze

Abstract:

For the evaluation of unreliability levels of movement on closed routes in the vehicle routing problem, a family of fuzzy operators is constructed. The interactions between routing factors in extreme road conditions are considered, and a multi-criteria decision-making (MCDM) model is built. The constructed aggregations are based on the Choquet integral and the associated probability class of a fuzzy measure. Propositions on the correctness of the extension are proved. Connections between the operators and the compositions of dual triangular norms are described, and the conjugate connections between the constructed operators are shown. The operators reflect interactions among all combinations of the factors in the fuzzy MCDM process. Several variants of the constructed operators are used in a decision-making problem regarding the assessment of unreliability and possibility levels of movement on closed routes.
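
A discrete Choquet integral with respect to a fuzzy measure, the building block of the aggregations described above, can be sketched as follows; the two-criterion fuzzy measure is purely illustrative.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of criterion scores with respect to a fuzzy measure mu.
    mu maps frozensets of criterion indices to [0, 1], with mu(empty)=0 and mu(all)=1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # indices by ascending score
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = frozenset(order[pos:])          # criteria with score >= scores[i]
        total += (scores[i] - prev) * mu[coalition]
        prev = scores[i]
    return total

# Illustrative 2-criterion fuzzy measure expressing interaction between routing factors
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
print(choquet_integral([0.6, 0.8], mu))  # 0.6*mu({0,1}) + (0.8-0.6)*mu({1}) = 0.7
```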

Keywords: vehicle routing problem, associated probabilities of a fuzzy measure, choquet integral, fuzzy aggregation operator

Procedia PDF Downloads 305
3827 Empirical Investigation for the Correlation between Object-Oriented Class Lack of Cohesion and Coupling

Authors: Jehad Al Dallal

Abstract:

The design of the internal relationships among object-oriented class members (i.e., attributes and methods) and the external relationships among classes affects the overall quality of the object-oriented software. The degree of relatedness among class members is referred to as class cohesion and the degree to which a class is related to other classes is called class coupling. Well designed classes are expected to exhibit high cohesion and low coupling values. In this paper, using classes of three open-source Java systems, we empirically investigate the relation between class cohesion and coupling. In the empirical study, five lack-of-cohesion metrics and eight coupling metrics are considered. The empirical study results show that class cohesion and coupling internal quality attributes are inversely correlated. The strength of the correlation highly depends on the cohesion and coupling measurement approaches.

Keywords: class cohesion measure, class coupling measure, object-oriented class, software quality

Procedia PDF Downloads 218
3826 Development of the New York Misophonia Scale: Implications for Diagnostic Criteria

Authors: Usha Barahmand, Maria Stalias, Abdul Haq, Esther Rotlevi, Ying Xiang

Abstract:

Misophonia is a condition in which specific repetitive oral, nasal, or other sounds and movements made by humans trigger impulsive aversive reactions of irritation or disgust that instantly become anger. A few measures exist for the assessment of misophonia, but each has some limitations, and evidence for a formal diagnosis is still lacking. The objective of this study was to develop a reliable and valid measure of misophonia for use in the general population. Adopting a purely descriptive approach, this study focused on developing a self-report measure using all triggers and reactions identified in previous studies on misophonia. A measure with two subscales, one assessing the aversive quality of various triggers and the other assessing reactions of individuals, was developed. Data were gathered from a large sample of both men and women ranging in age from 18 to 65 years. Exploratory factor analysis revealed three main triggers: oral/nasal sounds, hand and leg movements, and environmental sounds. Two clusters of reactions also emerged: nonangry attempts to avoid the impact of the aversive stimuli and angry attempts to stop the aversive stimuli. The examination of the psychometric properties of the scale revealed its internal consistency and test-retest reliability to be excellent. The scale was also found to have very good concurrent and convergent validity. Significant annoyance and disgust in response to the triggers were reported by 12% of the sample, although for some specific triggers, rates as high as 31% were also reported. These findings have implications for the delineation of the criteria for identifying misophonia as a clinical condition.

Keywords: adults, factor analysis, misophonia, psychometric properties, scale

Procedia PDF Downloads 178
3825 Valuation of Caps and Floors in a LIBOR Market Model with Markov Jump Risks

Authors: Shih-Kuei Lin

Abstract:

The characterization of the arbitrage-free dynamics of interest rates is developed in this study in the presence of Markov jump risks, when the term structure of interest rates is modeled through simple forward rates. We consider Markov jump risks by allowing randomness in the jump sizes and independence between jump sizes and jump times. The Markov jump diffusion model is used to capture empirical phenomena and to accurately describe interest rate jump risks in a financial market. We derive the arbitrage-free model of simple forward rates under the spot measure. Moreover, the analytical pricing formulas for a cap and a floor are derived under the forward measure when the jump size follows a lognormal distribution. In our empirical analysis, we find that the LIBOR market model with Markov jump risks better accounts for changes from/to different states and different rates.
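
For context, the sketch below prices a caplet with the standard Black formula of the (no-jump) LIBOR market model under the forward measure; the study's formula adds a lognormal-jump correction on top of this benchmark, which is not reproduced here, and the inputs are illustrative.

```python
from math import log, sqrt
from scipy.stats import norm

def black_caplet(forward, strike, vol, expiry, tenor, discount):
    """Black's caplet price in the standard (no-jump) LIBOR market model under the
    forward measure; the paper's jump-diffusion formula extends this benchmark."""
    d1 = (log(forward / strike) + 0.5 * vol**2 * expiry) / (vol * sqrt(expiry))
    d2 = d1 - vol * sqrt(expiry)
    return discount * tenor * (forward * norm.cdf(d1) - strike * norm.cdf(d2))

# Illustrative caplet: 3% forward LIBOR, 3% strike, 20% vol, reset in 1 year, 6-month tenor
price = black_caplet(forward=0.03, strike=0.03, vol=0.20, expiry=1.0,
                     tenor=0.5, discount=0.97)
print(f"caplet price per unit notional = {price:.5f}")
```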

Keywords: arbitrage-free, cap and floor, Markov jump diffusion model, simple forward rate model, volatility smile, EM algorithm

Procedia PDF Downloads 407