Search results for: inverse square law
1594 Optimization of Robotic Arm Movement Using Soft Computing
Authors: V. K. Banga
Abstract:
Robots are now playing a very promising role in industry. They are commonly used in repetitive operations or where operation by a human is either risky or not feasible. In most industrial applications, robotic arm manipulators are widely used. Robotic arm manipulators with two-link or three-link structures are common because of their low degrees-of-freedom (DOF) movement; as the DOF of a robotic arm increases, its complexity increases. The instrumentation involved in robotics plays a very important role in interacting with the outer environment. In this work, optimal control of the movement of various DOFs of a robotic arm using various soft computing techniques is presented. We discuss different robotic structures with arm movements of various DOFs. Further stress is placed on the kinematics of the arm structures, i.e., forward kinematics and inverse kinematics. Trajectory planning of robotic arms using soft computing techniques demonstrates the flexibility of these techniques: the performance is optimized over all possible input values, yielding optimized movement as the resultant output. In conclusion, soft computing plays a very important role in achieving optimized movement of a robotic arm, and it requires only very limited knowledge of the system to implement. Keywords: artificial intelligence, kinematics, robotic arm, neural networks, fuzzy logic
Procedia PDF Downloads 297
1593 Critical Parameters of a Square-Well Fluid
Authors: Hamza Javar Magnier, Leslie V. Woodcock
Abstract:
We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short attraction of length λσ. When λ = 1.005 the range is so short that the model atoms are referred to as “adhesive spheres”. Molecular dimers, trimers, etc., up to large clusters, or droplets, of many adhesive-sphere atoms are unambiguously defined. This then defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures, and which define the existence of a supercritical mesophase. Both liquid and gas phases are seen to terminate at the loci of percolation transitions, and below a second characteristic temperature (Tc2) are separated by the supercritical mesophase. An analysis of the distribution of clusters in gas, meso- and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared with both experimental properties of the water-steam supercritical region and also with the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that in this system the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. The expanded plot of the MD data points in the mesophase of 7 critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as temperature is increased above the critical. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid which is an extension of the lever rule in the subcritical region, and can be used to obtain critical parameters without resorting to experimental data in the two-phase region.
Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values reported by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids. Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters
Procedia PDF Downloads 521
1592 Comparative Analysis of Islamic and Conventional Banking Systems in Terms of Profitability: A Study on Emerging Market Economies
Authors: Alimshan Faizulayev, Eralp Bektas, Abdul Ghafar Ismail, Bezhan Rustamov
Abstract:
This paper performs an empirical analysis of the determinants of profitability in Islamic and Conventional banks. The main focus of this study is to evaluate and measure the financial performance of Islamic banking firms operating in Egypt, Iran, Malaysia, Pakistan, Turkey, and the UAE in contrast to Conventional ones in those countries. To evaluate the performance of the banks empirically, various financial ratios are employed. We measure performance in terms of liquidity, profitability, solvency, and efficiency. In this work, the t-test, F-test, and OLS analysis are used to test the hypotheses. Our findings reveal that there are similarities and differences in the profitability determinants of Islamic and Conventional banking firms. The cost-to-revenue ratio has an inverse relationship with profitability indicators in both banking systems. However, there are differences in financial performance between Conventional and Islamic banks, which appear in the overall picture of all banks in terms of net income margin. Keywords: Islamic banking, conventional banking, GDP growth, emerging market economies
Procedia PDF Downloads 398
1591 Fundamental Solutions for Discrete Dynamical Systems Involving the Fractional Laplacian
Authors: Jorge Gonzalez Camus, Valentin Keyantuo, Mahamadi Warma
Abstract:
In this work, we obtain representation results for solutions of a time-fractional differential equation involving the discrete fractional Laplace operator in terms of generalized Wright functions. Such equations arise in the modeling of many physical systems, for example, chain processes in chemistry and radioactivity. The focus is on the linear problem of the simplified Moore-Gibson-Thompson equation, where the discrete fractional Laplacian and the Caputo fractional derivative of order in (0,2] are involved. As a particular case, we obtain the explicit solution for the discrete heat equation and the discrete wave equation. Furthermore, we show the explicit solution for the equation involving the Laplacian perturbed by the identity operator. The main tools for obtaining the explicit solution are the Laplace and discrete Fourier transforms and Stirling's formula. The methodology is mainly to apply both transforms to the equation, find the inverse of each transform, and prove that this solution is well defined using Stirling's formula. Keywords: discrete fractional Laplacian, explicit representation of solutions, fractional heat and wave equations, fundamental
Procedia PDF Downloads 209
1590 Synthesis and Characterization of Mixed Ligand Complexes of Bipyridyl and Glycine with Different Counter Anions as Functional Antioxidant Enzyme Mimics
Authors: Mohamed M. Ibrahim, Gaber A. M. Mersal, Salih Al-Juaid, Samir A. El-Shazly
Abstract:
A series of mixed ligand complexes, viz., [Cu(BPy)(Gly)X]Y {X = Cl (1), Y = 0; X = 0, Y = ClO4- (2); X = H2O, Y = NO3- (3); X = H2O, Y = CH3COO- (4); and [Cu(BPy)(Gly)-(H2O)]2(SO4) (5)} have been synthesized. Their structures and properties were characterized by elemental analysis, thermal analysis, IR, UV–vis, and ESR spectroscopy, as well as electrochemical measurements including cyclic voltammetry, electrical molar conductivity, and magnetic moment measurements. Complexes 1 and 2 formed slightly distorted square-pyramidal coordination geometries of CuN3OCl and CuN3O2, respectively, in which the N,O-donor glycine and N,N-donor bipyridyl bind at the basal plane with a chloride ion or water as the axial ligand. Complex 3 shows a square planar CuN3O coordination geometry, which exhibits chemically significant hydrogen bonding interactions besides showing coordination polymer formation. The superoxide dismutase and catalase-like activities of all complexes were tested, and the complexes were found to be promising candidates as durable electron-transfer catalysts, close to the efficiency of the mimicked enzymes displaying either catalase or tyrosinase activity, serving for complete reactive oxygen species (ROS) detoxification with respect to both superoxide radicals and related peroxides. The DNA binding interaction with supercoiled pGEM-T plasmid DNA was investigated using spectral (absorption and emission) titration and electrochemical techniques. The results revealed that complexes 1 and 2 intercalate with DNA through the groove binding mode. The calculated intrinsic binding constants (Kb) of 1 and 2 were 4.71 and 2.429 × 10⁵ M⁻¹, respectively. A gel electrophoresis study reveals that both complexes cleave supercoiled pGEM-T plasmid DNA to nicked and linear forms in the absence of any additives.
On the other hand, upon interaction of both complexes with DNA, the quasi-reversible CuII/CuI redox couple slightly improves its reversibility, with a considerable decrease in current intensity. All the experimental results indicate that the bipyridyl mixed copper(II) complex (1) intercalates more effectively into the DNA base pairs. Keywords: enzyme mimics, mixed ligand complexes, X-ray structures, antioxidant, DNA-binding, DNA cleavage
Procedia PDF Downloads 544
1589 The Role of Ecotourism Development in the Financing of Conservation Initiatives in Cameroon’s Protected Areas: Lessons from the Campo Ma’an National Park
Authors: Nyong Princely Awazi, Gadinga Walter Forje, Barnabas Neba Nfornkah, Ndzifon Jude Kimengsi
Abstract:
Ecotourism is documented as a sustainable means of bridging conservation goals and livelihood sustenance around protected areas, owing to its ability not just to provide alternative livelihoods but also to provide the resources that can help finance conservation initiatives. In Cameroon, all ecotourism activities around national parks are aimed at generating revenue through the conservation service while providing sustainable livelihood options to the local population. There is an information lacuna regarding the contribution of ecotourism finances to conservation efforts in the country. This study aimed to establish the contribution of ecotourism finances to conservation initiatives in and around the Campo Ma’an National Park (CMNP). Data were collected through the administration of 120 structured questionnaires to ecotourism actors and 15 key/expert interviews with tourism and conservation actors in the Campo Ma’an landscape. The chi-square test, Spearman’s rank correlation, and regressions were used for data analysis. The study revealed that the main sources of ecotourism financing for the park service are entrance fees, camera and vehicle fees paid by tourists, and ecotourism project financing through NGOs. Calculations from the tourism register of the park showed that the park was able to raise as much as 1,576,000 FCFA (US$ 3,152) annually. It was further established that ecotourism revenue has not greatly supported conservation, with 54% of respondents perceiving that ecotourism does not contribute to biodiversity conservation. Chi-square test results highlighted poor ecotourism governance, a low level of ecotourism development, corruption among park management staff, and the obsolete nature of the current finance law on the management of protected area revenue as key factors hindering ecotourism financing of conservation.
For ecotourism financing to contribute to biodiversity conservation in the CMNP and in Cameroon’s protected areas, the government needs to revise the finance law on the management of revenue generated from protected areas, improve park governance to fight corruption and enhance transparency, and invest in the development and marketing of the Campo Ma’an National Park as a tourism destination in the country. Keywords: Cameroon, Campo Ma’an National Park, conservation, ecotourism, ecotourism financing
Procedia PDF Downloads 110
1588 Normalized Compression Distance Based Scene Alteration Analysis of a Video
Authors: Lakshay Kharbanda, Aabhas Chauhan
Abstract:
In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to Normalized Information Distance (NID), by studying the degree of similarity in images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using Pixel Difference Percentage metrics. Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error
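The NCD computation this abstract relies on can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it uses `bz2` (one of the two compressors mentioned), and the byte strings stand in for serialized video frames.

```python
import bz2

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(bz2.compress(x))
    cy = len(bz2.compress(y))
    cxy = len(bz2.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar "frames" compress well together (low NCD); unrelated ones do not.
frame_a = b"0123456789" * 200
frame_b = b"0123456789" * 199 + b"0123456798"          # nearly identical frame
frame_c = bytes((i * 97 + 13) % 256 for i in range(2000))  # unrelated content

assert ncd(frame_a, frame_b) < ncd(frame_a, frame_c)
```

In a scene-alteration detector of the kind described, consecutive frame pairs whose NCD exceeds the chosen threshold would mark a candidate scene change.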
Procedia PDF Downloads 340
1587 A Stokes Optimal Control Model of Determining Cellular Interaction Forces during Gastrulation
Authors: Yuanhao Gao, Ping Lin, Kees Weijer
Abstract:
In this paper, an optimal control system model is proposed for the cell flow in the process of chick embryo gastrulation. The target is to determine the cellular interaction forces, which are hard to measure. This paper takes the approach of investigating the forces through the idea of the inverse problem. By choosing the forces as the control variable and regarding the cell flow as a Stokes fluid, an objective functional is established to match the numerical result of the cell velocity with the experimental data, so that the forces can be determined by minimizing the objective functional. The Lagrange multiplier method is utilized to derive the state and adjoint equations constituting the optimal control system, which specifies the first-order necessary conditions. The finite element method is used to discretize and approximate the equations. A conjugate gradient algorithm is given for solving the minimum of the system and determining the forces. Keywords: optimal control model, Stokes equation, conjugate gradient method, finite element method, chick embryo gastrulation
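The conjugate gradient step mentioned in the abstract can be illustrated on a tiny quadratic problem: minimizing ½xᵀAx − bᵀx for a symmetric positive-definite A, which is the role the iteration plays in driving an objective functional to its minimum. The 2×2 matrix is purely illustrative; the actual PDE-constrained problem is far larger.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimize (1/2) x^T A x - b^T x for SPD A, i.e. solve A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # initial search direction
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # exact line search step
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # A-conjugate direction update
        r = r_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
x = conjugate_gradient(M, rhs)
assert np.allclose(M @ x, rhs)
```

For an n-dimensional SPD system, the iteration converges in at most n steps in exact arithmetic, which is why it suits large discretized optimality systems.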
Procedia PDF Downloads 259
1586 Performance Evaluation of Refinement Method for Wideband Two-Beams Formation
Authors: C. Bunsanit
Abstract:
This paper presents a refinement method for the two-beam formation of a wideband smart antenna. The refinement of the weighting coefficients is based on fully spatial signal processing, taking the Inverse Discrete Fourier Transform (IDFT), and its simulation results are presented using MATLAB. The radiation pattern is created by multiplying the incoming signal with real weights and then summing them together. These real weighting coefficients are computed by the IDFT method; however, the range of weight values is relatively wide. Therefore, the refinement method is used to reduce this range. The radiation pattern is controlled by five input parameters: the maximum weighting coefficient, the wideband signal, the direction of the main beam, the beamwidth, and the maximum minor lobe level. Comparison of the obtained simulation results between using the refinement method and taking only the IDFT shows that the refinement method works well for wideband two-beam formation. Keywords: fully spatial signal processing, beam forming, refinement method, smart antenna, weighting coefficient, wideband
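The plain IDFT step (before any refinement) can be sketched as follows. The element count, the two sampled beam regions, and the mirrored-passband choice (which is what makes the IDFT weights come out real) are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

N = 16                       # number of array elements (assumed)
desired = np.zeros(N)        # desired pattern sampled at N DFT points
desired[[2, 3]] = 1.0        # first beam region
desired[[13, 14]] = 1.0      # mirrored region (k and N-k) so weights are real

w = np.fft.ifft(desired)     # weighting coefficients via IDFT
assert np.max(np.abs(w.imag)) < 1e-12   # real weights, as the method requires

# The forward DFT of the weights reproduces the sampled two-beam pattern.
assert np.allclose(np.fft.fft(w).real, desired)
```

A refinement stage would then adjust `w` to shrink the spread of coefficient values while keeping the main beams and side-lobe constraints satisfied.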
Procedia PDF Downloads 226
1585 Modeling of Transformer Winding for Transients: Frequency-Dependent Proximity and Skin Analysis
Authors: Yazid Alkraimeen
Abstract:
Precise prediction of dielectric stresses and high voltages of power transformers requires the accurate calculation of frequency-dependent parameters. A lack of accuracy can result in severe damage to transformer windings. Transient conditions are studied using digital computers, which require the implementation of accurate models. This paper analyzes the computation of frequency-dependent skin and proximity losses included in the transformer winding model, using analytical equations and the Finite Element Method (FEM). A modified formula to calculate the proximity and skin losses is presented. The results of the frequency-dependent parameter calculations are verified using the Finite Element Method. The time-domain transient voltages are obtained using the Numerical Inverse Laplace Transform. The results show that the classical formula for proximity losses overestimates the transient voltages when compared with the results obtained from the modified method on a simple transformer geometry. Keywords: fast front transients, proximity losses, transformer winding modeling, skin losses
Procedia PDF Downloads 139
1584 Modelling and Detecting the Demagnetization Fault in the Permanent Magnet Synchronous Machine Using the Current Signature Analysis
Authors: Yassa Nacera, Badji Abderrezak, Saidoune Abdelmalek, Houassine Hamza
Abstract:
Several kinds of faults can occur in permanent magnet synchronous machine (PMSM) systems: bearing faults, electrical short/open faults, eccentricity faults, and demagnetization faults. A demagnetization fault means that the strength of the permanent magnets (PM) in the PMSM decreases, causing low output torque, which is undesirable for EVs. The fault is caused by physical damage, high-temperature stress, an inverse magnetic field, and aging. Motor current signature analysis (MCSA) is a conventional motor fault detection method based on the extraction of signal features from the stator current. A simulation model of the PMSM under partial demagnetization and uniform demagnetization faults was established, and different degrees of demagnetization fault were simulated. The harmonic analyses using the Fast Fourier Transform (FFT) show that the fault diagnosis method based on harmonic wave analysis is only suitable for the partial demagnetization fault of the PMSM and does not apply to the uniform demagnetization fault. Keywords: permanent magnet, diagnosis, demagnetization, modelling
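The MCSA principle (fault-related harmonics appearing in the stator current spectrum) can be illustrated with a synthetic signal. The 50 Hz supply, the 25 Hz fault component, and its amplitude are assumptions made for illustration only; they are not taken from the paper's simulation model.

```python
import numpy as np

fs, f0 = 5000.0, 50.0                 # sampling rate and supply frequency (assumed)
t = np.arange(0, 1.0, 1 / fs)

healthy = np.sin(2 * np.pi * f0 * t)
# A partial demagnetization fault is assumed, for illustration, to inject an
# extra fractional harmonic at 25 Hz into the stator current.
faulty = healthy + 0.1 * np.sin(2 * np.pi * 25.0 * t)

def amplitude_at(signal, f):
    """Single-sided FFT amplitude at the bin nearest frequency f."""
    spec = 2 * np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

# The fault harmonic stands out clearly against the healthy spectrum.
assert amplitude_at(faulty, 25.0) > 100 * amplitude_at(healthy, 25.0)
```

A diagnosis rule then thresholds the amplitude of the candidate fault bins; as the abstract notes, this only works when the fault actually alters the harmonic content, which is why uniform demagnetization escapes this test.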
Procedia PDF Downloads 68
1583 Product Features Extraction from Opinions According to Time
Authors: Kamal Amarouche, Houda Benbrahim, Ismail Kassou
Abstract:
Nowadays, e-commerce shopping websites have experienced noticeable growth. These websites have gained consumers’ trust. After purchasing a product, many consumers share comments in which opinions about the given product are usually embedded. Research on the automatic management of opinions, which gives suggestions to potential consumers and portrays an image of the product to manufacturers, has been growing recently. Just after a product is launched in the market, the reviews generated around it do not usually contain helpful information, only generic opinions about the product (e.g., telephone: great phone...), in the sense that the product is still in its launching phase. Over time, the product becomes older, and consumers perceive the advantages/disadvantages of each specific product feature; they then generate comments that contain their sentiments about these features. In this paper, we present an unsupervised method to extract the different product features hidden in the opinions which influence a purchase, combining Time Weighting (TW), which depends on the time the opinions were expressed, with Term Frequency-Inverse Document Frequency (TF-IDF). We conduct several experiments using two different datasets about cell phones and hotels. The results show the effectiveness of our automatic feature extraction, as well as its domain-independent characteristic. Keywords: opinion mining, product feature extraction, sentiment analysis, SentiWordNet
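The combination of TF-IDF with a time weight can be sketched as below. The toy reviews, the exponential decay form, and the 180-day half-life are assumptions made for illustration; the paper's exact TW formula is not reproduced here.

```python
import math
from collections import Counter

# Toy review corpus: (term counts, review age in days) -- illustrative only.
reviews = [
    (Counter({"battery": 2, "screen": 1}), 10),
    (Counter({"battery": 1, "great": 1}), 400),
    (Counter({"screen": 2, "camera": 1}), 30),
]

def tf_idf(term, doc, docs):
    tf = doc[term] / sum(doc.values())
    df = sum(1 for d, _ in docs if term in d)
    return tf * math.log(len(docs) / df)

def time_weight(age_days, half_life=180.0):
    # Assumed exponential decay: recent opinions count more than old ones.
    return 0.5 ** (age_days / half_life)

def feature_score(term):
    return sum(tf_idf(term, d, reviews) * time_weight(a) for d, a in reviews)

# A feature repeated in recent reviews outranks a generic term from an old one.
assert feature_score("screen") > feature_score("great")
```

Candidate terms whose combined score clears a threshold would then be kept as product features, mirroring the intuition that feature-specific opinions accumulate as the product ages.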
Procedia PDF Downloads 410
1582 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning
Authors: Madhawa Basnayaka, Jouni Paltakari
Abstract:
Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, higher reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the ability to design accurate yet efficient configurations. The traditional approach to obtaining design parameters for a desired EM response involves iteratively adjusting the design parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are limited in optimizing design parameters efficiently because of their speed and resource consumption. In this work, a deep learning neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested centric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameters prediction. First, the spectrum prediction DNN was trained to minimize the mean square error (MSE). After the training process was completed, the spectrum prediction DNN was able to accurately predict the EM spectrum from the input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameters prediction DNN, and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN.
The model was evaluated using a randomly generated spectrum, and the tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations has been significantly decreased by this approach. Therefore, highly efficient but ultrafast bi-directional DNN models allow rapid and complicated chipless RFID tag designs. Keywords: artificial intelligence, chipless RFID, deep learning, machine learning
Procedia PDF Downloads 50
1581 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures. Keywords: speech denoising, sparse representation, k-singular value decomposition, orthogonal matching pursuit
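A minimal OMP loop with a residue-error stopping rule (the quantity the paper adapts) might look like the sketch below. The random Gaussian dictionary and the fixed tolerance are illustrative stand-ins for the learned K-SVD dictionary and the adaptive threshold of the actual method.

```python
import numpy as np

def omp(D, y, residue_tol):
    """Greedy OMP: add the best-matching atom and re-fit by least squares
    until the residue error drops below the stopping threshold."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    while np.linalg.norm(residual) > residue_tol and len(support) < D.shape[1]:
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # normalized over-complete dictionary
x_true = np.zeros(128)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]    # sparse "spectrum" coefficients
y = D @ x_true

x_hat = omp(D, y, residue_tol=1e-8)
assert np.linalg.norm(D @ x_hat - y) < 1e-6   # residue met the stopping rule
```

In the paper's setting, `residue_tol` is not fixed but adapted from the estimated speech-noise cross-correlation, so the pursuit stops earlier in noisier frames.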
Procedia PDF Downloads 499
1580 Improvement of the Operational Efficiency of Fitness Clubs
Authors: E. V. Kuzmicheva
Abstract:
Attention is concentrated on estimating the service quality of sport services. A typical mathematical model was developed on the basis of the «general economic theory of mass service», accounting for the pedagogical requirements of fitness services. It also took into account the dependence of the number of club members on the floor area of the sport facilities. The final recommendations, applied to a fitness club, resulted in some improvement of sport service quality and an increase in the revenue from club members and in the profit of clubs. Keywords: fitness club, efficiency of operation, facilities, service quality, mass service
Procedia PDF Downloads 509
1579 A Novel Antenna Design for Telemedicine Applications
Authors: Amar Partap Singh Pharwaha, Shweta Rani
Abstract:
To develop a reliable and cost-effective communication platform for telemedicine applications, a novel antenna design is presented using the bacterial foraging optimization (BFO) technique. The proposed antenna geometry is achieved by etching a modified Koch curve fractal shape at the edges and a square slot at the center of the radiating element of a patch antenna. It has been found that the new antenna achieves a 43.79% size reduction and better resonating characteristics than the original patch. Representative results for both simulations and numerical validations are reported in order to assess the effectiveness of the developed methodology. Keywords: BFO, electrical permittivity, fractals, Koch curve
Procedia PDF Downloads 506
1578 The Consumer's Behavior of Bakery Products in Bangkok
Authors: Jiraporn Weenuttranon
Abstract:
The objectives of this study of the consumer behavior of bakery products in Bangkok are to examine consumer behavior toward bakery products, to study the essential factors that could possibly affect that behavior, and to develop recommendations for the improvement of bakery products. This research is survey research. The population consists of buyers of bakery products in Bangkok, with a probability sample size of 400. The research uses a self-administered questionnaire supported by information technology. The researcher established a questionnaire reliability value of 0.71. The data analysis is done using percentage, mean, and standard deviation, and the hypotheses are tested using the chi-square test. Keywords: consumer, behavior, bakery, standard deviation
Procedia PDF Downloads 482
1577 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes
Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov
Abstract:
Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic collisions of heavy ions. The study of photon radiation is of interest because in most hadron interactions photons fly out as a background to other studied signals. The production of prompt photons in nucleon-nucleon collisions was previously studied in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are born in addition to prompt photons. However, the production of these additional elementary particles makes it difficult to determine accurately the effective cross-section of prompt photon production. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the lower energy of the colliding heavy ions will reduce the number of additionally produced elementary particles. Of particular importance is the study of prompt photon production for determining the gluon content of hadrons, since the photon carries information about the hard subprocess. In the present paper, the production of prompt photons in the Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes is investigated. The matrix elements of the Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes have been written. The squares of the matrix elements of the processes have been calculated in FeynCalc. The phase volume of the subprocesses has been determined, and an expression to calculate the differential cross-section of the subprocesses has been obtained. Given the resulting expressions for the square of the matrix element in the differential cross-section expression, we see that the differential cross-section depends not only on the energy of the colliding protons but also on the mass of the quarks, etc. The differential cross-section of the subprocesses is estimated.
It is shown that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The asymmetry coefficient with respect to the polarization of the colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without the polarization of the colliding protons taken into account are identical. The asymmetry coefficient of this subprocess is zero, which is consistent with the literature data. It is known that in any single-polarization process with a photon, the squares of the matrix elements with and without the polarization of the original particle taken into account must coincide; that is, the terms in the square of the matrix element containing the degree of polarization are equal to zero. The coincidence of the squares of the matrix elements indicates that the parity of the system is preserved. The asymmetry coefficient of the annihilation of the quark-antiquark pair process decreases linearly from positive unity to negative unity with an increasing product of the polarization degrees of the colliding protons. Thus, it was found that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The value of the asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are aligned. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess. Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section
Procedia PDF Downloads 149
1576 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory
Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan
Abstract:
Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach for multimedia data for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text by using the bag-of-words method, calculated using the term frequency-inverse document frequency (TF-IDF). The second is the visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only. Keywords: data fusion, Dempster-Shafer theory, data mining, event detection
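Dempster's rule of combination, which performs the fusion step described above, can be sketched for a two-hypothesis "event / no event" frame. The mass values assigned to the text and image sources are made up for illustration; they are not the paper's numbers.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses, keep non-empty intersections,
    and renormalize by 1 - K, where K is the total conflict."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}

E, NE = frozenset({"event"}), frozenset({"no_event"})
THETA = E | NE                 # the whole frame: mass here models ignorance

m_text = {E: 0.7, NE: 0.1, THETA: 0.2}    # made-up evidence from TF-IDF features
m_image = {E: 0.6, NE: 0.2, THETA: 0.2}   # made-up evidence from SIFT features

fused = dempster_combine(m_text, m_image)
assert fused[E] > max(m_text[E], m_image[E])  # agreeing sources reinforce belief
```

With these masses the fused belief in the event comes out to 0.85, higher than either source alone, which is the mechanism behind the accuracy gain the abstract reports for fusion over single-source detection.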
Procedia PDF Downloads 410
1575 Partial Least Square Regression for High-Dimensional and Highly Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
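To make the latent-variable idea concrete, the following is a minimal single-response PLS (NIPALS) sketch on synthetic data where predictors far outnumber observations; the function name, dimensions, and data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS) sketch: builds latent components for a
    single-response regression where predictors may outnumber samples.
    Hypothetical helper, not the authors' implementation."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # predictor weights
        t = Xk @ w                       # latent component (scores)
        p = Xk.T @ t / (t @ t)           # X loadings
        q = (yk @ t) / (t @ t)           # y loading
        Xk = Xk - np.outer(t, p)         # deflate before the next component
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # regression coefficients mapped back to the original predictor space
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))           # n = 20 samples, p = 100 predictors
beta = np.zeros(100); beta[:5] = 1.0     # only 5 predictors carry signal
y = X @ beta + 0.01 * rng.normal(size=20)
B = pls1(X, y, n_components=5)
pred = (X - X.mean(axis=0)) @ B + y.mean()
print(round(np.corrcoef(pred, y)[0, 1], 3))
```

Because the components summarize the predictive directions among correlated predictors, PLS remains usable in exactly the p >> n settings where ordinary least squares breaks down.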
Procedia PDF Downloads 49
1574 Enhancing Transfer Path Analysis with In-Situ Component Transfer Path Analysis for Interface Forces Identification
Authors: Raef Cherif, Houssine Bakkali, Wafaa El Khatiri, Yacine Yaddaden
Abstract:
The analysis of how vibrations are transmitted between components is required in many engineering applications. Transfer path analysis (TPA) has been a valuable engineering tool for solving noise, vibration, and harshness (NVH) problems in sub-structuring applications. The most challenging part of a TPA analysis is estimating the equivalent forces at the contact points between the active and the passive side. The in-situ component TPA method calculates these forces by inverting the frequency response functions (FRFs) measured at the passive subsystem, relating the motion at indicator points to forces at the interface. However, matrix inversion can pose problems due to the ill-conditioning of the matrices, leading to inaccurate results. This paper establishes a TPA model for an academic system consisting of two plates linked by four springs. A numerical study has been performed to improve the interface force identification. Several parameters are studied and discussed, such as singular value rejection and the number and position of the indicator points used in the matrix inversion.
Keywords: transfer path analysis, matrix inverse method, indicator points, SVD decomposition
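The inversion step, together with singular value rejection, can be sketched as below; the FRF matrix, the indicator/interface point counts, and the rejection tolerance are synthetic assumptions, not the study's measurements:

```python
import numpy as np

# Sketch of interface-force identification by pseudo-inverting an FRF
# matrix with singular-value rejection (truncated SVD). The FRF matrix
# and forces here are synthetic, not measured data.
rng = np.random.default_rng(1)
H = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))  # indicator x interface FRFs
H[:, 3] = H[:, 2] + 1e-8 * rng.normal(size=8)               # near-collinear column -> ill-conditioned
f_true = np.array([1.0, -2.0, 0.5, 0.0])
u = H @ f_true                                               # responses at indicator points

def truncated_pinv_solve(H, u, rel_tol=1e-6):
    """Solve u = H f, rejecting singular values below rel_tol * s_max."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_tol * s[0]
    s_inv = np.where(keep, 1.0 / s, 0.0)    # discard ill-conditioned directions
    return Vh.conj().T @ (s_inv * (U.conj().T @ u))

f_est = truncated_pinv_solve(H, u)
print(np.allclose(H @ f_est, u, atol=1e-6))   # identified forces reproduce the responses
```

The rejected singular directions are precisely the ones amplified by ill-conditioning; discarding them trades a negligible residual for a stable force estimate.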
Procedia PDF Downloads 84
1573 A Self Organized Map Method to Classify Auditory-Color Synesthesia from Frontal Lobe Brain Blood Volume
Authors: Takashi Kaburagi, Takamasa Komura, Yosuke Kurihara
Abstract:
Absolute pitch is the ability to identify a musical note without a reference tone. Training for absolute pitch often occurs in preschool education. It is necessary to clarify how well the trainee can make use of synesthesia in order to evaluate the effect of the training. To the best of our knowledge, there are no existing methods for objectively confirming whether the subject is using synesthesia. Therefore, in this study, we present a method to distinguish the use of color-auditory synesthesia from the separate use of color and audition during absolute pitch training. This method measures blood volume in the prefrontal cortex using functional near-infrared spectroscopy (fNIRS) and assumes that the cognitive process has two parts, a non-linear step and a linear step. For the linear step, we assume a second-order ordinary differential equation. For the non-linear part, it is extremely difficult, if not impossible, to create an inverse filter of such a complex system as the brain. Therefore, we apply a method based on a self-organizing map (SOM) that is guided by the available data. The presented method was tested using 15 subjects, and the estimation accuracy is reported.
Keywords: absolute pitch, functional near-infrared spectroscopy, prefrontal cortex, synesthesia
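A data-guided SOM of the general kind invoked here can be sketched as follows; this is a generic 1-D map on synthetic two-cluster data, with the node count, learning rates, and decay schedules chosen arbitrarily for illustration, not the authors' classifier:

```python
import numpy as np

def train_som(data, n_nodes=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D self-organizing map sketch (illustrative only).
    Code-book vectors arrange themselves along the data distribution."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_nodes, data.shape[1]))    # code-book vectors
    grid = np.arange(n_nodes)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5      # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))
            W += lr * h[:, None] * (x - W)           # pull BMU and neighbours toward x
    return W

# two well-separated synthetic clusters stand in for the two cognitive regimes
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
W = train_som(data, n_nodes=4)
qe = np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in data])
print("mean quantization error:", round(qe, 3))
```

After training, each sample's best-matching unit can serve as a simple class label, which is the sense in which a SOM classifies without an explicit inverse model.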
Procedia PDF Downloads 263
1572 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-fed Sesame (Sesamum indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray
Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu
Abstract:
Sesame is an important oilseed crop in Ethiopia; it is the second most exported agricultural commodity next to coffee. However, soil fertility management is poor, and the crop lacks a research-led farming system. The AquaCrop model was applied as a decision-support tool; it performs a semi-quantitative approach to simulate the yield of crops under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. Four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha) and three improved varieties (Setit-1, Setit-2, and Humera-1) were tested. Growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results on the canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%.
The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model might be an important tool for improved soil fertility management and yield enhancement strategies for sesame. Hence, the model might be applied as a decision-support tool in soil fertility management in sesame production.
Keywords: aquacrop model, sesame, normalized water productivity, nitrogen fertilizer
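The performance statistics named above can be computed as below; E and D are read here as the Nash-Sutcliffe model efficiency and Willmott's index of agreement, which is a common but assumed interpretation of the abbreviations, and the observation/simulation values are invented:

```python
import numpy as np

def fit_stats(obs, sim):
    """Goodness-of-fit statistics of the kind used to judge AquaCrop runs:
    R^2, RMSE, normalized RMSE (% of observed mean), Nash-Sutcliffe
    efficiency E, and Willmott's degree of agreement D."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    rmse = np.sqrt(np.mean(resid**2))
    nrmse = 100 * rmse / obs.mean()
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    e = 1 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    d = 1 - np.sum(resid**2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)
    return {"R2": r2, "RMSE": rmse, "NRMSE_%": nrmse, "E": e, "D": d}

obs = [120, 135, 150, 160, 170, 180]   # e.g. observed soil water content, mm
sim = [118, 138, 149, 158, 173, 178]   # e.g. simulated values (synthetic)
print({k: round(v, 3) for k, v in fit_stats(obs, sim).items()})
```

Both E and D approach 1 for a perfect simulation, which is why values such as E = 0.78-0.94 and D = 0.95-0.99 are read as acceptable model performance.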
Procedia PDF Downloads 75
1571 Perceived Social Support, Resilience and Relapse Risk in Recovered Addicts
Authors: Islah Ud Din, Amna Bibi
Abstract:
The current study was carried out to examine perceived social support, resilience, and relapse risk in recovered addicts. A purposive sampling technique was used to collect data from recovered addicts. The Multidimensional Scale of Perceived Social Support was used to measure perceived social support. The Brief Resilience Scale (BRS) was used to assess resilience. The Stimulant Relapse Risk Scale (SRRS) was used to examine relapse risk. Resilience and perceived social support have substantial positive correlations, whereas relapse risk and perceived social support have a significant negative association. Relapse risk and resilience have a strong inverse connection. Regression analysis was used to check the mediating effect of resilience between perceived social support and relapse risk. The findings revealed that perceived social support negatively predicted relapse risk. Results showed that resilience acts as a partial mediator between perceived social support and relapse risk. This research will allow us to explore and understand relapse risk factors and the role of perceived social support and resilience in recovered addicts. The study's findings have immediate consequences for the prevention of relapse. The study will play a significant part in drug rehabilitation centers, clinical settings, and further research.
Keywords: perceived social support, resilience, relapse risk, recovered addicts, drug addiction
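A regression-based mediation check of the kind described can be sketched as follows; the data are simulated and the variable names and effect sizes are illustrative, but the decomposition (total effect = direct effect + indirect effect) is an exact algebraic identity for linear OLS:

```python
import numpy as np

def ols_slope(y, X):
    """Least-squares slope(s) of y on X with an intercept.
    Illustrative helper, not the study's analysis."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# simulate: support raises resilience, and both lower relapse risk
rng = np.random.default_rng(3)
n = 500
support = rng.normal(size=n)                                   # perceived social support
resilience = 0.6 * support + rng.normal(scale=0.8, size=n)
relapse = -0.5 * resilience - 0.2 * support + rng.normal(scale=0.8, size=n)

c = ols_slope(relapse, support)[0]                             # total effect
a = ols_slope(resilience, support)[0]                          # support -> resilience
b, c_prime = ols_slope(relapse, np.column_stack([resilience, support]))
indirect = a * b                                               # effect carried by the mediator
print(round(c, 2), round(c_prime, 2))  # direct effect shrinks when the mediator enters
```

The direct effect c' remaining nonzero while |c'| < |c| is exactly the partial-mediation pattern the abstract reports.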
Procedia PDF Downloads 35
1570 Labyrinth Fractal on a Convex Quadrilateral
Authors: Harsha Gopalakrishnan, Srijanani Anurag Prasad
Abstract:
Quadrilateral labyrinth fractals are a new type of fractal introduced in this paper. They belong to a unique class of fractals defined on any plane quadrilateral, inspired by previously studied labyrinth fractals on the unit square and triangle. This work describes how to construct a quadrilateral labyrinth fractal and examines the circumstances under which it can be understood as the attractor of an iterated function system. Furthermore, some of its topological properties and the Hausdorff and box-counting dimensions of quadrilateral labyrinth fractals are studied.
Keywords: fractals, labyrinth fractals, dendrites, iterated function system, Hausdorff dimension, box-counting dimension, non-self similar, non-self affine, connected, path connected
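The attractor-of-an-IFS viewpoint can be illustrated with a chaos-game sketch on a convex quadrilateral; note this simple four-map system is only an illustration of the IFS idea, not the labyrinth construction itself (in the unit-square case, four contractions of ratio 1/3 toward the corners give an attractor of Hausdorff dimension log 4 / log 3):

```python
import numpy as np

# Chaos game: iterate random contractions toward the vertices of a
# convex quadrilateral; the visited points accumulate on the IFS
# attractor. Vertices and seed are arbitrary illustrative choices.
corners = np.array([[0.0, 0.0], [4.0, 0.5], [3.5, 3.0], [0.5, 2.5]])
rng = np.random.default_rng(4)
p = np.array([1.0, 1.0])     # any starting point inside the quadrilateral
pts = []
for _ in range(5000):
    c = corners[rng.integers(4)]
    p = c + (p - c) / 3.0    # contract toward a random vertex (ratio 1/3)
    pts.append(p)
points = np.array(pts)
print(points.min(axis=0), points.max(axis=0))  # attractor stays inside the quadrilateral
```

Because every step is a convex combination of the current point and a vertex, the orbit never leaves the quadrilateral, which is the elementary version of the attractor being contained in the initial convex region.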
Procedia PDF Downloads 76
1569 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-Fed Sesame (Sesamum Indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray, Ethiopia
Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu
Abstract:
Sesame is an important oilseed crop in Ethiopia; it is the second most exported agricultural commodity next to coffee. However, soil fertility management is poor, and the crop lacks a research-led farming system. The AquaCrop model was applied as a decision-support tool, which performs a semi-quantitative approach to simulate the yield of crops under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. Four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha) and three improved varieties (Setit-1, Setit-2, and Humera-1) were tested. Growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results on the canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%.
The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model might be an important tool for improved soil fertility management and yield enhancement strategies for sesame. Hence, the model might be applied as a decision-support tool in soil fertility management in sesame production.
Keywords: aquacrop model, normalized water productivity, nitrogen fertilizer, canopy cover, sesame
Procedia PDF Downloads 79
1568 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis
Authors: Hyun-Ho Lee, Kee-Won Kim
Abstract:
Arithmetic operations over GF(2m) have been extensively used in error-correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division, and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit. The other operations are much more complex. Multiplication is the most important operation for cryptosystems, such as the elliptic curve cryptosystem, since exponentiation, division, and multiplicative inversion can be performed by computing multiplications iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over a finite field using a redundant basis. Based on the multiplication algorithm, we also present an efficient semi-systolic multiplier over a finite field. The multiplier has lower space and time complexities compared to related multipliers. As compared to the corresponding existing structures, the multiplier saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over a finite field, such as inversion and division.
Keywords: finite field, Montgomery multiplication, systolic array, cryptography
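The point that inversion (and hence division) reduces to iterated multiplication can be illustrated with a plain polynomial-basis GF(2^8) sketch; this is schoolbook shift-and-reduce arithmetic, not the paper's redundant-basis semi-systolic Montgomery design:

```python
# GF(2^8) with the AES irreducible polynomial x^8 + x^4 + x^3 + x + 1.
# Field elements are bit masks of polynomial coefficients over GF(2).
M = 8
IRRED = 0b100011011

def gf_mul(a, b):
    """Carry-less shift-and-add multiply, reducing modulo IRRED."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a             # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1                  # multiply a by x
        if a & (1 << M):         # degree reached m: reduce
            a ^= IRRED
    return acc

def gf_pow(a, e):
    """Square-and-multiply exponentiation built purely from gf_mul."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

x = 0x57
inv = gf_pow(x, 2**M - 2)        # inversion via repeated multiplication: x^(2^m - 2)
print(hex(gf_mul(0x57, 0x83)), gf_mul(x, inv) == 1)  # -> 0xc1 True
```

This is why a fast multiplier is the key primitive: with it in place, inversion and division cost only a chain of multiplications, which is the motivation for optimizing the multiplier's area-time complexity.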
Procedia PDF Downloads 294
1567 The Univalence Principle: Equivalent Mathematical Structures Are Indistinguishable
Authors: Michael Shulman, Paige North, Benedikt Ahrens, Dimitris Tsementzis
Abstract:
The Univalence Principle is the statement that equivalent mathematical structures are indistinguishable. We prove a general version of this principle that applies to all set-based, categorical, and higher-categorical structures defined in a non-algebraic and space-based style, as well as models of higher-order theories such as topological spaces. In particular, we formulate a general definition of indiscernibility for objects of any such structure, and a corresponding univalence condition that generalizes Rezk’s completeness condition for Segal spaces and ensures that all equivalences of structures are levelwise equivalences. Our work builds on Makkai’s First-Order Logic with Dependent Sorts, but is expressed in Voevodsky’s Univalent Foundations (UF), extending previous work on the Structure Identity Principle and univalent categories in UF. This enables indistinguishability to be expressed simply as identification, and yields a formal theory that is interpretable in classical homotopy theory, but also in other higher topos models. It follows that Univalent Foundations is a fully equivalence-invariant foundation for higher-categorical mathematics, as intended by Voevodsky.
Keywords: category theory, higher structures, inverse category, univalence
Procedia PDF Downloads 151
1566 Lifetime Assessment of Highly Efficient Metal-Based Air-Diffuser through Accelerated Degradation Test
Authors: Jinyoung Choi, Tae-Ho Yoon, Sunmook Lee
Abstract:
Degradation of standard oxygen transfer efficiency (SOTE) with time was observed to assess the lifetime of a metal-based air-diffuser, which replaced a polymer composite-based air-diffuser in order to attain a longer lifetime in the actual field. Degradation of the air-diffuser occurs when it fails to form small, uniform air bubbles, as the patterns formed on the disc of the air-diffuser deteriorate and/or change from their initial shapes under the continuous air-blowing conditions of field operation. Therefore, the lifetime assessment of the metal-based air-diffuser was carried out through an accelerated degradation test, accelerating the air-blowing conditions to 200 L/min, 300 L/min, and 400 L/min, and the lifetime under the normal operating condition of 120 L/min was predicted. The Weibull distribution was found to be the most appropriate one for describing the lifetime distribution of the metal-based air-diffuser in the present study. The shape and scale parameters indicated that the accelerated blowing conditions were all within the acceleration domain. The lifetime was predicted by adopting an inverse power model for the stress-life relationship and was estimated to be B10 = 94,004 hrs with CL = 95%. Acknowledgement: This work was financially supported by the Ministry of Trade, Industry and Energy (Grant number: N0001475).
Keywords: accelerated degradation test, air-diffuser, lifetime assessment, SOTE
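The accelerated-test extrapolation can be sketched as follows on synthetic data: a Weibull characteristic life is estimated at each accelerated flow rate, an inverse power law is fitted through those estimates, and the B10 life is extrapolated to the normal 120 L/min condition. All numbers, and the assumed-known shape parameter, are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
beta = 2.0                                   # Weibull shape, assumed known here
rates = np.array([200.0, 300.0, 400.0])      # accelerated air-flow rates, L/min
true_eta = 5e9 / rates**2                    # synthetic inverse-power ground truth
samples = {r: e * rng.weibull(beta, 50) for r, e in zip(rates, true_eta)}

# MLE of the Weibull scale when the shape is known: eta = (mean(t^beta))^(1/beta)
eta_hat = np.array([(np.mean(t**beta)) ** (1 / beta) for t in samples.values()])

# inverse power model: log(eta) = log(K) - n * log(rate), fitted by least squares
slope, intercept = np.polyfit(np.log(rates), np.log(eta_hat), 1)
eta_120 = np.exp(intercept + slope * np.log(120.0))   # extrapolate to normal use
b10 = eta_120 * (-np.log(0.9)) ** (1 / beta)          # time to 10% failure probability
print(round(slope, 2), round(b10))           # ground-truth exponent is 2 here
```

The B10 life (the time by which 10% of units are expected to fail) follows directly from the extrapolated scale, which is how a prediction like B10 = 94,004 hrs at 120 L/min can be obtained without testing at the normal condition itself.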
Procedia PDF Downloads 562
1565 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake
Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel
Abstract:
Kathmandu Valley, which contains Nepal's capital city, houses numerous historical monuments as well as religious structures, some as old as the 4th century A.D. The city alone is home to seven UNESCO World Heritage Sites, including various public squares and religious sanctums, which are often regarded as living heritage by historians and archaeological explorers. On April 25, 2015, the capital city and nearby locations were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by the strongest aftershock of moment magnitude (Mw) 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings as to the causes of the failures and collapses. Field reconnaissance was carried out immediately after the main shock and the aftershock at major heritage sites: the UNESCO World Heritage Sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite such catastrophe, a significant number of heritage structures stood high, performing very well during the earthquake. Preliminary reports from the archaeological department suggest that 721 such structures were severely affected, of which 444 were within the valley, including 76 structures that completely collapsed. This study presents recorded accelerograms and the geology of Kathmandu Valley. The structural typology and architecture of the heritage structures in Kathmandu Valley are briefly described. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed in this paper. It was observed that the performance of heritage structures was influenced by multiple factors, such as structural and architectural typology, configuration and structural deficiency, local ground site effects and ground motion characteristics, age and maintenance level, and material quality.
Most of these heritage structures are of masonry type, using bricks with earth mortar as the bonding agent. The walls' resistance is mainly compressive; they are capable of withstanding vertical static gravitational loads but not horizontal dynamic seismic loads. There was no definitive pattern of damage to heritage structures, as most of them behaved as composite structures. Some structures were extensively damaged in some locations, while structures with a similar configuration at a nearby location had little or no damage. Among the major heritage structures, dome, pagoda (2-, 3- or 5-tiered temples), and shikhara structures were studied with similar variables. Comparing the varying degrees of damage in such structures, it was found that shikhara structures were the most vulnerable, whereas dome structures were the most stable, followed by pagoda structures. The seismic performance of the masonry-timber and stone masonry structures was slightly better than that of the masonry structures. Regular maintenance and periodic seismic retrofitting seem to have played a pivotal role in strengthening the seismic performance of the structures. The study also recommends some key measures to strengthen the seismic performance of such structures through studies based on structural analysis, building material behaviour, and retrofitting details. The results also recognise the importance of documenting traditional knowledge and its revised transformation into modern technology.
Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building
Procedia PDF Downloads 151