Search results for: universal method
16402 Concentration of D-Pinitol from Carob Kibble Using Submerged Fermentation by Saccharomyces cerevisiae
Authors: Thi Huong Vu, Vijay Jayasena, Zhongxiang Fang, Gary Dykes
Abstract:
D-pinitol (3-O-methyl ether of D-chiro-inositol) has been known to have health benefits for diabetic patients. Carob kibble has received attention due to the presence of high-value D-pinitol and polyphenol antioxidants. D-pinitol was concentrated from carob kibble using submerged fermentation with Saccharomyces cerevisiae. Total carbohydrates and D-pinitol were determined by the phenol-sulphuric acid method and HPLC, respectively. The content of D-pinitol increased from approximately 43 to 70 mg/g dry weight after fermentation. The yeast consumed over 70% of the total carbohydrates in carob kibble without any negative effect on the D-pinitol content. Substrate medium pH values ranging from 5.0 to 7.0 had no significant effect on the removal of carbohydrates and D-pinitol. This method may provide a practical solution for the production of D-pinitol from carob in a cost-effective manner.
Keywords: carob kibble, D-pinitol, Saccharomyces cerevisiae, submerged fermentation, total carbohydrates
Procedia PDF Downloads 322
16401 Studies on Separation of Scandium from Sulfate Environment Using Ion Exchange Technique
Authors: H. Hajmohammadi , A. H. Jafari, M. Eskandari Nasab
Abstract:
The ion exchange method was used to assess scandium adsorption from sulfate media prepared from laboratory-grade materials. The Taguchi method was employed for determining the optimum conditions for scandium adsorption. Results show that the optimum conditions for scandium adsorption from sulfate media were obtained with Purolite C100 cationic resin in 0.1 g/l sulfuric acid and a scandium concentration of 2 g/l at 25 °C. Studies also showed that lowering the H₂SO₄ concentration and the aqueous phase temperature leads to an increase in Sc adsorption. Visual Minteq software was used to ascertain the various possible cation types and the effect of the concentration of scandium ion species on scandium adsorption by cationic resins. The simulation results of this software show that the scandium ion species are predominantly cationic, which is consistent with the experimental data.
Keywords: scandium, ion exchange resin, simulation, leach copper
Procedia PDF Downloads 142
16400 Spectroscopic Characterization of Indium-Tin Laser Ablated Plasma
Authors: Muhammad Hanif, Muhammad Salik
Abstract:
In the present research work, we present the optical emission studies of the Indium (In)-Tin (Sn) plasma produced by the first (1064 nm) harmonic of an Nd:YAG nanosecond pulsed laser. The experimentally observed line profiles of neutral Indium (InI) and Tin (SnI) are used to extract the electron temperature (Te) using the Boltzmann plot method, whereas the electron number density (Ne) has been determined from the Stark broadening of the line profiles. The Te is calculated by varying the distance from the target surface along the line of propagation of the plasma plume and also by varying the laser irradiance. In addition, we have studied the variation of Ne as a function of laser irradiance as well as its variation with distance from the target surface.
Keywords: indium-tin plasma, laser ablation, optical emission spectroscopy, electron temperature, electron number density
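A minimal sketch of the Boltzmann plot step described above, assuming hypothetical line data (the wavelengths, intensities, statistical weights, transition probabilities and upper-level energies below are illustrative values, not the measured InI/SnI lines): the logarithm of Iλ/(gA) is fitted against the upper-level energy, and the slope of the fit gives -1/(k_B·Te).

```python
import numpy as np

# Hypothetical data for a few neutral emission lines (illustrative values only):
# wavelength (nm), measured line intensity (a.u.), statistical weight g_k,
# transition probability A_ki (s^-1) and upper-level energy E_k (eV).
wavelength_nm = np.array([303.9, 325.6, 410.2, 451.1])
intensity = np.array([1200.0, 345.0, 144.0, 84.0])
g_k = np.array([4.0, 2.0, 4.0, 6.0])
A_ki = np.array([1.3e8, 1.0e8, 5.6e7, 3.0e7])
E_k_eV = np.array([4.08, 4.30, 5.06, 5.28])

k_B_eV = 8.617333e-5  # Boltzmann constant in eV/K

# Boltzmann plot: ln(I * lambda / (g_k * A_ki)) = -E_k / (k_B * Te) + const
y = np.log(intensity * wavelength_nm / (g_k * A_ki))
slope, _ = np.polyfit(E_k_eV, y, 1)

Te = -1.0 / (k_B_eV * slope)
print(f"Estimated electron temperature: {Te:.0f} K")
```

The Ne estimate would follow analogously, from the measured Stark-broadened line width divided by the tabulated Stark broadening parameter of the chosen line.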
Procedia PDF Downloads 529
16399 Effect of Design Parameters on Porpoising Instability of a High Speed Planing Craft
Authors: Lokeswara Rao P., Naga Venkata Rakesh N., V. Anantha Subramanian
Abstract:
It is important to estimate, predict, and avoid the dynamic instability of high speed planing crafts. It is known that design parameters like the relative location of the centre of gravity with respect to the dynamic lift centre and the length to beam ratio of the craft have an influence on the tendency to porpoise. This paper analyzes the hydrodynamic performance on the basis of the semi-empirical Savitsky method and also estimates the same by numerical simulations based on Reynolds Averaged Navier Stokes (RANS) equations using a commercial code, namely STAR-CCM+. Through the same numerical simulation, considering dynamic equilibrium, the paper examines the changing running trim, which results in porpoising. Some interesting results emerge from the study, and this leads to early detection of the instability.
Keywords: CFD, planing hull, porpoising, Savitsky method
Procedia PDF Downloads 180
16398 Structural-Geotechnical Effects of the Foundation of a Medium-Height Structure
Authors: Valentina Rodas, Luis Almache
Abstract:
The interaction effects between the existing soil and the substructure of a 5-story building with one underground level were evaluated in such a way that the structural-geotechnical concepts were validated through the method of impedance factors with a program based on the finite element method. The continuous wall-type foundation had a constant thickness and followed inclined and orthogonal directions, while the ground had homogeneous and medium-type characteristics. The soil considered was type C according to the Ecuadorian Construction Standard (NEC), and the corresponding foundation comprised a depth of 4.00 meters and a basement wall thickness of 40 centimeters. This project is part of a mid-rise building in the city of Azogues (Ecuador). The hypotheses raised responded to the objectives in such a way that the model implemented with springs had a variation with respect to the embedded base, obtaining conservative results.
Keywords: interaction, soil, substructure, springs, effects, modeling, embedment
Procedia PDF Downloads 230
16397 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation
Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong
Abstract:
In this paper, we propose two new confidence intervals for the difference between the inverse of normal means with known coefficients of variation. One of these confidence intervals is constructed based on the generalized confidence interval, and the other is constructed based on the closed form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.
Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation
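As a rough illustration of how coverage probability and expected length are typically estimated by Monte Carlo simulation: the interval below is a generic Wald-type placeholder for the difference of two normal means, not the paper's generalized or closed-form intervals for the difference of inverse means, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_ci_difference_of_means(x, y):
    """Placeholder 95% interval for the difference of two normal means;
    the paper's intervals (GCI, closed-form variance) would replace this."""
    z = 1.959963984540054  # 97.5% standard normal quantile
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return diff - z * se, diff + z * se

def coverage_and_length(mu1, mu2, sigma1, sigma2, n, n_sim=10_000):
    true_param = mu1 - mu2
    hits, lengths = 0, 0.0
    for _ in range(n_sim):
        x = rng.normal(mu1, sigma1, n)
        y = rng.normal(mu2, sigma2, n)
        lo, hi = wald_ci_difference_of_means(x, y)
        hits += (lo <= true_param <= hi)
        lengths += hi - lo
    return hits / n_sim, lengths / n_sim

cp, el = coverage_and_length(mu1=5.0, mu2=4.0, sigma1=1.0, sigma2=1.5, n=30)
print(f"coverage probability ≈ {cp:.3f}, expected length ≈ {el:.3f}")
```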
Procedia PDF Downloads 309
16396 [Keynote Talk]: Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method
Authors: Vijay Kumar Kukreja, Ravneet Kaur
Abstract:
In this paper, a one dimensional advection diffusion model is analyzed using the finite difference method based on the Crank-Nicolson scheme. A practical problem of filter cake washing from chemical engineering is analyzed. The model is converted into dimensionless form. For the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson spatial derivative scheme is used in the space domain and the forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent, stable, first order accurate in time and second order accurate in the space domain. For a test problem, numerical results are compared with the analytical ones for different values of the parameter.
Keywords: Crank-Nicolson scheme, Lax-Richtmyer theorem, stability, consistency, Peclet number, Gershgorin circle
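A minimal sketch of a Crank-Nicolson finite difference solver for a one dimensional advection-diffusion equation, assuming the simple dimensionless form u_t + v u_x = D u_xx on [0, 1] with zero Dirichlet boundaries; the parameter values and initial pulse are illustrative, not the paper's filter cake washing data.

```python
import numpy as np

# Model problem: du/dt + v du/dx = D d2u/dx2 on x in [0, 1], zero Dirichlet ends.
v, D = 1.0, 0.01
nx, nt, T = 101, 200, 0.5
dx, dt = 1.0 / (nx - 1), T / nt
x = np.linspace(0.0, 1.0, nx)

# Discrete spatial operator L (central differences), interior nodes only.
N = nx - 2
main = np.full(N, -2.0 * D / dx**2)
upper = np.full(N - 1, D / dx**2 - v / (2.0 * dx))
lower = np.full(N - 1, D / dx**2 + v / (2.0 * dx))
L = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

Id = np.eye(N)
A = Id - 0.5 * dt * L   # implicit (left-hand) Crank-Nicolson matrix
B = Id + 0.5 * dt * L   # explicit (right-hand) Crank-Nicolson matrix

u = np.exp(-200.0 * (x - 0.2) ** 2)  # initial Gaussian pulse
for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])  # boundary values stay at zero

print("peak location after advection and diffusion:", x[np.argmax(u)])
```

With these values the cell Peclet number v·dx/(2D) is 0.5; keeping it near or below one avoids spurious oscillations from the central-difference advection term.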
Procedia PDF Downloads 223
16395 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis
Authors: Yao Cheng, Weihua Zhang
Abstract:
Although the measured vibration signal contains rich information on machine health conditions, the white noise interferences and the discrete harmonics coming from blade, shaft and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interferences of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with adaptive noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. Firstly, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN can provide an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, in order to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interferences are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured from a test rig. The analysis results indicate that the proposed method has strong practicability.
Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution
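A loose sketch of the decomposition, IMF selection and envelope analysis steps, assuming the PyEMD (EMD-signal) package provides the CEEMDAN implementation; the toy signal, sampling rate, fault frequency and correlation threshold are all illustrative, and the MOMED enhancement step is omitted.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import CEEMDAN  # assumes the PyEMD (EMD-signal) package is installed

fs = 12_000                              # sampling rate (Hz), hypothetical
t = np.arange(0, 0.2, 1.0 / fs)
fault_freq = 90.0                        # assumed fault characteristic frequency
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1.0 / fault_freq) < 5e-4)
signal = impulses + 0.5 * np.sin(2 * np.pi * 30 * t) + 0.3 * np.random.randn(t.size)

# 1) Adaptive decomposition into IMFs and a residue.
imfs = CEEMDAN(trials=20)(signal)

# 2) Sensitivity index: keep the IMFs well correlated with the raw signal.
corr = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
composite = imfs[corr >= 0.5 * corr.max()].sum(axis=0)

# 3) Envelope spectrum of the composite signal (MOMED step omitted here).
envelope = np.abs(hilbert(composite))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
print("dominant envelope frequency:", freqs[np.argmax(spectrum)], "Hz")
```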
Procedia PDF Downloads 374
16394 The Wellness Wheel: A Tool to Reimagine Schooling
Authors: Jennifer F. Moore
Abstract:
The wellness wheel as a tool for school growth and change is currently being piloted by a startup school in Chicago, IL. In this case study, members of the school community engaged in the appreciative inquiry process to plan their organizational development around the wellness wheel. The wellness wheel (comprising physical, emotional, social, spiritual, environmental, cognitive, and financial wellness) is used as a planning tool by teachers, students, parents, and administrators. Through the appreciative inquiry method of change, the community is reflecting on their individual levels of wellness and developing organizational structures to ensure the well-being of children and adults. The goal of the case study is to test the appropriateness of the use of appreciative inquiry (as a method) and the wellness wheel (as a tool) for school growth and development. Findings of the case study will be available by the time of the conference. The research is currently in progress.
Keywords: education, schools, well-being, wellness
Procedia PDF Downloads 178
16393 The Complete Modal Derivatives
Authors: Sebastian Andersen, Peter N. Poulsen
Abstract:
The use of basis projection in structural dynamic analysis is frequently applied. The purpose of the method is to improve the computational efficiency, while maintaining a high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematic nonlinear systems with a focus on two widely used basis vectors: the system mode shapes and their modal derivatives. Particularly the latter basis vectors are given special attention, since only approximate modal derivatives have been used until now. In the present work the complete modal derivatives, derived from perturbation methods, are presented and compared to the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated by an example of a harmonically loaded kinematic nonlinear structure modeled by beam elements.
Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives
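A minimal sketch of the basis projection step itself, assuming a toy stiffness/mass pair standing in for a finite element beam model; in the paper the basis would additionally be enriched with the (complete) modal derivatives, which are not computed here.

```python
import numpy as np
from scipy.linalg import eigh

# Toy stiffness and mass matrices (hypothetical values, not a real beam model).
n = 50
K = 1e4 * (np.diag(np.full(n, 2.0))
           + np.diag(np.full(n - 1, -1.0), 1)
           + np.diag(np.full(n - 1, -1.0), -1))
M = 0.1 * np.eye(n)

# System mode shapes from the generalized eigenvalue problem K*phi = w^2 * M*phi.
_, eigvecs = eigh(K, M)
Phi = eigvecs[:, :4]            # basis: the four lowest mode shapes
# (The paper's basis would append modal derivative vectors to Phi.)

K_r = Phi.T @ K @ Phi           # reduced stiffness
f = np.zeros(n); f[-1] = 1.0    # end load
q = np.linalg.solve(K_r, Phi.T @ f)   # reduced-coordinate solution
u_approx = Phi @ q                    # back-projection to full coordinates
print("reduced system size:", K_r.shape, "end displacement ≈", u_approx[-1])
```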
Procedia PDF Downloads 237
16392 Stress Variation of Underground Building Structure during Top-Down Construction
Authors: Soo-yeon Seo, Seol-ki Kim, Su-jin Jung
Abstract:
In the construction of a building, it is necessary to minimize the construction period and to secure enough work space for stacking materials during construction, especially in city areas. Accordingly, various top-down construction methods have been developed and are widely used in Korea. This paper investigates the stress variation of the underground structure of a building constructed by using SPS (Strut as Permanent System), known as a top-down method in Korea, through an analytical approach. Various types of earth pressure distribution related to the ground condition were considered in the structural analysis of an example structure at each step of the excavation. From the analysis, the highest member forces acting on the beams were found when the ground type was medium sandy soil, and a stress concentration was found in the corner area.
Keywords: construction of building, top-down construction method, earth pressure distribution, member force, stress concentration
Procedia PDF Downloads 306
16391 Method of Cluster Based Cross-Domain Knowledge Acquisition for Biologically Inspired Design
Authors: Shen Jian, Hu Jie, Ma Jin, Peng Ying Hong, Fang Yi, Liu Wen Hai
Abstract:
Biologically inspired design inspires inventions and new technologies in the field of engineering by mimicking functions, principles, and structures in the biological domain. To deal with the obstacles of cross-domain knowledge acquisition in the existing biologically inspired design process, functional semantic clustering based on functional feature semantic correlation and environmental constraint clustering composition based on environmental characteristic constraining adaptability are proposed. A knowledge cell clustering algorithm and the corresponding prototype system are developed. Finally, the effectiveness of the method is verified by the design of a visual prosthetic device.
Keywords: knowledge clustering, knowledge acquisition, knowledge based engineering, knowledge cell, biologically inspired design
Procedia PDF Downloads 426
16390 Farmers’ Use of Indigenous Knowledge System (IKS) for Selected Arable Crops Production in Ondo State
Authors: A. M. Omoare, E. O. Fakoya
Abstract:
This study sought to determine the use of indigenous knowledge for selected arable crops production in Ondo State. A multistage sampling method was used, and 112 arable crop farmers were systematically selected. Data were analyzed using both descriptive and inferential statistics. The results showed that the majority of the sampled farmers were male (75.90%). About 75% were married with children. A large proportion of them (62.61%) were within the ages of 30-49 years. Most of them had spent about 10 years in farming (58.92%). The highest raw scores of use of indigenous knowledge were found in planting on mounds in yam production, use of native medicine and the scare-crow method in controlling birds in rice production, timely planting of locally developed resistant varieties in cassava production, and soaking of maize seeds in water to determine their viability, with raw scores of 313, 310, 305, 303, and 300 respectively, while the lowest raw score was obtained for the use of the bell method in controlling birds in rice production, with a raw score of 210. The findings established that proverbs (59.8%) and taboos (55.36%) were the most commonly used media for transmitting indigenous knowledge by arable crop farmers. The multiple regression analysis revealed that the age of the farmers and farming experience had a significant relationship with the farmers' use of indigenous knowledge, which gave R² = 0.83 for the semi-log functional form of the equation, the lead equation. The policy implication is that indigenous knowledge should provide a basis for designing modern technologies to enhance sustainable agricultural development.
Keywords: arable crop production, extent of use, indigenous knowledge, farming experience
Procedia PDF Downloads 571
16389 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its large availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue. It is consequently hard to automatically quantify mammographic breast density. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique incidence digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the limit of the pectoral muscle, followed by the active contour method. The seed of the active contour was placed at the limit of the pectoral muscle found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of segmentation of the proposed method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, supporting the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, Hough transform, pectoral muscle
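A loose sketch of the thresholding / Hough transform / active contour pipeline described above, written against scikit-image rather than the authors' Matlab® implementation; the image orientation, window parameters and snake settings are assumptions, and the boundary is assumed to be a non-vertical straight line before refinement.

```python
import numpy as np
from skimage import filters, transform, segmentation

def segment_pectoral_muscle(img):
    """Sketch: threshold -> Hough line for the muscle boundary -> active contour.
    `img` is a 2D MLO mammogram array (float), pectoral muscle near one corner."""
    # 1) Thresholding to suppress non-biological background.
    mask = img > filters.threshold_otsu(img)
    # 2) Hough transform on an edge map to get a straight-line boundary estimate.
    edges = filters.sobel(img * mask)
    h, angles, dists = transform.hough_line(edges > edges.mean())
    _, peak_angles, peak_dists = transform.hough_line_peaks(h, angles, dists,
                                                            num_peaks=1)
    angle, dist = peak_angles[0], peak_dists[0]
    # 3) Seed a snake along the detected line and let it relax onto the true edge.
    rows = np.linspace(0, img.shape[0] - 1, 200)
    cols = (dist - rows * np.sin(angle)) / np.cos(angle)
    init = np.stack([rows, np.clip(cols, 0, img.shape[1] - 1)], axis=1)  # (row, col)
    return segmentation.active_contour(filters.gaussian(img, 3), init,
                                       alpha=0.01, beta=1.0)

def jaccard(a, b):
    """Jaccard index between automatic and manual masks (boolean arrays)."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```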
Procedia PDF Downloads 350
16388 Earnings vs Cash Flows: The Valuation Perspective
Authors: Megha Agarwal
Abstract:
The research paper is an effort to compare the earnings based and cash flow based methods of valuation of an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), Abnormal Earnings Growth Model (AEGM), Residual Operating Income Method (ReOIM), Abnormal Operating Income Growth Model (AOIGM) and their extension multipliers such as the price/earnings ratio and price/book value ratio, or the cash flow based models, such as the Dividend Valuation Method (DVM) and Free Cash Flow Method (FCFM), all provide different estimates of the valuation of the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years from 2008-09 to 2011-12 has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting based model AOIGM provides the closest forecasts. These different estimates may be derived due to inconsistencies in the discount rate, growth rates and the other forecasted variables. Although inputs for earnings based models may be available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by the internal management. The estimation of value from more stable parameters such as residual operating income and RNOA could be considered superior to the valuations from more volatile return on equity.
Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)
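As a rough numerical illustration of the residual earnings family of models mentioned above, the sketch below applies the generic form V₀ = B₀ + Σ (Eₜ − r·Bₜ₋₁)/(1+r)ᵗ plus a continuing value; the book value, earnings forecasts, payout ratio, discount rate and growth rate are purely hypothetical, not RIL's figures.

```python
# Hypothetical inputs: opening book value, forecast earnings, payout ratio,
# cost of equity r, and terminal growth rate g for residual earnings.
book_value = 100.0
forecast_earnings = [14.0, 15.5, 17.0, 18.2]   # years 1..4
payout_ratio = 0.30
r, g = 0.10, 0.04

value = book_value
for t, earnings in enumerate(forecast_earnings, start=1):
    residual_earnings = earnings - r * book_value          # RE_t = E_t - r * B_{t-1}
    value += residual_earnings / (1.0 + r) ** t
    book_value += earnings * (1.0 - payout_ratio)          # clean-surplus update

# Continuing (terminal) value of residual earnings beyond the forecast horizon.
re_next = forecast_earnings[-1] * (1.0 + g) - r * book_value
value += re_next / (r - g) / (1.0 + r) ** len(forecast_earnings)
print(f"Residual earnings valuation (sketch): {value:.2f}")
```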
Procedia PDF Downloads 376
16387 Improvement of Soft Clay Soil with Biopolymer
Authors: Majid Bagherinia
Abstract:
Lime and cement are frequently used as binders in the Deep Mixing Method (DMM) to improve soft clay soils. The most significant disadvantages of these materials are carbon dioxide emissions and the consumption of natural resources. In this study, three different biopolymers, guar gum, locust bean gum, and sodium alginate, were investigated for the improvement of soft clay using DMM. In the experimental study, the effects of the additive ratio and curing time on the Unconfined Compressive Strength (UCS) of stabilized specimens were investigated. According to the results, the UCS values of the specimens increased as the additive ratio and curing time increased. The most effective additive was sodium alginate, and the highest strength was obtained after 28 days.
Keywords: deep mixing method, soft clays, ground improvement, biopolymers, unconfined compressive strength
Procedia PDF Downloads 80
16386 Degradation of Neonicotinoid Insecticides (Acetamiprid and Imidacloprid) Using Biochar of Rice Husk and Fruit Peels
Authors: Mateen Abbas, Abdul Muqeet Khan, Sadia Bashir, Muhammad Awais Khalid, Aamir Ghafoor, Zara Hussain, Mashal Shahid
Abstract:
The irrational use of insecticides in everyday life has drawn attention worldwide towards their harmful effects. To mitigate the toxic effects of insecticides on humans, the present study was planned on the degradation/detoxification of the neonicotinoid insecticides imidacloprid and acetamiprid. Biocarbon of fruit peels (banana and watermelon) and biochar (activated or non-activated) of rice husk were utilized as adsorbents for the degradation of the selected pesticides. Both activated and non-activated biochar were prepared for treatment and then applied at different concentrations (0.5 to 2.0 ppm) and dosages (1.0 to 2.5 g) to the insecticides (acetamiprid and imidacloprid), as well as studied at different times (30-120 minutes). Reverse Phase-High Performance Liquid Chromatography (RP-HPLC) coupled with a photodiode array detector was used to quantify the insecticides. Results showed that activated biochar of rice husk reduced the concentrations of both insecticides by 73%, whereas activated watermelon biocarbon degraded 72% of imidacloprid and 56% of acetamiprid. The results proved the efficiency of the method employed, and it was also inferred that a higher concentration of biocarbon resulted in a larger percentage of degradation. The applied method is cheap, easy and accessible and can be used to minimize pesticide residues in animal feed. Degradation using biochar proved to be a significant, eco-friendly and economical method to reduce the toxicity of insecticides.
Keywords: insecticides, acetamiprid, imidacloprid, biochar, HPLC
Procedia PDF Downloads 153
16385 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling
Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon
Abstract:
This paper discusses the effects of surface roughness on a cantilever beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon using deep reactive ion etching (DRIE) at large etch depths, rougher surfaces are observed as a result of increased response in process pressure, amount of coil power and increased helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 µm of silicon, the root mean square roughness parameter, Sq, varies by 1 to 3 µm (at 100 µm thickness) across a 6-inch wafer. Given this Sq variation, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while having no significant effect on the output power. The significant shift in the resonant frequency implies that careful consideration of surface roughness from fabrication processes must be made when designing energy harvesters.
Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester
Procedia PDF Downloads 121
16384 Structural and Optical Characterization of Silica@PbS Core–Shell Nanoparticles
Authors: A. Pourahmad, Sh. Gharipour
Abstract:
The present work describes the preparation and characterization of nanosized SiO2@PbS core-shell particles by using a simple wet chemical route. This method utilizes silica sphere formation followed by lead sulphide shell layer formation assisted by the successive ionic layer adsorption and reaction method. The final product was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV–vis spectroscopy, infrared spectroscopy (IR) and transmission electron microscopy (TEM) experiments. The morphological studies revealed uniformity in the size distribution, with a core size of 250 nm and a shell thickness of 18 nm. The electron microscopic images also indicate the irregular morphology of the lead sulphide shell layer. The structural studies indicate the face-centered cubic system of the PbS shell with no trace of impurities in the crystal structure.
Keywords: core-shell, nanostructure, semiconductor, optical property, XRD
Procedia PDF Downloads 299
16383 Theoretical Analysis of Photoassisted Field Emission near the Metal Surface Using Transfer Hamiltonian Method
Authors: Rosangliana Chawngthu, Ramkumar K. Thapa
Abstract:
A model calculation of the photoassisted field emission current (PFEC) by using the transfer Hamiltonian method will be presented here. Photon energy is incident on the surface of the metal such that the energy of a photon is usually less than the work function of the metal under investigation. The incident radiation photoexcites the electrons to a final state which lies below the vacuum level; the electrons are confined within the metal surface. A strong static electric field is then applied to the surface of the metal, which causes the photoexcited electrons to tunnel through the surface potential barrier into the vacuum region and constitutes the considerable current called the photoassisted field emission current. The incident radiation, usually a laser beam, causes the transition of electrons from the initial state to the final state, and the matrix element for this transition will be written. For the calculation of the PFEC, the transfer Hamiltonian method is used. The initial state wavefunction is calculated by using the Kronig-Penney potential model. The effect of the matrix element will also be studied. An appropriate dielectric model for the surface region of the metal will be used for the evaluation of the vector potential. A FORTRAN programme is used for the calculation of the PFEC. The results will be checked against experimental data and other theoretical results.
Keywords: photoassisted field emission, transfer Hamiltonian, vector potential, wavefunction
Procedia PDF Downloads 226
16382 Nonlinear Adaptive PID Control for a Semi-Batch Reactor Based on an RBF Network
Authors: Magdi. M. Nabi, Ding-Li Yu
Abstract:
Control of a semi-batch polymerization reactor using an adaptive radial basis function (RBF) neural network method is investigated in this paper. A neural network inverse model is used to estimate the valve position of the reactor; this method can identify the controlled system with the RBF neural network identifier. The weights of the adaptive PID controller are adjusted online based on the identification of the plant and the self-learning capability of the RBFNN. A PID controller is used in the feedback loop to regulate the actual temperature by compensating the output of the neural network inverse model. Simulation results show that the proposed control has strong adaptability, robustness and satisfactory control performance, and that effective control of the nonlinear system is achieved.
Keywords: Chylla-Haase polymerization reactor, RBF neural networks, feed-forward, feedback control
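A toy sketch of the feed-forward/feedback idea described above: an online-trained Gaussian RBF network learns an inverse model of a stand-in first-order nonlinear plant (not the Chylla-Haase reactor model), and its output is added to a fixed-gain PID feedback term. The plant, gains, centers and learning rate are all assumptions made for illustration.

```python
import numpy as np

class RBFN:
    """Tiny Gaussian RBF network trained online by gradient descent."""
    def __init__(self, centers, width=0.5, lr=0.05):
        self.c = np.asarray(centers, dtype=float)
        self.w = np.zeros(len(self.c))
        self.width, self.lr = width, lr

    def _phi(self, x):
        return np.exp(-((x - self.c) ** 2) / (2.0 * self.width ** 2))

    def predict(self, x):
        return self._phi(x) @ self.w

    def update(self, x, target):
        self.w += self.lr * (target - self.predict(x)) * self._phi(x)

def plant(y, u):
    """Toy first-order nonlinear plant standing in for the reactor temperature."""
    return y + 0.1 * (-0.5 * y + np.tanh(u))

rbf = RBFN(centers=np.linspace(-2.0, 2.0, 15))
kp, ki, kd = 1.2, 0.05, 0.1
setpoint, y, integral, prev_err = 1.0, 0.0, 0.0, 0.0

for _ in range(1000):
    err = setpoint - y
    integral += err
    u_fb = kp * err + ki * integral + kd * (err - prev_err)  # PID feedback term
    u_ff = rbf.predict(setpoint)       # feed-forward from the learned inverse model
    u = u_fb + u_ff
    prev_err = err
    y_next = plant(y, u)
    rbf.update(y_next, u)              # inverse model: map plant output -> input
    y = y_next

print(f"final output ≈ {y:.3f} (setpoint {setpoint})")
```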
Procedia PDF Downloads 702
16381 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling
Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud
Abstract:
Objective: Dry needling (DN) has long been used as a treatment method for various musculoskeletal pain conditions. However, the evidence level of the studies was low due to the limitations of the methodology. Lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results due to the targeted experimental procedure from its placebo effect is needed to enhance the validity of the trial. Therefore, this study aimed to validate the method as a placebo ultrasound (US)-guided DN for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned into three groups with a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects received the same procedure of US-guided DN, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results to those who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated measures analysis of covariance (ANCOVA) for time effects and group effects. The data regarding the perception of receiving real US-guided DN or placebo US-guided DN were analyzed by the chi-squared test. The missing data were analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perceptions as with the use of real US guidance in the advancement of DN (p<0.128). G1 had significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks (both p<0.05) only. There was no significant difference between G2 and G3 at 8 weeks (both p>0.05). Conclusion: The method with the US monitor turned off during the application of DN is credible for blinding the participants and allowing researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the effects of US-guided DN, with short-term effects of pain reduction for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy
Procedia PDF Downloads 120
16380 A Study on the Performance of 2-PC-D Classification Model
Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli
Abstract:
There are many applications of the principal component method for reducing a large set of variables in various fields. Fisher's discriminant function is also a popular tool for classification. In this research, the researcher focuses on studying the performance of the Principal Component-Fisher's Discriminant function in helping to classify rice kernels into their defined classes. The data were collected on the smells or odour of the rice kernels using an odour-detection sensor, Cyranose. 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model, between principal components and the linear discriminant, performs as a classification model. The principal component method was used to reduce all 32 variables to a smaller and manageable set of components. Then, the reduced components were used to develop the Fisher's discriminant function. In this research, there are 4 defined classes of rice kernel, which are Aromatic, Brown, Ordinary and Others. Based on the output from the principal component method, the 32 variables were reduced to only 2 components. Based on the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-Discriminant function. Indirectly, this indicates that the classification model developed misclassified more than 50% of the observations. In conclusion, the Fisher's discriminant function that was built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
Keywords: classification model, discriminant function, principal component analysis, variable reduction
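A minimal sketch of the 2-PC-D pipeline (PCA down to two components followed by Fisher's linear discriminant), assuming scikit-learn; the 32-variable "e-nose" data below are random stand-ins used only to show the shape of the computation, not the Cyranose measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the e-nose data: 32 sensor variables, 4 rice classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 4, size=200)   # 0: Aromatic, 1: Brown, 2: Ordinary, 3: Others

# 2-PC-D: reduce the 32 variables to 2 principal components, then apply
# Fisher's discriminant for classification.
model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated classification rate: {scores.mean():.2%}")
```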
Procedia PDF Downloads 332
16379 The Effects of Culture and Language on Social Impression Formation from Voice Pleasantness: A Study with French and Iranian People
Authors: L. Bruckert, A. Mansourzadeh
Abstract:
The voice has a major influence on interpersonal communication in everyday life via the perception of pleasantness. The evolutionary perspective postulates that the mechanisms underlying pleasantness judgments are universal adaptations that have evolved in the service of choosing a mate (through the process of sexual selection). From this point of view, the favorite voices would be those with more marked sexually dimorphic characteristics, for example, lower-pitched voices in men, with pitch being the main criterion. On the other hand, one can postulate that the mechanisms involved are gradually established from childhood onward through exposure to the environment, and thus the prosodic elements could take precedence in everyday communication, as they convey information about the speaker's attitude (willingness to communicate, interest toward the interlocutors). Our study focuses on voice pleasantness and its relationship with social impression formation, exploring both the spectral aspects (pitch, timbre) and the prosodic ones. In our study, we recorded the voices through two vocal corpora (five vowels and a reading text) of 25 French males speaking French and 25 Iranian males speaking Farsi. French listeners (40 male/40 female) listened to the French voices and made a judgment either on the voice's pleasantness or on the speaker (judgment about his intelligence, honesty, sociability). The regression analyses from our acoustic measures showed that the prosodic elements (for example, the intonation and the speech rate) are the most important criteria concerning pleasantness, whatever the corpus or the listener's gender. Moreover, the correlation analyses showed that the speakers with the voices judged as the most pleasant are considered the most intelligent, sociable, and honest. The voices in Farsi were judged by 80 other French listeners (40 male/40 female), and we found the same effect of intonation concerning the judgment of pleasantness with the corpus «vowel», whereas with the corpus «text» the pitch is more important than the prosody. This may suggest that voice perception contains some elements that are invariant across culture/language, whereas others are influenced by the cultural/linguistic background of the listener. In the near future, Iranian listeners will be asked to listen either to the French voices (for half of them) or to the Farsi voices (for the other half) and produce the same judgments as the French listeners. This experimental design could potentially make it possible to distinguish what is linked to culture and what is linked to language in the case of differences in voice perception.
Keywords: cross-cultural psychology, impression formation, pleasantness, voice perception
Procedia PDF Downloads 69
16378 The Effect of Body Positioning on Upper-Limb Arterial Occlusion Pressure and the Reliability of the Method during Blood Flow Restriction Training
Authors: Stefanos Karanasios, Charkleia Koutri, Maria Moutzouri, Sofia A. Xergia, Vasiliki Sakellari, George Gioftsos
Abstract:
The precise calculation of the arterial occlusion pressure (AOP) is a critical step to accurately prescribe individualized pressures during blood flow restriction training (BFRT). AOP is usually measured in a supine position before training; however, previous reports suggested a significant influence of body position on lower limb AOP. The aim of the study was to investigate the effect of three different body positions on upper limb AOP and the reliability of the method for its standardization in clinical practice. Forty-two healthy participants (mean age: 28.1, SD: ±7.7) underwent measurements of upper limb AOP in supine, seated, and standing positions by three blinded raters. A cuff with a manual pump and a pocket Doppler ultrasound were used. A significantly higher upper limb AOP was found in the seated compared with the supine position (p < 0.031) and in the supine compared with the standing position (p < 0.031) by all raters. An excellent intraclass correlation coefficient (0.858–0.984, p < 0.001) was found in all positions. Upper limb AOP is strongly dependent on body position changes. The appropriate measurement position should be selected to accurately calculate AOP before BFRT. The excellent inter-rater reliability and repeatability of the method suggest reliable and consistent results across repeated measurements.
Keywords: Kaatsu training, blood flow restriction training, arterial occlusion, reliability
Procedia PDF Downloads 212
16377 High-Yield Synthesis of Nanohybrid Shish-Kebab of Polyethylene on Carbon NanoFillers
Authors: Dilip Depan, Austin Simoneaux, William Chirdon, Ahmed Khattab
Abstract:
In this study, we present a novel approach to synthesize polymer nanocomposites with a nanohybrid shish-kebab (NHSK) architecture. For this, low-density and high-density polyethylene (PE) were crystallized on various carbon nano-fillers using a novel and convenient method to prepare high-yield NHSK. Polymer crystals grew epitaxially on the carbon nano-fillers using a solution crystallization method. The mixture of polymer and carbon fillers in xylene was flocculated and precipitated in ethanol to improve the product yield. Carbon nanofillers of varying diameter were also used as a nucleating template for polymer crystallization. The morphology of the prepared nanocomposites was characterized by scanning electron microscopy (SEM), while differential scanning calorimetry (DSC) was used to quantify the amount of crystalline polymer. Interestingly, whatever the diameter of the carbon nanofiller, the lamellae of PE are always perpendicular to the long axis of the nanofiller. Surface area analysis was performed using BET. Our results indicated that carbon nanofillers of varying diameter can be used to effectively nucleate the crystallization of the polymer. The effect of the molecular weight and concentration of the polymer was discussed on the basis of the chain mobility and crystallization capability of the polymer matrix. Our work shows a facile, rapid, yet high-yield production method to form polymer nanocomposites and reveals the application potential of the NHSK architecture.
Keywords: carbon nanotubes, polyethylene, nanohybrid shish-kebab, crystallization, morphology
Procedia PDF Downloads 329
16376 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images
Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou
Abstract:
This study proposes a generative adversarial network (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. Then the GAN framework incorporates adversarial learning to train a feature generator for the minority class by playing a minimax game with a discriminator. The feature generator then generates features for the minority class from arbitrary latent distributions to balance the data between the majority class and the minority class. Additionally, a data cleaning technique, i.e., Tomek links, is employed to clean up undesirable conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for the training. The experiment section evaluates the proposed method on two medical image analysis tasks, i.e., mass classification on mammograms and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance over the state-of-the-art counterparts. Compared to all counterparts, our proposed method improves accuracy by more than 1.5 percentage points.
Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning
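A rough sketch of the feature-space augmentation idea, assuming PyTorch: a small generator/discriminator pair is trained adversarially on minority-class feature vectors (random stand-ins here, not features from a trained extractor). The synthetic features would then be pooled with the real ones and cleaned with Tomek links (for example via imblearn's TomekLinks) before training the classifier; the dimensions, learning rates and step counts are assumptions.

```python
import torch
import torch.nn as nn

# Minimal GAN that generates synthetic minority-class *feature vectors*, not images.
feat_dim, latent_dim = 128, 32
minority_feats = torch.randn(200, feat_dim)   # stand-in for extracted CNN features

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = minority_feats[torch.randint(0, len(minority_feats), (64,))]
    fake = G(torch.randn(64, latent_dim))
    # Discriminator: label real features 1, generated features 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Generate enough synthetic minority features to balance the classes; the pooled
# set would then be cleaned, e.g. imblearn.under_sampling.TomekLinks().fit_resample.
synthetic = G(torch.randn(800, latent_dim)).detach()
print("synthetic minority feature batch:", synthetic.shape)
```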
Procedia PDF Downloads 127
16375 An Integrated Architecture of E-Learning System to Digitize the Learning Method
Authors: M. Touhidul Islam Sarker, Mohammod Abul Kashem
Abstract:
The purpose of this paper is to improve the e-learning system and digitize the learning method in the educational sector. The learner logs into the e-learning platform and easily accesses the digital content; the content can be downloaded, and an assessment can be taken for evaluation. Learners can also access these digital resources using a tablet, computer, or smartphone. An e-learning system can be defined as teaching and learning with the help of multimedia technologies and the internet through access to digital content. E-learning is replacing the traditional education system through information and communication technology-based learning. This paper has designed and implemented an integrated e-learning system architecture with a University Management System. Moodle (Modular Object-Oriented Dynamic Learning Environment) is the best e-learning system, but the problem with Moodle is that it has no school or university management system. In this research, we have not considered school students because they lack internet facilities; that is why we considered university students, because they have internet access and use these technologies. The University Management System has different types of activities, such as student registration, account management, teacher information, semester registration, staff information, etc. If we integrate these types of activities or modules with Moodle, then we can overcome the problem of Moodle, and it will enhance the e-learning system architecture, which makes effective use of technology. This architecture will allow the learner to easily access the resources of the e-learning platform anytime and anywhere, which digitizes the learning method.
Keywords: database, e-learning, LMS, Moodle
Procedia PDF Downloads 188
16374 An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection
Authors: Weihao Wang, Zhulin Zong
Abstract:
Two-dimensional ordered statistical constant false alarm detection is a widely used method for detecting weak target signals in radar signal processing applications. The method is based on analyzing the statistical characteristics of the noise and clutter present in the radar signal and then using this information to set an appropriate detection threshold. In this approach, the reference cell of the unit to be detected is divided into several reference subunits. These subunits are used to estimate the noise level and adjust the detection threshold, with the aim of minimizing the false alarm rate. By using an ordered statistical approach, the method is able to effectively suppress the influence of clutter and noise, resulting in a low false alarm rate. The detection process involves a number of steps, including filtering the input radar signal to remove any noise or clutter, estimating the noise level based on the statistical characteristics of the reference subunits, and finally, setting the detection threshold based on the estimated noise level. One of the main advantages of two-dimensional ordered statistical constant false alarm detection is its ability to detect weak target signals in the presence of strong clutter and noise. This is achieved by carefully analyzing the statistical properties of the signal and using an ordered statistical approach to estimate the noise level and adjust the detection threshold. In conclusion, two-dimensional ordered statistical constant false alarm detection is a powerful technique for detecting weak target signals in radar signal processing applications. By dividing the reference cell into several subunits and using an ordered statistical approach to estimate the noise level and adjust the detection threshold, this method is able to effectively suppress the influence of clutter and noise and maintain a low false alarm rate.
Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals
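A compact sketch of an ordered-statistic CFAR detector on a two-dimensional power map: for each cell under test, the surrounding training cells (excluding a guard band) are sorted and a chosen ordered value sets the threshold. The window sizes, quantile and scaling factor are hypothetical, and the paper's improved reference-subunit scheme is not reproduced here.

```python
import numpy as np

def os_cfar_2d(power_map, guard=2, train=4, k_quantile=0.75, scale=6.0):
    """Ordered-statistic CFAR on a 2D range-Doppler power map (sketch)."""
    rows, cols = power_map.shape
    detections = np.zeros_like(power_map, dtype=bool)
    w = guard + train
    for r in range(w, rows - w):
        for c in range(w, cols - w):
            window = power_map[r - w:r + w + 1, c - w:c + w + 1].copy()
            # Blank out the guard band and the cell under test itself.
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
            ref = np.sort(window[~np.isnan(window)])
            noise_level = ref[int(k_quantile * (len(ref) - 1))]  # ordered statistic
            detections[r, c] = power_map[r, c] > scale * noise_level
    return detections

# Toy example: exponential noise floor with one strong target cell.
rng = np.random.default_rng(1)
pm = rng.exponential(1.0, size=(64, 64))
pm[32, 32] = 60.0
print("detected cells:", np.argwhere(os_cfar_2d(pm)))
```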
Procedia PDF Downloads 78
16373 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the regulation of the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems has to be introduced by the end of 2016. These directions have been implemented in the member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to bad communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of the installation of heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and the correlation of each relevant parameter, as well as in cases where the problem is too complex for human intelligence to solve. A special method of machine learning, the decision tree method, has proven an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method.
Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest method
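A minimal sketch of how a tree-based model could be used to isolate the modelled effect of allocator installation, assuming scikit-learn and a purely synthetic per-unit dataset; the feature set, sample size and effect size are invented for illustration and are not the Croatian district heating data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical per-unit dataset: degree days, floor area, floor level, allocator flag.
rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.normal(2800, 300, n),   # heating degree days
    rng.uniform(40, 120, n),    # unit floor area (m2)
    rng.integers(0, 10, n),     # floor level
    rng.integers(0, 2, n),      # heat cost allocator installed (0/1)
])
# Synthetic annual consumption: area-driven, slightly lower with allocators installed.
y = 0.09 * X[:, 0] + 1.8 * X[:, 1] - 15.0 * X[:, 3] + rng.normal(0, 20, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out units:", round(r2_score(y_te, model.predict(X_te)), 3))

# Isolate the modelled effect of allocator installation for a single unit.
unit = X_te[0]
with_hca, without_hca = unit.copy(), unit.copy()
with_hca[3], without_hca[3] = 1, 0
saving = model.predict([without_hca])[0] - model.predict([with_hca])[0]
print("predicted saving attributed to allocators:", round(saving, 1))
```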
Procedia PDF Downloads 249