Search results for: the COS method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18930

16410 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation

Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose two new confidence intervals for the difference between the inverses of normal means with known coefficients of variation. One of the two confidence intervals is constructed based on the generalized confidence interval, and the other is constructed based on the closed-form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.
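
As a quick illustration of the Monte Carlo evaluation described above, the sketch below simulates two normal samples with known coefficients of variation, forms an interval for the difference of the inverse means, and records coverage probability and expected length. A simple delta-method interval is used as a stand-in for the paper's generalized and closed-form intervals, and all parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_ci_diff(x, y, cv1, cv2, z=1.96):
    """Wald-type CI for 1/mu1 - 1/mu2 via the delta method, using the known CVs.
    A simple placeholder for the paper's generalized / closed-form intervals."""
    n1, n2 = len(x), len(y)
    m1, m2 = x.mean(), y.mean()
    # Var(1/xbar) ~ sigma^2 / (n * mu^4), with sigma = CV * mu evaluated at xbar
    var = (cv1 * m1) ** 2 / (n1 * m1 ** 4) + (cv2 * m2) ** 2 / (n2 * m2 ** 4)
    d = 1.0 / m1 - 1.0 / m2
    half = z * np.sqrt(var)
    return d - half, d + half

def simulate(mu1=5.0, mu2=4.0, cv1=0.1, cv2=0.1, n1=30, n2=30, reps=5000):
    theta = 1.0 / mu1 - 1.0 / mu2          # true difference of inverse means
    hits, length = 0, 0.0
    for _ in range(reps):
        x = rng.normal(mu1, cv1 * mu1, n1)
        y = rng.normal(mu2, cv2 * mu2, n2)
        lo, hi = delta_ci_diff(x, y, cv1, cv2)
        hits += lo <= theta <= hi
        length += hi - lo
    return hits / reps, length / reps      # coverage probability, expected length

print(simulate())
```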

Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation

Procedia PDF Downloads 309
16409 [Keynote Talk]: Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method

Authors: Vijay Kumar Kukreja, Ravneet Kaur

Abstract:

In this paper, a one-dimensional advection-diffusion model is analyzed using a finite difference method based on the Crank-Nicolson scheme. A practical chemical engineering problem, the washing of a filter cake, is analyzed. The model is converted into dimensionless form. On the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson scheme is used for the spatial derivatives and a forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent, stable, first order accurate in time, and second order accurate in the space domain. For a test problem, the numerical results are compared with the analytical ones for different values of the parameter.
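
The sketch below shows the kind of discretization described above for a dimensionless 1-D advection-diffusion equation: central differences in space with Crank-Nicolson time stepping on [0, 1] × [0, T]. The Peclet number, grid sizes, and boundary values are illustrative assumptions, not the filter-cake washing parameters of the paper.

```python
import numpy as np

# Dimensionless 1-D advection-diffusion: u_t + Pe * u_x = u_xx on x in [0, 1],
# discretized with central differences in space and Crank-Nicolson time stepping.
Pe, nx, nt, T = 10.0, 101, 200, 1.0          # illustrative values
dx, dt = 1.0 / (nx - 1), T / nt
x = np.linspace(0.0, 1.0, nx)

# Spatial operator A: A u = -Pe * u_x + u_xx (central differences, interior nodes)
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = 1.0 / dx**2 + Pe / (2 * dx)
    A[i, i]     = -2.0 / dx**2
    A[i, i + 1] = 1.0 / dx**2 - Pe / (2 * dx)

I = np.eye(nx)
lhs = I - 0.5 * dt * A                        # (I - dt/2 A) u^{n+1} = (I + dt/2 A) u^n
rhs = I + 0.5 * dt * A
lhs[0, :], lhs[-1, :] = 0.0, 0.0              # Dirichlet rows
lhs[0, 0], lhs[-1, -1] = 1.0, 1.0

u = np.zeros(nx)
u[0] = 1.0                                    # inlet condition u(0, t) = 1
for _ in range(nt):
    b = rhs @ u
    b[0], b[-1] = 1.0, 0.0                    # enforce boundary values
    u = np.linalg.solve(lhs, b)

print(u[::10])                                # sampled concentration profile
```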

Keywords: Crank-Nicolson scheme, Lax-Richtmyer theorem, stability, consistency, Peclet number, Gershgorin circle

Procedia PDF Downloads 223
16408 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis

Authors: Yao Cheng, Weihua Zhang

Abstract:

Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from the blade, shaft, and mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of these unwanted components, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interferences are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured on a test rig. The analysis results indicate that the proposed method is highly practical.
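
A minimal sketch of the CEEMDAN-plus-correlation step described above is given below, assuming the third-party PyEMD package (installed as EMD-signal) for the decomposition; the MOMED enhancement is omitted, and a synthetic impulsive signal stands in for the measured bearing vibration.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import CEEMDAN          # assumed third-party package (pip install EMD-signal)

fs = 4096                                            # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
# Simulated fault signal: ~100 Hz train of decaying rings, plus a 50 Hz harmonic
# and white noise, standing in for a measured bearing vibration signal.
impulses = np.zeros_like(t)
impulses[::41] = 1.0                                 # impact roughly every 10 ms
ring = np.exp(-300 * t[:200]) * np.sin(2 * np.pi * 1000 * t[:200])
signal = (np.convolve(impulses, ring, mode="same")
          + 0.5 * np.sin(2 * np.pi * 50 * t)
          + 0.3 * np.random.randn(t.size))

imfs = CEEMDAN(trials=30)(signal)                    # adaptive decomposition into IMFs

# Sensitivity index: Pearson correlation between each IMF and the raw signal
corr = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
order = np.argsort(corr)[::-1][:3]                   # keep the three most sensitive IMFs
composite = imfs[order].sum(axis=0)

# Envelope spectrum of the composite signal (the MOMED enhancement is omitted here)
envelope = np.abs(hilbert(composite))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[spectrum.argmax()])
```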

Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution

Procedia PDF Downloads 374
16407 The Wellness Wheel: A Tool to Reimagine Schooling

Authors: Jennifer F. Moore

Abstract:

The wellness wheel as a tool for school growth and change is currently being piloted by a startup school in Chicago, IL. In this case study, members of the school community engaged in the appreciative inquiry process to plan their organizational development around the wellness wheel. The wellness wheel (comprising physical, emotional, social, spiritual, environmental, cognitive, and financial wellness) is used as a planning tool by teachers, students, parents, and administrators. Through the appreciative inquiry method of change, the community is reflecting on their individual levels of wellness and developing organizational structures to ensure the well-being of children and adults. The goal of the case study is to test the appropriateness of appreciative inquiry (as a method) and the wellness wheel (as a tool) for school growth and development. Findings of the case study will be available by the conference; the research is currently in progress.

Keywords: education, schools, well-being, wellness

Procedia PDF Downloads 178
16406 The Complete Modal Derivatives

Authors: Sebastian Andersen, Peter N. Poulsen

Abstract:

Basis projection is frequently applied in structural dynamic analysis. The purpose of the method is to improve computational efficiency, while maintaining high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematic nonlinear systems, with a focus on two widely used sets of basis vectors: the system mode shapes and their modal derivatives. The latter basis vectors are given special attention, since only approximate modal derivatives have been used until now. In the present work the complete modal derivatives, derived from perturbation methods, are presented and compared to the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated with an example of a harmonically loaded kinematic nonlinear structure modeled by beam elements.
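
For orientation, the sketch below shows plain Galerkin basis projection onto mode shapes for a toy two-degree-of-freedom system; the complete modal derivatives derived in the paper are not reproduced here, so this is only the reduction machinery they augment.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF spring-mass chain; not the beam model from the paper.
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Mode shapes from the generalized eigenproblem K phi = omega^2 M phi
omega2, Phi = eigh(K, M)

# Keep the first mode as the reduction basis V (mode shapes only; the paper
# augments this basis with modal derivatives for kinematic nonlinear problems).
V = Phi[:, :1]
M_r = V.T @ M @ V            # reduced mass matrix
K_r = V.T @ K @ V            # reduced stiffness matrix

print("natural frequencies:", np.sqrt(omega2))
print("reduced matrices:", M_r.ravel(), K_r.ravel())
```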

Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives

Procedia PDF Downloads 237
16405 Stress Variation of Underground Building Structure during Top-Down Construction

Authors: Soo-yeon Seo, Seol-ki Kim, Su-jin Jung

Abstract:

In the construction of a building, it is necessary to minimize the construction period and to secure enough work space for stacking materials, especially in city areas. To this end, various top-down construction methods have been developed and are widely used in Korea. This paper investigates, through an analytical approach, the stress variation of the underground structure of a building constructed using SPS (Strut as Permanent System), known as a top-down method in Korea. Various types of earth pressure distribution related to the ground condition were considered in the structural analysis of an example structure at each step of the excavation. From the analysis, the highest member forces acting on the beams were found when the ground type was medium sandy soil, and a stress concentration was found in the corner areas.

Keywords: construction of building, top-down construction method, earth pressure distribution, member force, stress concentration

Procedia PDF Downloads 305
16404 Method of Cluster Based Cross-Domain Knowledge Acquisition for Biologically Inspired Design

Authors: Shen Jian, Hu Jie, Ma Jin, Peng Ying Hong, Fang Yi, Liu Wen Hai

Abstract:

Biologically inspired design inspires inventions and new technologies in the field of engineering by mimicking functions, principles, and structures in the biological domain. To deal with the obstacles to cross-domain knowledge acquisition in the existing biologically inspired design process, functional semantic clustering based on functional feature semantic correlation and environmental constraint clustering composition based on environmental characteristic constraining adaptability are proposed. A knowledge cell clustering algorithm and the corresponding prototype system are developed. Finally, the effectiveness of the method is verified through the design of a visual prosthetic device.

Keywords: knowledge clustering, knowledge acquisition, knowledge based engineering, knowledge cell, biologically inspired design

Procedia PDF Downloads 426
16403 Farmers’ Use of Indigenous Knowledge System (IKS) for Selected Arable Crops Production in Ondo State

Authors: A. M. Omoare, E. O. Fakoya

Abstract:

This study sought to determine the use of indigenous knowledge for selected arable crops production in Ondo State. A multistage sampling method was used, and 112 arable crop farmers were systematically selected. Data were analyzed using both descriptive and inferential statistics. The results showed that the majority of the sampled farmers were male (75.90%). About 75% were married with children. A large proportion of them (62.61%) were within the ages of 30-49 years. Most of them had spent about 10 years in farming (58.92%). The highest raw scores for the use of indigenous knowledge were found in planting on mounds in yam production, the use of native medicine and the scare-crow method in controlling birds in rice production, timely planting of locally developed resistant varieties in cassava production, and soaking of maize seeds in water to determine their viability, with raw scores of 313, 310, 305, 303, and 300 respectively, while the lowest raw score (210) was obtained for the use of the bell method in controlling birds in rice production. The findings established that proverbs (59.8%) and taboos (55.36%) were the most commonly used media for transmitting indigenous knowledge by arable crop farmers. The multiple regression analysis revealed that the age of the farmers and farming experience had a significant relationship with their use of indigenous knowledge, with R² = 0.83 for the semi-log functional form, the lead equation. The policy implication is that indigenous knowledge should provide a basis for designing modern technologies to enhance sustainable agricultural development.

Keywords: Arable Crop Production, extent of use, indigenous knowledge, farming experience

Procedia PDF Downloads 571
16402 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. Therefore, a pre-processing step is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from the mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed was placed on the boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle, and the data fell within the 95% confidence interval, supporting the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free of intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
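
A rough sketch of the thresholding-plus-Hough portion of such a pipeline is shown below using scikit-image on a synthetic stand-in image; the active contour refinement and Bland-Altman analysis of the paper are omitted, and only the Jaccard comparison against a reference mask is kept.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks
from skimage.draw import polygon

def segment_pectoral(image):
    """Rough pectoral-muscle mask: Otsu threshold to drop background, then a
    straight-line Hough fit to the muscle edge. The active-contour refinement
    used in the paper is omitted in this sketch."""
    body = image > threshold_otsu(image)             # remove non-biological background
    edges = canny(image * body, sigma=3)
    h, angles, dists = hough_line(edges)
    _, angle, dist = [v[0] for v in hough_line_peaks(h, angles, dists, num_peaks=1)]
    # Everything on the chest-wall side of the detected line is taken as muscle
    rows, cols = np.indices(image.shape)
    mask = (cols * np.cos(angle) + rows * np.sin(angle)) < dist
    return mask & body

def jaccard(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Synthetic stand-in for a mammogram: bright triangular "muscle" in one corner
img = np.zeros((256, 256)) + 0.1
rr, cc = polygon([0, 0, 120], [0, 120, 0])
img[rr, cc] = 0.9
auto = segment_pectoral(img)
manual = np.zeros_like(img, dtype=bool); manual[rr, cc] = True
print("Jaccard vs. reference mask: %.2f" % jaccard(auto, manual))
```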

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
16401 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

This research paper is an effort to compare earnings-based and cash flow-based methods of valuation of an enterprise. The theoretically equivalent methods, based either on earnings, such as the Residual Earnings Model (REM), Abnormal Earnings Growth Model (AEGM), Residual Operating Income Method (ReOIM), Abnormal Operating Income Growth Model (AOIGM) and their extensions, multipliers such as the price/earnings ratio and price/book value ratio; or on cash flows, such as the Dividend Valuation Method (DVM) and Free Cash Flow Method (FCFM), all provide different estimates of the valuation of the Indian corporate giant Reliance Industries Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years from 2008-09 to 2011-12 has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. These different estimates may arise due to inconsistencies in the discount rate, growth rates, and the other forecasted variables. Although inputs for the earnings-based models may be available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by the internal management. Estimation of value from more stable parameters such as residual operating income and RNOA could be considered superior to valuations from the more volatile return on equity.
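
As a worked illustration of one of the earnings-based models named above, the sketch below computes a residual earnings valuation, V0 = B0 + Σt (Et − r·Bt−1)/(1 + r)^t, under clean-surplus accounting; the figures are invented and are not RIL's published data.

```python
# A back-of-the-envelope residual earnings valuation, one of the earnings-based
# models compared in the paper. The inputs below are illustrative only.

def residual_earnings_value(book0, earnings, payout, r):
    """book0: current book value; earnings: forecast earnings per period;
    payout: dividend payout ratio; r: required return on equity."""
    value, book = book0, book0
    for t, e in enumerate(earnings, start=1):
        residual = e - r * book                # earnings in excess of the equity charge
        value += residual / (1 + r) ** t
        book += e * (1 - payout)               # clean-surplus book value update
    return value

print(residual_earnings_value(book0=100.0,
                              earnings=[14.0, 15.5, 17.0, 18.5],
                              payout=0.3, r=0.12))
```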

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 376
16400 Improvement of Soft Clay Soil with Biopolymer

Authors: Majid Bagherinia

Abstract:

Lime and cement are frequently used as binders in the Deep Mixing Method (DMM) to improve soft clay soils. The most significant disadvantages of these materials are carbon dioxide emissions and the consumption of natural resources. In this study, three different biopolymers, guar gum, locust bean gum, and sodium alginate, were investigated for the improvement of soft clay using DMM. In the experimental study, the effects of the additive ratio and curing time on the Unconfined Compressive Strength (UCS) of stabilized specimens were investigated. According to the results, the UCS values of the specimens increased as the additive ratio and curing time increased. The most effective additive was sodium alginate, and the highest strength was obtained after 28 days.

Keywords: deep mixing method, soft clays, ground improvement, biopolymers, unconfined compressive strength

Procedia PDF Downloads 80
16399 Degradation of Neonicotinoid Insecticides (Acetamiprid and Imidacloprid) Using Biochar of Rice Husk and Fruit Peels

Authors: Mateen Abbas, Abdul Muqeet Khan, Sadia Bashir, Muhammad Awais Khalid, Aamir Ghafoor, Zara Hussain, Mashal Shahid

Abstract:

The irrational use of insecticides in everyday life has drawn worldwide attention to their harmful effects. To mitigate the toxic effects of insecticides on humans, the present study addresses the degradation/detoxification of the neonicotinoid insecticides imidacloprid and acetamiprid. Biocarbon from fruit peels (banana and watermelon) and biochar (activated or non-activated) from rice husk were utilized as adsorbents for the degradation of the selected pesticides. Both activated and non-activated biochar were prepared for the treatment and then applied to the insecticides (acetamiprid and imidacloprid) at different concentrations (0.5 to 2.0 ppm) and dosages (1.0 to 2.5 g), and studied at different contact times (30-120 minutes). Reverse-Phase High Performance Liquid Chromatography (RP-HPLC) coupled with a photodiode array detector was used to quantify the insecticides. The results showed that activated biochar of rice husk reduced the concentrations of both insecticides by 73%, while activated watermelon biocarbon degraded 72% of imidacloprid and 56% of acetamiprid. The results proved the efficiency of the method employed, and it was also inferred that a higher concentration of biocarbon resulted in a larger percentage of degradation. The applied method is cheap, easy, and accessible, and can be used to minimize pesticide residues in animal feed. Degradation using biochar proved to be an effective, eco-friendly, and economical method for reducing the toxicity of insecticides.

Keywords: insecticides, acetamiprid, imidacloprid, biochar, HPLC

Procedia PDF Downloads 153
16398 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling

Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon

Abstract:

This paper discusses the effects of surface roughness on a cantilever beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon using deep reactive ion etching (DRIE) at large etch depths, rougher surfaces are observed as a result of increases in process pressure, coil power, and helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 µm of silicon, the root mean square roughness parameter, Sq, varies by 1 to 3 µm (at 100 µm thickness) across a 6-inch wafer. Given this Sq variation, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while having no significant effect on the output power. The significant shift in the resonant frequency implies that careful consideration of surface roughness from fabrication processes must be made when designing energy harvesters.
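
To make the thickness-to-frequency sensitivity concrete, the sketch below evaluates the first-mode frequency of an Euler-Bernoulli silicon cantilever, f1 = (1.875²/2π)·√(EI/ρAL⁴), for a few thickness offsets standing in for the measured Sq variation; the beam dimensions are assumptions, not the device reported in the paper.

```python
import numpy as np

# First-mode resonant frequency of a rectangular silicon cantilever, used here
# only to illustrate how a small thickness change (a proxy for DRIE-induced
# roughness) shifts the resonance. Dimensions are assumed, not the paper's device.
E, rho = 169e9, 2330.0                      # silicon Young's modulus (Pa), density (kg/m^3)
L, w = 5000e-6, 500e-6                      # beam length and width (m)

def f1(thickness):
    I = w * thickness**3 / 12.0             # second moment of area
    A = w * thickness
    return (1.875**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))

t_nom = 100e-6                              # 100 um nominal thickness
for dt in (0e-6, 1e-6, 3e-6):               # Sq-like variation of 0-3 um
    print("t = %.0f um  ->  f1 = %.1f Hz (shift %.1f Hz)"
          % ((t_nom + dt) * 1e6, f1(t_nom + dt), f1(t_nom + dt) - f1(t_nom)))
```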

Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester

Procedia PDF Downloads 121
16397 Structural and Optical Characterization of Silica@PbS Core–Shell Nanoparticles

Authors: A. Pourahmad, Sh. Gharipour

Abstract:

The present work describes the preparation and characterization of nanosized SiO2@PbS core-shell particles using a simple wet chemical route. The method involves the formation of silica spheres followed by the formation of a lead sulphide shell layer assisted by the successive ionic layer adsorption and reaction method. The final product was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-vis spectroscopy, infrared spectroscopy (IR), and transmission electron microscopy (TEM). The morphological studies revealed uniformity in the size distribution, with a core size of 250 nm and a shell thickness of 18 nm. The electron microscopy images also indicate the irregular morphology of the lead sulphide shell layer. The structural studies indicate a face-centered cubic system for the PbS shell, with no trace of impurities in the crystal structure.

Keywords: core-shell, nanostructure, semiconductor, optical property, XRD

Procedia PDF Downloads 299
16396 Theoretical Analysis of Photoassisted Field Emission near the Metal Surface Using Transfer Hamiltonian Method

Authors: Rosangliana Chawngthu, Ramkumar K. Thapa

Abstract:

A model calculation of the photoassisted field emission current (PFEC) using the transfer Hamiltonian method is presented here. Radiation is incident on the surface of the metal with a photon energy that is usually less than the work function of the metal under investigation. The incident radiation photoexcites the electrons to a final state which lies below the vacuum level, so the electrons remain confined within the metal surface. A strong static electric field is then applied to the surface of the metal, which causes the photoexcited electrons to tunnel through the surface potential barrier into the vacuum region and constitutes the considerable current called the photoassisted field emission current. The incident radiation, usually a laser beam, causes the transition of electrons from the initial state to the final state, and the matrix element for this transition is written down. For the calculation of the PFEC, the transfer Hamiltonian method is used. The initial state wavefunction is calculated using the Kronig-Penney potential model. The effect of the matrix element is also studied. An appropriate dielectric model for the surface region of the metal is used for the evaluation of the vector potential. A FORTRAN program is used for the calculation of the PFEC, and the results are checked against experimental data and theoretical results.
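
Since the initial-state wavefunction comes from the Kronig-Penney model, the sketch below computes the allowed energy bands of that model in the delta-barrier limit from cos(ka) = cos(αa) + P·sin(αa)/(αa); the barrier strength and dimensionless units are illustrative and unrelated to the paper's FORTRAN calculation.

```python
import numpy as np

# Allowed energy bands from the Kronig-Penney model in the delta-barrier limit:
#   cos(k a) = cos(alpha a) + P * sin(alpha a) / (alpha a),  alpha = sqrt(2 m E) / hbar.
# Energies are allowed where the right-hand side lies in [-1, 1]. Dimensionless
# units (a = 1, hbar^2 / 2m = 1) and the barrier strength P are illustrative.
P = 3.0 * np.pi / 2.0
alpha_a = np.linspace(1e-6, 4 * np.pi, 4000)          # alpha * a
rhs = np.cos(alpha_a) + P * np.sin(alpha_a) / alpha_a
allowed = np.abs(rhs) <= 1.0

# Report band edges in terms of the dimensionless energy E = (alpha a)^2
edges = np.flatnonzero(np.diff(allowed.astype(int)))
energies = alpha_a**2
print("band edges (dimensionless energy):")
print(np.round(energies[edges], 2))
```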

Keywords: photoassisted field emission, transfer Hamiltonian, vector potential, wavefunction

Procedia PDF Downloads 226
16395 Nonlinear Adaptive PID Control for a Semi-Batch Reactor Based on an RBF Network

Authors: Magdi. M. Nabi, Ding-Li Yu

Abstract:

Control of a semi-batch polymerization reactor using an adaptive radial basis function (RBF) neural network method is investigated in this paper. A neural network inverse model is used to estimate the valve position of the reactor; this method can identify the controlled system with the RBF neural network identifier. The weights of the adaptive PID controller are adjusted in a timely manner based on the identification of the plant and the self-learning capability of the RBFNN. A PID controller is used in the feedback loop to regulate the actual temperature by compensating the neural network inverse model output. Simulation results show that the proposed control has strong adaptability, robustness, and satisfactory control performance, and that effective control of the nonlinear system is achieved.
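
The sketch below mimics the inverse-model-feedforward-plus-PID-feedback structure on a toy static valve-to-temperature map: an RBF network is fitted by least squares as the inverse model and a fixed-gain PID closes the loop. The plant, gains, and RBF layout are assumptions; the Chylla-Haase reactor dynamics and the on-line weight adaptation of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):                       # assumed static nonlinearity: valve position -> temperature
    return 20.0 + 60.0 * np.tanh(1.5 * u)

# --- identify the inverse model u = g(y) with Gaussian RBFs (least squares) ---
u_data = np.linspace(0.0, 1.0, 200)
y_data = plant(u_data) + 0.2 * rng.standard_normal(u_data.size)
centers = np.linspace(y_data.min(), y_data.max(), 15)
width = (centers[1] - centers[0]) * 1.5

def rbf_features(y):
    return np.exp(-((np.atleast_1d(y)[:, None] - centers) ** 2) / (2 * width ** 2))

W, *_ = np.linalg.lstsq(rbf_features(y_data), u_data, rcond=None)
inverse_model = lambda y: (rbf_features(y) @ W).item()

# --- feedforward (inverse model) + PID feedback on the setpoint error ---
Kp, Ki, Kd, dt = 0.005, 0.01, 0.0, 1.0
setpoint, y, integ, prev_e = 65.0, 20.0, 0.0, 0.0
for step in range(30):
    e = setpoint - y
    integ += e * dt
    u = inverse_model(setpoint) + Kp * e + Ki * integ + Kd * (e - prev_e) / dt
    prev_e = e
    y = plant(np.clip(u, 0.0, 1.0))
print("final temperature: %.2f (setpoint %.1f)" % (y, setpoint))
```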

Keywords: Chylla-Haase polymerization reactor, RBF neural networks, feed-forward, feedback control

Procedia PDF Downloads 702
16394 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling

Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud

Abstract:

Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of previous studies was low due to methodological limitations. Lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results due to the targeted experimental procedure from its placebo effect is needed to enhance the validity of such trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned into three groups with a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects received the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated measures analysis of covariance (ANCOVA) for time effects and group effects. The data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance during the advancement of DN (p<0.128). G1 had significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks only (both p<0.05). There was no significant difference between G2 and G3 at 8 weeks (both p>0.05). Conclusion: The method with the US monitor turned off during the application of DN is credible for blinding the participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the short-term pain-reduction effects of US-guided DN for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].

Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy

Procedia PDF Downloads 120
16393 A Study on the Performance of 2-PC-D Classification Model

Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli

Abstract:

There are many applications of the principal component method for reducing a large set of variables in various fields. Fisher's discriminant function is also a popular tool for classification. In this research, the researchers focus on studying the performance of the principal component-Fisher's discriminant function in helping to classify rice kernels into their defined classes. The data were collected on the smell or odour of the rice kernels using an odour-detection sensor, the Cyranose. 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model of principal components and a linear discriminant performs as a classification model. The principal component method was used to reduce all 32 variables to a smaller, manageable set of components. Then, the reduced components were used to develop the Fisher's discriminant function. In this research, there are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary, and Others. Based on the output of the principal component method, the 32 variables were reduced to only 2 components. Based on the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function. This indicates that the classification model developed misclassified more than 50% of the observations. In conclusion, the Fisher's discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
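
A compact version of the 2-PC-D pipeline on synthetic 32-variable, 4-class data is sketched below with scikit-learn: PCA down to 2 components followed by a linear (Fisher) discriminant, with the full 32-variable discriminant shown for comparison. The data generator is a placeholder for the Cyranose e-nose measurements.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the 32-variable e-nose data with 4 rice classes.
X, y = make_classification(n_samples=400, n_features=32, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2-PC-D: reduce 32 variables to 2 principal components, then Fisher's discriminant.
model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print("classification accuracy with 2 PCs: %.2f" % model.score(X_test, y_test))

# For comparison, the same discriminant on all 32 variables.
full = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("classification accuracy with all 32 variables: %.2f" % full.score(X_test, y_test))
```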

Keywords: classification model, discriminant function, principle component analysis, variable reduction

Procedia PDF Downloads 332
16392 The Effect of Body Positioning on Upper-Limb Arterial Occlusion Pressure and the Reliability of the Method during Blood Flow Restriction Training

Authors: Stefanos Karanasios, Charkleia Koutri, Maria Moutzouri, Sofia A. Xergia, Vasiliki Sakellari, George Gioftsos

Abstract:

The precise calculation of arterial occlusive pressure (AOP) is a critical step to accurately prescribe individualized pressures during blood flow restriction training (BFRT). AOP is usually measured in a supine position before training; however, previous reports suggested a significant influence in lower limb AOP across different body positions. The aim of the study was to investigate the effect of three different body positions on upper limb AOP and the reliability of the method for its standardization in clinical practice. Forty-two healthy participants (Mean age: 28.1, SD: ±7.7) underwent measurements of upper limb AOP in supine, seated, and standing positions by three blinded raters. A cuff with a manual pump and a pocket doppler ultrasound were used. A significantly higher upper limb AOP was found in seated compared with supine position (p < 0.031) and in supine compared with standing position (p < 0.031) by all raters. An excellent intraclass correlation coefficient (0.858- 0.984, p < 0.001) was found in all positions. Upper limb AOP is strongly dependent on body position changes. The appropriate measurement position should be selected to accurately calculate AOP before BFRT. The excellent inter-rater reliability and repeatability of the method suggest reliable and consistent results across repeated measurements.

Keywords: Kaatsu training, blood flow restriction training, arterial occlusion, reliability

Procedia PDF Downloads 212
16391 High-Yield Synthesis of Nanohybrid Shish-Kebab of Polyethylene on Carbon NanoFillers

Authors: Dilip Depan, Austin Simoneaux, William Chirdon, Ahmed Khattab

Abstract:

In this study, we present a novel approach to synthesize polymer nanocomposites with a nanohybrid shish-kebab architecture (NHSK). For this, low-density and high-density polyethylene (PE) were crystallized on various carbon nano-fillers using a novel and convenient method to prepare high-yield NHSK. Polymer crystals grew epitaxially on the carbon nano-fillers using a solution crystallization method. The mixture of polymer and carbon fillers in xylene was flocculated and precipitated in ethanol to improve the product yield. Carbon nanofillers of varying diameter were used as a nucleating template for polymer crystallization. The morphology of the prepared nanocomposites was characterized by scanning electron microscopy (SEM), while differential scanning calorimetry (DSC) was used to quantify the amount of crystalline polymer. Interestingly, whatever the diameter of the carbon nanofiller, the PE lamellae are always perpendicular to the long axis of the nanofiller. Surface area analysis was performed using BET. Our results indicate that carbon nanofillers of varying diameter can be used to effectively nucleate the crystallization of the polymer. The effect of the molecular weight and concentration of the polymer is discussed on the basis of the chain mobility and crystallization capability of the polymer matrix. Our work shows a facile, rapid, yet high-yield production method for forming polymer nanocomposites, revealing the application potential of the NHSK architecture.

Keywords: carbon nanotubes, polyethylene, nanohybrid shish-kebab, crystallization, morphology

Procedia PDF Downloads 329
16390 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images

Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou

Abstract:

This study proposes a generative adversarial networks (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. Then the GAN framework incorporates adversarial learning to train a feature generator for the minority class through playing a minimax game with a discriminator. The feature generator then generates features for the minority class from arbitrary latent distributions to balance the data between the majority class and the minority class. Additionally, a data cleaning technique, i.e., the Tomek link, is employed to clean up undesirable conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for training. The experiment section evaluates the proposed method on two medical image analysis tasks, i.e., mass classification on mammograms and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance over the state-of-the-art counterparts. Compared to all counterparts, the proposed method improves accuracy by more than 1.5 percentage points.
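
The sketch below reproduces only the balancing-then-cleaning shape of the pipeline: minority-class features are oversampled with Gaussian perturbations as a crude placeholder for samples drawn from the trained GAN generator, and Tomek-link cleaning is applied with the imbalanced-learn package (assumed installed).

```python
import numpy as np
from sklearn.datasets import make_classification
from imblearn.under_sampling import TomekLinks   # assumed: imbalanced-learn package

# Imbalanced feature-space data standing in for extracted features of medical images.
X, y = make_classification(n_samples=1000, n_features=32, weights=[0.95, 0.05],
                           random_state=0)

# Feature augmentation for the minority class. The paper trains a GAN generator
# adversarially; here minority features are simply perturbed with Gaussian noise
# as a placeholder for samples drawn from the learned generator.
minority = X[y == 1]
n_extra = (y == 0).sum() - (y == 1).sum()
idx = np.random.default_rng(0).integers(0, len(minority), n_extra)
synthetic = minority[idx] + 0.05 * np.random.default_rng(1).standard_normal((n_extra, X.shape[1]))
X_bal = np.vstack([X, synthetic])
y_bal = np.concatenate([y, np.ones(n_extra, dtype=int)])

# Tomek-link cleaning removes conflicting pairs near the class boundary.
X_clean, y_clean = TomekLinks().fit_resample(X_bal, y_bal)
print("before:", np.bincount(y), "after augmentation:", np.bincount(y_bal),
      "after cleaning:", np.bincount(y_clean))
```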

Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning

Procedia PDF Downloads 127
16389 An Integrated Architecture of E-Learning System to Digitize the Learning Method

Authors: M. Touhidul Islam Sarker, Mohammod Abul Kashem

Abstract:

The purpose of this paper is to improve the e-learning system and digitize the learning method in the educational sector. Learners log in to the e-learning platform, easily access the digital content, download it, and take assessments for evaluation. Learners can access these digital resources using a tablet, computer, or smartphone. An e-learning system can be defined as teaching and learning with the help of multimedia technologies and the internet through access to digital content. E-learning is replacing the traditional education system with information and communication technology-based learning. This paper has designed and implemented an integrated e-learning system architecture with a University Management System. Moodle (Modular Object-Oriented Dynamic Learning Environment) is the best e-learning system, but the problem with Moodle is that it has no school or university management system. In this research, school students were not considered because they lack internet facilities; university students were considered because they have internet access and use these technologies. The University Management System has different types of activities, such as student registration, account management, teacher information, semester registration, staff information, etc. If these modules are integrated with Moodle, the limitation of Moodle can be overcome, and the e-learning system architecture is enhanced, making effective use of technology. This architecture allows the learner to easily access the resources of the e-learning platform anytime and anywhere, which digitizes the learning method.

Keywords: database, e-learning, LMS, Moodle

Procedia PDF Downloads 188
16388 An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection

Authors: Weihao Wang, Zhulin Zong

Abstract:

Two-dimensional ordered statistical constant false alarm detection is a widely used method for detecting weak target signals in radar signal processing applications. The method is based on analyzing the statistical characteristics of the noise and clutter present in the radar signal and then using this information to set an appropriate detection threshold. In this approach, the reference cells around the cell under test are divided into several reference subunits. These subunits are used to estimate the noise level and adjust the detection threshold, with the aim of minimizing the false alarm rate. By using an ordered statistical approach, the method is able to effectively suppress the influence of clutter and noise, resulting in a low false alarm rate. The detection process involves a number of steps, including filtering the input radar signal to remove noise or clutter, estimating the noise level based on the statistical characteristics of the reference subunits, and finally setting the detection threshold based on the estimated noise level. One of the main advantages of two-dimensional ordered statistical constant false alarm detection is its ability to detect weak target signals in the presence of strong clutter and noise, achieved by carefully analyzing the statistical properties of the signal and using the ordered statistic to estimate the noise level and adjust the threshold. In conclusion, two-dimensional ordered statistical constant false alarm detection is a powerful technique for radar signal processing: by dividing the reference cells into several subunits and deriving the threshold from an ordered statistic of their values, the method effectively suppresses the influence of clutter and noise while maintaining a low false alarm rate.
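
A minimal 2-D OS-CFAR sketch over a synthetic range-Doppler-style power map is given below; the training/guard window sizes, order-statistic rank, and scaling factor are illustrative choices rather than the improved parameters proposed in the paper.

```python
import numpy as np

def os_cfar_2d(power, guard=2, train=4, k_frac=0.75, alpha=8.0):
    """Ordered-statistic CFAR on a 2-D power map (e.g., range-Doppler).
    For each cell under test, the reference cells are the training window minus
    the guard region; the k-th order statistic estimates the noise level and
    alpha scales it into a threshold. Window sizes and alpha are illustrative."""
    rows, cols = power.shape
    half = guard + train
    detections = np.zeros_like(power, dtype=bool)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            window = power[i - half:i + half + 1, j - half:j + half + 1].copy()
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan  # guard + CUT
            ref = np.sort(window[~np.isnan(window)])
            noise = ref[int(k_frac * len(ref))]          # k-th ordered reference sample
            detections[i, j] = power[i, j] > alpha * noise
    return detections

# Exponential clutter (unit mean) with two weak targets embedded.
rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, size=(64, 64))
clutter[20, 30] += 25.0
clutter[40, 45] += 25.0
hits = os_cfar_2d(clutter)
print("detections at:", np.argwhere(hits))
```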

Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals

Procedia PDF Downloads 78
16387 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in the member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and PR, a false public image was created that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. A particular machine learning method, the decision tree method, has demonstrated an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
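
To show the intended use of such models, the sketch below fits a decision tree regressor to synthetic per-unit consumption data and then isolates the allocator effect by toggling only the allocator feature for one unit; the data-generating assumptions (including the roughly 10% allocator effect) are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for per-unit district-heating data: floor area, outdoor
# temperature, unit position (corner/middle), and an allocator flag. The true
# relationship and the ~10% allocator effect assumed here are illustrative only.
rng = np.random.default_rng(0)
n = 5000
area = rng.uniform(40, 120, n)                 # m^2
t_out = rng.uniform(-15, 15, n)                # degC
corner = rng.integers(0, 2, n)                 # 1 if corner unit
allocator = rng.integers(0, 2, n)              # 1 if heat cost allocators installed
consumption = (area * (18 - t_out) * 0.12 * (1 + 0.15 * corner)
               * (1 - 0.10 * allocator) + rng.normal(0, 30, n))

X = np.column_stack([area, t_out, corner, allocator])
X_tr, X_te, y_tr, y_te = train_test_split(X, consumption, random_state=0)
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out units: %.3f" % tree.score(X_te, y_te))

# Isolate the allocator effect for one unit by toggling only that feature.
unit = np.array([[80.0, -5.0, 0, 0]])
with_alloc = unit.copy(); with_alloc[0, 3] = 1
print("predicted saving: %.1f kWh" % (tree.predict(unit)[0] - tree.predict(with_alloc)[0]))
```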

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method

Procedia PDF Downloads 249
16386 Frequency Modulation in Vibro-Acoustic Modulation Method

Authors: D. Liu, D. M. Donskoy

Abstract:

The vibroacoustic modulation method is based on the modulation of a high-frequency ultrasonic wave (carrier) by low-frequency vibration in the presence of various defects, primarily contact-type defects such as cracks, delamination, etc. The presence and severity of the defect are measured by the ratio of the spectral sidebands to the carrier in the spectrum of the modulated signal. This approach, however, does not differentiate between amplitude and frequency modulation, AM and FM, respectively. It was experimentally shown that both modulations can be present in the spectrum, yet each modulation may be associated with different physical mechanisms. AM mechanisms are quite well understood and widely covered in the literature. This paper is a first attempt to explain the generation mechanisms of FM and its correlation with the flaw properties. Here we propose two possible mechanisms leading to FM, based on nonlinear local defect resonance and dynamic acousto-elastic models.
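
The AM/FM distinction above can be seen directly in a spectrum: the sketch below generates an amplitude-modulated and a frequency-modulated carrier and lists the strongest components near the carrier, both showing sidebands at fc ± fm; the frequencies and modulation indices are arbitrary illustrative values.

```python
import numpy as np

# Spectra of an amplitude-modulated and a frequency-modulated carrier, showing
# that both produce sidebands at f_c +/- f_m; carrier/modulation frequencies and
# modulation indices are illustrative, not tied to a particular specimen.
fs, T = 100_000, 1.0
t = np.arange(0, T, 1 / fs)
fc, fm = 20_000.0, 200.0                    # ultrasonic carrier, low-frequency vibration
am = (1 + 0.2 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
fm_sig = np.cos(2 * np.pi * fc * t + 0.2 * np.sin(2 * np.pi * fm * t))

freqs = np.fft.rfftfreq(len(t), 1 / fs)
for name, sig in (("AM", am), ("FM", fm_sig)):
    spec = np.abs(np.fft.rfft(sig)) / len(t)
    band = (freqs > fc - 2.5 * fm) & (freqs < fc + 2.5 * fm)
    peaks = freqs[band][np.argsort(spec[band])[-3:]]
    print(name, "strongest components near carrier (Hz):", np.sort(np.round(peaks)))
```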

Keywords: non-destructive testing, nonlinear acoustics, structural health monitoring, acousto-elasticity, local defect resonance

Procedia PDF Downloads 152
16385 Geospatial Land Suitability Modeling for Biofuel Crop Using AHP

Authors: Naruemon Phongaksorn

Abstract:

Biofuel consumption has increased significantly over the past decade, resulting in increasing demand for agricultural land for biofuel feedstocks. However, biofuel feedstocks already suffer from low productivity owing to inappropriate agricultural practices that do not consider the suitability of the crop land. This research evaluates land suitability for a biofuel crop, cassava, in Chachoengsao province, Thailand, using GIS-integrated Analytic Hierarchy Process (AHP), a method that has been widely accepted for land use planning. The objective of this study is to compare the AHP method with the most limiting group of land characteristics method (the classical approach). The results of the land evaluation were tested against the crop performance assessed by a field investigation in 2015. In addition to the socio-economic land suitability, the expected availability of raw materials for biofuel production to meet the local biofuel demand is also estimated. The results showed that the AHP could classify and map the physical land suitability with 10% higher overall accuracy than the classical approach. Chachoengsao province showed high and moderate socio-economic land suitability for cassava. Conditions in Chachoengsao province were also favorable for cassava plantation, as the expected raw material needed to support ethanol production matched the ethanol plant capacity of the province. GIS-integrated AHP for biofuel crop land suitability evaluation appears to be a practical way of sustainably meeting biofuel production demand.
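
For reference, the sketch below shows the core AHP computation: criterion weights from the principal eigenvector of a pairwise comparison matrix plus Saaty's consistency ratio; the criteria and judgments are invented and are not the study's actual comparison matrix.

```python
import numpy as np

# Criterion weights from a pairwise comparison matrix via the principal
# eigenvector, plus Saaty's consistency ratio. The criteria and judgments below
# are illustrative, not the study's actual comparison matrix.
criteria = ["soil texture", "rainfall", "slope", "distance to road"]
A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals[i].real
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
CR = CI / RI                                  # judgments usually accepted if below ~0.10

for crit, w in zip(criteria, weights):
    print(f"{crit:17s} weight = {w:.3f}")
print(f"consistency ratio = {CR:.3f}")
```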

Keywords: Analytic Hierarchy Processing (AHP), Cassava, Geographic Information Systems, Land suitability

Procedia PDF Downloads 201
16384 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper studies three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases, horizontal and vertical temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two different test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations remain within the incompressible regime. Different parameters such as velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing works of literature. The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
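
A stripped-down D2Q9 SRT (BGK) solver for the lid-driven cavity case is sketched below; it uses plain bounce-back walls and an equilibrium-reset lid instead of the moment-based boundary condition used in the paper, so it is an illustrative sketch rather than the authors' scheme.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann solver for lid-driven cavity flow.
nx = ny = 64
Re, U = 100.0, 0.1                       # Reynolds number, lid velocity (lattice units)
nu = U * nx / Re
tau = 3.0 * nu + 0.5                     # single relaxation time (SRT)

# D2Q9 velocities, weights, and opposite directions (for bounce-back)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

rho = np.ones((ny, nx))
ux = np.zeros((ny, nx)); uy = np.zeros((ny, nx))
f = equilibrium(rho, ux, uy)

for step in range(5000):
    # Collision (BGK relaxation toward equilibrium), then streaming
    f += -(f - equilibrium(rho, ux, uy)) / tau
    for k in range(9):
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=1), c[k, 1], axis=0)
    # Bounce-back on the three stationary walls (full reflection at wall nodes)
    f[:, 0, :] = f[:, 0, :][opp]         # bottom wall
    f[:, :, 0] = f[:, :, 0][opp]         # left wall
    f[:, :, -1] = f[:, :, -1][opp]       # right wall
    # Moving lid (top row): re-impose equilibrium with velocity (U, 0)
    f[:, -1, :] = equilibrium(rho[-1, :], np.full(nx, U), np.zeros(nx))[:, 0, :]
    # Macroscopic fields
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho

print("centreline u_x (sampled):", np.round(ux[::16, nx // 2], 4))
```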

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 128
16383 Investigation on Cost Reflective Network Pricing and Modified Cost Reflective Network Pricing Methods for Transmission Service Charges

Authors: K. Iskandar, N. H. Radzi, R. Aziz, M. S. Kamaruddin, M. N. Abdullah, S. A. Jumaat

Abstract:

Nowadays, many developing countries have been undergoing a restructuring process in the electric power industry. This process has involved disaggregating former state-owned monopoly utilities both vertically and horizontally and introducing competition. The restructuring process has been implemented in the Australian National Electricity Market (NEM), which started on 13 December 1998 and began operating as a wholesale market for the supply of electricity to retailers and end-users in Queensland, New South Wales, the Australian Capital Territory, Victoria, and South Australia. In this deregulated market, one of the important issues is transmission pricing. Transmission pricing is a service that recovers the existing and new costs of the transmission system. The regulation of transmission pricing is important in determining whether the transmission service system is economically beneficial to both users and utilities. Therefore, an efficient transmission pricing methodology plays an important role in the Australian NEM. In this paper, the transmission pricing methodologies that have been implemented in the Australian NEM, namely the Cost Reflective Network Pricing (CRNP) and Modified Cost Reflective Network Pricing (MCRNP) methods, are investigated for allocating the transmission service charges to the transmission users. A case study using a 6-bus system is carried out in order to identify the method that best reflects a fair and equitable transmission service charge.

Keywords: cost-reflective network pricing method, modified cost-reflective network pricing method, restructuring process, transmission pricing

Procedia PDF Downloads 445
16382 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm

Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim

Abstract:

Market Kurly is the first South Korean online grocery retailer that guarantees same-day, overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products to a cart per month. The company has sold almost 30,000 kinds of products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids/pets, and even flowers. The company is operating and expanding multiple dry, cold, and frozen fulfillment centers in order to store and ship these products. Due to the scale and complexity of the fulfillment, pick-pack-ship processes are planned and operated in batches, and thus, the planning that decides how customers' orders are batched is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to creating initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company's fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce operation time for packing, the most complex and time-consuming task in the process, by 3%. The optimization method implements a multithreading design on the Spring framework to support the company's warehouse management systems in near real-time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance.
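
The sketch below is a toy version of the batching idea: a genetic algorithm assigns synthetic orders to a fixed number of batches and minimizes the total count of distinct SKUs; the instance size, GA parameters, and fitness are assumptions, and the production optimizer's heuristic gene creation and multithreaded Spring integration are not reproduced.

```python
import random

# Toy genetic algorithm that assigns orders to a fixed number of batches so as
# to minimise the total number of distinct products across batches.
random.seed(0)
N_ORDERS, N_BATCHES, N_SKUS = 200, 8, 300
orders = [frozenset(random.sample(range(N_SKUS), random.randint(3, 14)))
          for _ in range(N_ORDERS)]

def complexity(genome):
    """Sum over batches of the number of distinct SKUs picked for that batch."""
    skus_per_batch = [set() for _ in range(N_BATCHES)]
    for order, batch in zip(orders, genome):
        skus_per_batch[batch] |= order
    return sum(len(s) for s in skus_per_batch)

def tournament(pop, scores, k=3):
    return min(random.sample(list(zip(pop, scores)), k), key=lambda t: t[1])[0]

pop = [[random.randrange(N_BATCHES) for _ in range(N_ORDERS)] for _ in range(60)]
for gen in range(150):
    scores = [complexity(g) for g in pop]
    new_pop = [min(zip(pop, scores), key=lambda t: t[1])[0]]        # elitism
    while len(new_pop) < len(pop):
        a, b = tournament(pop, scores), tournament(pop, scores)
        cut = random.randrange(N_ORDERS)
        child = a[:cut] + b[cut:]                                   # one-point crossover
        for i in range(N_ORDERS):                                   # mutation
            if random.random() < 0.01:
                child[i] = random.randrange(N_BATCHES)
        new_pop.append(child)
    pop = new_pop

print("best total distinct-SKU count:", min(complexity(g) for g in pop))
```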

Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization

Procedia PDF Downloads 83
16381 Lossless Secret Image Sharing Based on Integer Discrete Cosine Transform

Authors: Li Li, Ahmed A. Abd El-Latif, Aya El-Fatyany, Mohamed Amin

Abstract:

This paper proposes a new secret image sharing method based on the integer discrete cosine transform (IntDCT). It first transforms the original image into the frequency domain (DCT coefficients) using the IntDCT, which is applied to each block of size 8*8. Then, it generates shares among the DCT coefficients at the same position in each block; that is, all the DC components are used to generate the DC shares, the i-th AC component in each block is utilized to generate the i-th AC shares, and so on. The DC and AC share components with the same index are combined to generate the DCT shadows. Experimental results and analyses show that the proposed method can recover the original image losslessly, unlike methods based on the traditional DCT, and is more sensitive to tiny changes in both the coefficients and the content of the image.
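
A block-DCT sharing sketch in the same spirit is given below; it substitutes a plain orthonormal DCT and simple additive sharing of each coefficient position for the paper's integer DCT and sharing scheme, so recovery is exact only up to floating-point rounding.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Block-wise DCT share generation sketch. The paper uses an integer DCT and a
# secret-sharing scheme that guarantee lossless recovery; here a plain
# orthonormal DCT and additive sharing of each coefficient are used as stand-ins.
rng = np.random.default_rng(0)
B, N_SHARES = 8, 3
image = rng.integers(0, 256, size=(64, 64)).astype(float)

def block_dct(img):
    out = np.zeros_like(img)
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            out[i:i+B, j:j+B] = dctn(img[i:i+B, j:j+B], norm="ortho")
    return out

def block_idct(coef):
    out = np.zeros_like(coef)
    for i in range(0, coef.shape[0], B):
        for j in range(0, coef.shape[1], B):
            out[i:i+B, j:j+B] = idctn(coef[i:i+B, j:j+B], norm="ortho")
    return out

coeffs = block_dct(image)

# Additive sharing per coefficient position: random shadows whose sum equals the
# coefficient map (all shadows are needed to reconstruct).
shadows = [rng.normal(0, 50, coeffs.shape) for _ in range(N_SHARES - 1)]
shadows.append(coeffs - sum(shadows))

recovered = block_idct(sum(shadows))
print("max reconstruction error:", np.abs(recovered - image).max())
```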

Keywords: secret image sharing, integer DCT, lossless recovery, sensitivity

Procedia PDF Downloads 398