Search results for: inverse method
16309 The Complete Modal Derivatives
Authors: Sebastian Andersen, Peter N. Poulsen
Abstract:
The use of basis projection in structural dynamic analysis is widespread. The purpose of the method is to improve computational efficiency, while maintaining high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematically nonlinear systems with a focus on two widely used types of basis vectors: the system mode shapes and their modal derivatives. The latter are given special attention, since only approximate modal derivatives have been used until now. In the present work the complete modal derivatives, derived from perturbation methods, are presented and compared to the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated with an example of a harmonically loaded, kinematically nonlinear structure modeled by beam elements.
Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives
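The projection step the abstract refers to can be sketched in a few lines of numpy. This is a minimal illustration on an invented 4-DOF linear spring-mass chain (the matrices and mode count are assumptions, not from the paper): the governing operators are projected onto the two lowest mode shapes, giving a reduced system that reproduces the retained natural frequencies.

```python
import numpy as np

# Hypothetical full-order system: a 4-DOF spring-mass chain (M, K chosen for illustration).
n = 4
M = np.eye(n)
K = np.diag([2.0] * n) - np.diag([1.0] * (n - 1), 1) - np.diag([1.0] * (n - 1), -1)

# Mode shapes from the eigenproblem K @ phi = w^2 M @ phi (M = I here, so eigh on K suffices).
eigvals, eigvecs = np.linalg.eigh(K)
Phi = eigvecs[:, :2]  # keep the two lowest modes as the projection basis

# Galerkin projection: the reduced operators act on 2 generalized coordinates instead of 4.
M_r = Phi.T @ M @ Phi
K_r = Phi.T @ K @ Phi

# The reduced problem reproduces the retained eigenvalues of the full system.
reduced_eigvals = np.linalg.eigvals(np.linalg.solve(M_r, K_r))
print(np.sort(reduced_eigvals.real))
```

In a kinematically nonlinear setting the basis would be enriched with modal derivatives, but the projection mechanics are the same.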
Procedia PDF Downloads 237
16308 Stress Variation of Underground Building Structure during Top-Down Construction
Authors: Soo-yeon Seo, Seol-ki Kim, Su-jin Jung
Abstract:
In the construction of a building, it is necessary to minimize the construction period and secure enough work space for stacking materials, especially in urban areas. To this end, various top-down construction methods have been developed and are widely used in Korea. This paper investigates, through an analytical approach, the stress variation in the underground structure of a building constructed using SPS (Strut as Permanent System), a top-down method used in Korea. Various types of earth pressure distribution related to ground conditions were considered in the structural analysis of an example structure at each step of the excavation. The analysis showed that the highest member forces acting on beams occurred when the ground was medium sandy soil, and a stress concentration was found in the corner areas.
Keywords: construction of building, top-down construction method, earth pressure distribution, member force, stress concentration
Procedia PDF Downloads 307
16307 Method of Cluster Based Cross-Domain Knowledge Acquisition for Biologically Inspired Design
Authors: Shen Jian, Hu Jie, Ma Jin, Peng Ying Hong, Fang Yi, Liu Wen Hai
Abstract:
Biologically inspired design inspires inventions and new technologies in engineering by mimicking functions, principles, and structures from the biological domain. To overcome the obstacles to cross-domain knowledge acquisition in the existing biologically inspired design process, two techniques are proposed: functional semantic clustering, based on functional-feature semantic correlation, and environmental constraint clustering composition, based on the constraining adaptability of environmental characteristics. A knowledge cell clustering algorithm and a corresponding prototype system are developed. Finally, the effectiveness of the method is verified through the design of a visual prosthetic device.
Keywords: knowledge clustering, knowledge acquisition, knowledge based engineering, knowledge cell, biologically inspired design
Procedia PDF Downloads 426
16306 Farmers’ Use of Indigenous Knowledge System (IKS) for Selected Arable Crops Production in Ondo State
Authors: A. M. Omoare, E. O. Fakoya
Abstract:
This study sought to determine the use of indigenous knowledge in the production of selected arable crops in Ondo State. A multistage sampling method was used, and 112 arable crop farmers were systematically selected. Data were analyzed using both descriptive and inferential statistics. The results showed that the majority of the sampled farmers were male (75.90%), and about 75% were married with children. A large proportion (62.61%) were aged 30-49 years, and most had spent about 10 years in farming (58.92%). The highest raw scores for use of indigenous knowledge were found for planting on mounds in yam production, use of native medicine and the scarecrow method for controlling birds in rice production, timely planting of locally developed resistant varieties in cassava production, and soaking maize seeds in water to determine their viability, with raw scores of 313, 310, 305, 303, and 300 respectively, while the lowest raw score, 210, was obtained for the bell method of controlling birds in rice production. The findings established that proverbs (59.8%) and taboos (55.36%) were the media most commonly used by arable crop farmers to transmit indigenous knowledge. The multiple regression analysis revealed that the farmers' age and farming experience had a significant relationship with their use of indigenous knowledge, giving R² = 0.83 for the semi-log functional form, which was the lead equation. The policy implication is that indigenous knowledge should provide a basis for designing modern technologies to enhance sustainable agricultural development.
Keywords: arable crop production, extent of use, indigenous knowledge, farming experience
Procedia PDF Downloads 571
16305 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Owing to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to six-fold increase in their risk of developing breast cancer, so studies have sought to quantify mammographic breast density accurately. In clinical routine, radiologists evaluate images using the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method of measuring breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes automatic quantification of mammographic breast density difficult. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from the São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm for pre-processing the images was developed on the Matlab® platform. It uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique is applied to remove non-biological information from the image. The Hough transform is then applied to find the boundary of the pectoral muscle, followed by an active contour method, whose seed is placed on the boundary found by the Hough transform.
An experienced radiologist also performed the pectoral muscle segmentation manually. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison yielded a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared the two methods with respect to the area (mm²) of the segmented pectoral muscle, and the data fell within the 95% confidence interval, supporting the accuracy of the segmentation relative to the manual method. The method thus proved accurate and robust, segmenting rapidly and free of intra- and inter-observer variability. It is concluded that the proposed method can be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for subsequent quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, Hough transform, pectoral muscle
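The Jaccard index used above to compare the automatic and manual masks is straightforward to compute. A minimal numpy sketch follows; the toy masks are invented stand-ins for the two segmentations, not data from the study.

```python
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(a, b).sum() / union

# Toy masks standing in for automatic vs. manual pectoral-muscle segmentations.
auto = np.zeros((10, 10), dtype=bool)
auto[:5, :5] = True
manual = np.zeros((10, 10), dtype=bool)
manual[:5, :4] = True
print(round(jaccard(auto, manual), 2))  # 20/25 = 0.8
```

A coefficient above 0.9, as reported in the abstract, would mean the two masks overlap on more than 90% of their combined area.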
Procedia PDF Downloads 350
16304 Earnings vs Cash Flows: The Valuation Perspective
Authors: Megha Agarwal
Abstract:
This paper compares earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), Abnormal Earnings Growth Model (AEGM), Residual Operating Income Method (ReOIM), Abnormal Operating Income Growth Model (AOIGM) and its extensions, and multipliers such as the price/earnings and price/book value ratios, or on cash flows, such as the Dividend Valuation Method (DVM) and Free Cash Flow Method (FCFM), all provide different estimates of the value of the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for the four financial years 2008-09 to 2011-12 was conducted. A comparison of these valuation estimates with the company's actual market capitalization shows that the complex accounting-based model AOIGM provides the closest forecasts. The differences between estimates may arise from inconsistencies in the discount rate, growth rates, and the other forecasted variables. Although the inputs for earnings-based models are available to investors and analysts through published statements, free cash flows may be estimated more precisely by internal management. Valuation from more stable parameters, such as residual operating income and RNOA, could be considered superior to valuation from the more volatile return on equity.
Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)
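The simplest of the models named above, the Residual Earnings Model, can be sketched in a few lines: value equals current book value plus the present value of earnings in excess of the required return on book value. This is an illustrative simplification (clean-surplus accounting, no dividends, no terminal value assumed), not the paper's full implementation.

```python
def residual_earnings_value(book0, earnings, required_return):
    """Residual Earnings Model: V0 = B0 + sum_t RE_t / (1 + r)^t,
    where RE_t = E_t - r * B_{t-1} is the residual earnings for period t.
    Illustrative only: clean-surplus accounting, no dividends, no terminal value."""
    value = book0
    book = book0
    for t, e in enumerate(earnings, start=1):
        re = e - required_return * book          # earnings in excess of the required return
        value += re / (1 + required_return) ** t
        book += e                                 # book value rolls forward (no payout assumed)
    return value

# If earnings exactly equal the required return on book value, residual earnings
# are zero and the value collapses to book value.
print(residual_earnings_value(100.0, [10.0], 0.10))  # -> 100.0
```

The cash-flow models in the comparison discount dividends or free cash flows instead of residual earnings, but share the same present-value mechanics.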
Procedia PDF Downloads 376
16303 Improvement of Soft Clay Soil with Biopolymer
Authors: Majid Bagherinia
Abstract:
Lime and cement are frequently used as binders in the Deep Mixing Method (DMM) to improve soft clay soils. The most significant disadvantages of these materials are carbon dioxide emissions and the consumption of natural resources. In this study, three different biopolymers, guar gum, locust bean gum, and sodium alginate, were investigated for the improvement of soft clay using DMM. In the experimental study, the effects of the additive ratio and curing time on the Unconfined Compressive Strength (UCS) of stabilized specimens were investigated. According to the results, the UCS values of the specimens increased as the additive ratio and curing time increased. The most effective additive was sodium alginate, and the highest strength was obtained after 28 days.
Keywords: deep mixing method, soft clays, ground improvement, biopolymers, unconfined compressive strength
Procedia PDF Downloads 80
16302 Degradation of Neonicotinoid Insecticides (Acetamiprid and Imidacloprid) Using Biochar of Rice Husk and Fruit Peels
Authors: Mateen Abbas, Abdul Muqeet Khan, Sadia Bashir, Muhammad Awais Khalid, Aamir Ghafoor, Zara Hussain, Mashal Shahid
Abstract:
The irrational use of insecticides in everyday life has drawn worldwide attention to their harmful effects. To mitigate the toxic effects of insecticides on humans, the present study examined the degradation/detoxification of the neonicotinoid insecticides imidacloprid and acetamiprid. Biocarbon from fruit peels (banana and watermelon) and biochar (activated or non-activated) from rice husk were used as adsorbents for the degradation of the selected pesticides. Both activated and non-activated biochar were prepared and then applied to the insecticides (acetamiprid and imidacloprid) at different concentrations (0.5 to 2.0 ppm) and dosages (1.0 to 2.5 g), and studied at different contact times (30-120 minutes). Reverse-phase high-performance liquid chromatography (RP-HPLC) coupled with a photodiode array detector was used to quantify the insecticides. The results showed that activated rice husk biochar reduced the concentrations of both insecticides by 73%, while activated watermelon biocarbon degraded 72% of the imidacloprid and 56% of the acetamiprid. The results demonstrated the efficiency of the method employed, and it was also inferred that higher biocarbon concentrations resulted in a larger percentage of degradation. The method is cheap, easy, and accessible, and can be used to minimize pesticide residues in animal feed. Degradation using biochar proved to be a significant, eco-friendly, and economical way to reduce the toxicity of insecticides.
Keywords: insecticides, acetamiprid, imidacloprid, biochar, HPLC
Procedia PDF Downloads 153
16301 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling
Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon
Abstract:
This paper discusses the effects of surface roughness on a cantilever-beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon by deep reactive ion etching (DRIE) to large etch depths, rougher surfaces are observed as a result of the increased response in process pressure, coil power, and helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 µm of silicon, the root-mean-square roughness parameter Sq varies by 1 to 3 µm (at 100 µm thickness) across a 6-inch wafer. Given this variation in Sq, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while having no significant effect on the output power. The significant shift in resonant frequency implies that surface roughness arising from fabrication processes must be considered carefully when designing energy harvesters.
Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester
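Why a micron-scale thickness change shifts the resonance can be seen from the textbook Euler-Bernoulli formula for a uniform cantilever, where the first natural frequency scales linearly with beam thickness. The sketch below uses assumed silicon dimensions chosen for illustration, not the paper's actual harvester geometry.

```python
import math

def cantilever_f1(E, I, rho, A, L):
    """First natural frequency (Hz) of a uniform cantilever, Euler-Bernoulli theory:
    f1 = (lambda1^2 / 2 pi) * sqrt(E I / (rho A L^4))."""
    lam1 = 1.8751040687  # first root of 1 + cos(x) cosh(x) = 0
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

# Illustrative silicon microcantilever (assumed dimensions, not from the paper):
E = 169e9        # Pa, Young's modulus of silicon
rho = 2330.0     # kg/m^3, density of silicon
w, t, L = 500e-6, 100e-6, 5e-3   # width, thickness, length in metres
f_nominal = cantilever_f1(E, w * t ** 3 / 12, rho, w * t, L)

# A roughness-induced 2 um thinning shifts the frequency, since f1 is proportional to t:
t2 = t - 2e-6
f_thinner = cantilever_f1(E, w * t2 ** 3 / 12, rho, w * t2, L)
print(f_nominal - f_thinner)  # positive shift: the beam resonates lower as it thins
```

Since I/A = t²/12, a 2% thickness change gives exactly a 2% frequency change in this idealized model; the FEM study resolves the same effect for realistic roughness profiles.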
Procedia PDF Downloads 121
16300 Structural and Optical Characterization of Silica@PbS Core–Shell Nanoparticles
Authors: A. Pourahmad, Sh. Gharipour
Abstract:
The present work describes the preparation and characterization of nanosized SiO₂@PbS core-shell particles by a simple wet chemical route, in which silica sphere formation is followed by a successive ionic layer adsorption and reaction (SILAR) step to form the lead sulphide shell layer. The final product was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-vis spectroscopy, infrared spectroscopy (IR), and transmission electron microscopy (TEM). The morphological studies revealed a uniform size distribution with a core size of 250 nm and a shell thickness of 18 nm. The electron microscopy images also indicate the irregular morphology of the lead sulphide shell layer. The structural studies indicate a face-centered cubic PbS shell with no trace of impurities in the crystal structure.
Keywords: core-shell, nanostructure, semiconductor, optical property, XRD
Procedia PDF Downloads 299
16299 Theoretical Analysis of Photoassisted Field Emission near the Metal Surface Using Transfer Hamiltonian Method
Authors: Rosangliana Chawngthu, Ramkumar K. Thapa
Abstract:
A model calculation of the photoassisted field emission current (PFEC) using the transfer Hamiltonian method is presented here. Photons are incident on the surface of a metal such that the photon energy is less than the work function of the metal under investigation. The incident radiation photoexcites electrons to a final state that lies below the vacuum level, so the electrons remain confined within the metal surface. A strong static electric field is then applied to the surface, causing the photoexcited electrons to tunnel through the surface potential barrier into the vacuum region; this constitutes a considerable current, called the photoassisted field emission current. The incident radiation, usually a laser beam, causes the transition of electrons from the initial state to the final state, and the matrix element for this transition is written down. The initial state wavefunction is calculated using the Kronig-Penney potential model, and the effect of the matrix element is also studied. An appropriate dielectric model for the surface region of the metal is used to evaluate the vector potential. A FORTRAN programme is used for the calculation of the PFEC, and the theoretical results will be checked against experimental data.
Keywords: photoassisted field emission, transfer Hamiltonian, vector potential, wavefunction
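The Kronig-Penney model mentioned for the initial-state wavefunction yields the standard dispersion relation cos(ka) = cos(αa) + P sin(αa)/(αa) in the delta-barrier limit; allowed energy bands are the ranges of αa where the right-hand side lies within [-1, 1]. The sketch below scans that condition numerically (the barrier strength P = 3 is an arbitrary illustrative choice, unrelated to the paper's parameters).

```python
import math

def kp_rhs(alpha_a, P=3.0):
    """Right-hand side of the Kronig-Penney dispersion relation
    cos(k a) = cos(alpha a) + P sin(alpha a) / (alpha a); P is the barrier strength."""
    return math.cos(alpha_a) + P * math.sin(alpha_a) / alpha_a

# Scan alpha*a and keep the points where |RHS| <= 1, i.e. where a real Bloch
# wavevector k exists (an allowed band). Near alpha*a -> 0 the RHS tends to
# 1 + P > 1, so the lowest energies are forbidden.
allowed = [x / 100 for x in range(1, 1001) if abs(kp_rhs(x / 100)) <= 1.0]
print(allowed[0], allowed[-1])  # edges of the scanned allowed region
```

In the PFEC calculation, the Bloch states found this way serve as the initial states entering the transfer-Hamiltonian matrix element.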
Procedia PDF Downloads 226
16298 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling
Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud
Abstract:
Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of existing studies is low due to methodological limitations: lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can separate clinical effects of the targeted experimental procedure from its placebo effect is needed to enhance the validity of trials. This study therefore aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned to three groups by a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) served as the control group. The G1 and G2 subjects underwent the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time and group effects. Data on the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance during the advancement of DN (p < 0.128). G1 showed significantly greater pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks (both p < 0.05), whereas there was no significant difference between G2 and G3 at 8 weeks (both p > 0.05). Conclusion: The method of turning off the US monitor during the application of DN is credible for blinding participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the effects of US-guided DN, with short-term pain reduction for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy
Procedia PDF Downloads 120
16297 A Study on the Performance of 2-PC-D Classification Model
Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli
Abstract:
The principal component method has many applications for reducing large sets of variables in various fields, and Fisher's discriminant function is a popular tool for classification. This research studies the performance of a combined Principal Component-Fisher's Discriminant function in classifying rice kernels into their defined classes. The data were collected on the smell or odour of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model of principal components and a linear discriminant performs as a classification model. The principal component method was used to reduce the 32 variables to a smaller, manageable set of components, which were then used to develop the Fisher's discriminant function. There are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary, and Others. The principal component method reduced the 32 variables to only 2 components. Based on the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-Discriminant function; in other words, the classification model misclassified more than 50% of the observations. In conclusion, the Fisher's discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
Keywords: classification model, discriminant function, principal component analysis, variable reduction
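The PCA-then-Fisher pipeline described above can be sketched end to end in numpy. For clarity the sketch uses two invented, well-separated classes and six variables rather than the four rice classes and 32 e-nose variables; the mechanics (project onto leading components, then find the Fisher direction on the reduced scores) are the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for e-nose data: 2 classes, 6 sensor variables, 50 samples each.
X0 = rng.normal(0.0, 1.0, (50, 6))
X1 = rng.normal(2.0, 1.0, (50, 6))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Step 1: PCA via SVD of the centred data; keep the top 2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                      # scores on the first two components

# Step 2: two-class Fisher discriminant on the reduced components.
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                 # Fisher direction Sw^{-1}(m1 - m0)
threshold = w @ (m0 + m1) / 2                    # midpoint between projected means

pred = (Z @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

On these well-separated toy classes the 2-component model classifies almost perfectly; the paper's 40.76% result shows that for the real e-nose data, two components simply did not retain enough class-discriminating information.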
Procedia PDF Downloads 333
16296 The Effect of Body Positioning on Upper-Limb Arterial Occlusion Pressure and the Reliability of the Method during Blood Flow Restriction Training
Authors: Stefanos Karanasios, Charkleia Koutri, Maria Moutzouri, Sofia A. Xergia, Vasiliki Sakellari, George Gioftsos
Abstract:
The precise calculation of arterial occlusion pressure (AOP) is a critical step in accurately prescribing individualized pressures during blood flow restriction training (BFRT). AOP is usually measured in a supine position before training; however, previous reports have suggested a significant influence of body position on lower-limb AOP. The aim of this study was to investigate the effect of three different body positions on upper-limb AOP and the reliability of the method, for its standardization in clinical practice. Forty-two healthy participants (mean age: 28.1, SD: ±7.7) underwent measurements of upper-limb AOP in supine, seated, and standing positions by three blinded raters, using a cuff with a manual pump and a pocket Doppler ultrasound. A significantly higher upper-limb AOP was found in the seated compared with the supine position (p < 0.031) and in the supine compared with the standing position (p < 0.031) by all raters. An excellent intraclass correlation coefficient (0.858-0.984, p < 0.001) was found in all positions. Upper-limb AOP is strongly dependent on body position; the appropriate measurement position should therefore be selected to calculate AOP accurately before BFRT. The excellent inter-rater reliability and repeatability of the method suggest reliable and consistent results across repeated measurements.
Keywords: Kaatsu training, blood flow restriction training, arterial occlusion, reliability
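The intraclass correlation coefficient reported above can be computed from a two-way ANOVA decomposition. A minimal sketch of the consistency form ICC(3,1) follows (the abstract does not state which ICC form was used, so this form and the toy ratings are assumptions for illustration).

```python
import numpy as np

def icc_3_1(ratings):
    """Consistency ICC(3,1) for an n x k matrix (n subjects, same k raters):
    (MS_subjects - MS_error) / (MS_subjects + (k - 1) * MS_error)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_rater = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Toy AOP readings (mmHg): three raters in perfect agreement up to constant offsets.
base = np.array([120.0, 135.0, 150.0, 142.0, 128.0])
ratings = np.column_stack([base, base + 2.0, base - 1.0])
print(round(icc_3_1(ratings), 3))  # -> 1.0: consistency is perfect up to offsets
```

Values in the study's 0.858 to 0.984 range correspond to raters whose readings track each other closely relative to the between-subject spread.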
Procedia PDF Downloads 213
16295 High-Yield Synthesis of Nanohybrid Shish-Kebab of Polyethylene on Carbon NanoFillers
Authors: Dilip Depan, Austin Simoneaux, William Chirdon, Ahmed Khattab
Abstract:
In this study, we present a novel approach to synthesizing polymer nanocomposites with a nanohybrid shish-kebab (NHSK) architecture. For this, low-density and high-density polyethylene (PE) were crystallized on various carbon nanofillers using a novel and convenient method to prepare high-yield NHSK. Polymer crystals grew epitaxially on the carbon nanofillers in a solution crystallization process, and the mixture of polymer and carbon fillers in xylene was flocculated and precipitated in ethanol to improve the product yield. Carbon nanofillers of varying diameter were used as nucleating templates for polymer crystallization. The morphology of the prepared nanocomposites was characterized by scanning electron microscopy (SEM), while differential scanning calorimetry (DSC) was used to quantify the amount of crystalline polymer. Interestingly, whatever the diameter of the carbon nanofiller, the PE lamellae are always perpendicular to the long axis of the nanofiller. Surface area analysis was performed using BET. Our results indicate that carbon nanofillers of varying diameter can effectively nucleate the crystallization of the polymer. The effects of the molecular weight and concentration of the polymer are discussed in terms of chain mobility and the crystallization capability of the polymer matrix. Our work shows a facile, rapid, yet high-yield production method for polymer nanocomposites that reveals the application potential of the NHSK architecture.
Keywords: carbon nanotubes, polyethylene, nanohybrid shish-kebab, crystallization, morphology
Procedia PDF Downloads 329
16294 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images
Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou
Abstract:
This study proposes a generative adversarial network (GAN) framework that performs synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to map images into feature space. The GAN framework then uses adversarial learning to train a feature generator for the minority class by playing a minimax game with a discriminator. The feature generator generates features for the minority class from arbitrary latent distributions to balance the data between the majority and minority classes. Additionally, a data cleaning technique, the Tomek link, is employed to remove undesirable conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for training. The experiments evaluate the proposed method on two medical image analysis tasks: mass classification on mammograms and cancer metastasis classification on histopathological images. The results suggest that the proposed method obtains superior or comparable performance to state-of-the-art counterparts, improving accuracy by more than 1.5 percentage points over all of them.
Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning
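The Tomek-link cleaning step is simple to state: a Tomek link is a pair of mutual nearest neighbours that belong to opposite classes, and such pairs mark conflicting points near the class boundary. A minimal numpy sketch on invented 2-D features (not the paper's GAN-generated features) follows.

```python
import numpy as np

def tomek_links(X, y):
    """Return Tomek-linked pairs: mutual nearest neighbours from opposite classes."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                                       # nearest neighbour of each point
    links = []
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j] and i < j:               # mutual and cross-class
            links.append((i, j))
    return links

# Toy feature space: two clean clusters plus one cross-class point sitting
# right next to a point of the other class, forming a Tomek link.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [0.02, 0.0]])
y = np.array([0, 0, 1, 1, 1])   # point 4 is a class-1 intruder in the class-0 cluster
links = tomek_links(X, y)
print(links)  # [(0, 4)]
```

In the paper's pipeline, removing one member of each link (typically the majority-class sample, or a conflicting synthetic feature) sharpens the boundary between the class clusters before the classifier is trained.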
Procedia PDF Downloads 127
16293 An Integrated Architecture of E-Learning System to Digitize the Learning Method
Authors: M. Touhidul Islam Sarker, Mohammod Abul Kashem
Abstract:
The purpose of this paper is to improve the e-learning system and digitize the learning method in the educational sector. E-learning can be defined as teaching and learning with the help of multimedia technologies and the internet, through access to digital content; it replaces the traditional education system with learning based on information and communication technology. The learner logs into the e-learning platform, easily accesses the digital content, which can be downloaded, and takes an assessment for evaluation; learners can reach these digital resources using a tablet, computer, or smartphone. This paper designs and implements an integrated e-learning system architecture combined with a University Management System. Moodle (Modular Object-Oriented Dynamic Learning Environment) is a leading e-learning system, but it lacks a school or university management system. In this research, school students were not considered because they lack internet facilities; university students were considered because they have internet access and use these technologies. The University Management System supports different types of activities such as student registration, account management, teacher information, semester registration, and staff information. Integrating these modules with Moodle overcomes its limitation and enhances the e-learning system architecture, making effective use of technology. This architecture allows the learner to access the resources of the e-learning platform anytime and anywhere, digitizing the learning method.
Keywords: database, e-learning, LMS, Moodle
Procedia PDF Downloads 188
16292 An Improved Two-Dimensional Ordered Statistical Constant False Alarm Detection
Authors: Weihao Wang, Zhulin Zong
Abstract:
Two-dimensional ordered-statistic constant false alarm rate (CFAR) detection is a widely used method for detecting weak target signals in radar signal processing. The method analyzes the statistical characteristics of the noise and clutter present in the radar signal and uses this information to set an appropriate detection threshold. The reference cells around the cell under test are divided into several reference subunits, which are used to estimate the noise level and adjust the detection threshold with the aim of maintaining a constant, low false alarm rate. The detection process involves several steps: filtering the input radar signal to remove noise and clutter, estimating the noise level from the statistical characteristics of the reference subunits, and setting the detection threshold based on the estimated noise level. The main advantage of two-dimensional ordered-statistic CFAR detection is its ability to detect weak target signals in the presence of strong clutter and noise, which it achieves by using the ordered statistics of the reference subunits to estimate the noise level: an order statistic is robust against interfering targets or clutter edges in the reference window, so the threshold stays stable where a cell-averaging estimate would be biased.
Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals
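A one-dimensional version of the ordered-statistic CFAR loop captures the core idea and can be sketched compactly; extending it to two dimensions means taking the reference cells from a 2-D window around the cell under test. The window sizes, order index k, and scale factor below are illustrative choices, not values from the paper.

```python
import numpy as np

def os_cfar(x, guard=2, train=8, k=12, scale=8.0):
    """Ordered-statistic CFAR: threshold each cell under test by the k-th smallest
    value among its leading/lagging training cells (guard cells excluded)."""
    n = len(x)
    half = guard + train
    detections = []
    for i in range(half, n - half):
        lead = x[i - half : i - guard]              # training cells before the CUT
        lag = x[i + guard + 1 : i + half + 1]       # training cells after the CUT
        ref = np.sort(np.concatenate([lead, lag]))  # 2 * train ordered reference samples
        threshold = scale * ref[k - 1]              # k-th order statistic sets the noise level
        if x[i] > threshold:
            detections.append(i)
    return detections

# Square-law detected noise (exponentially distributed) with two strong targets.
rng = np.random.default_rng(1)
signal = rng.exponential(1.0, 200)
signal[100] += 30.0
signal[150] += 30.0
detections = os_cfar(signal)
print(detections)
```

Because the threshold depends on a middle order statistic rather than the mean, a strong target leaking into a neighbour's reference window barely perturbs the estimate, which is the robustness property the abstract describes.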
Procedia PDF Downloads 78
16291 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks, Croatia among them. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false public image was created that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. In recent years, machine learning has gained wider application in various fields, as it has proven to give good results where large amounts of data must be processed to recognize patterns and correlations among the relevant parameters, and where the problem is too complex for a human to solve. One machine learning method, the decision tree, has demonstrated an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of installing heat cost allocators in a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest method
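The core of a decision tree for this kind of question is a sequence of variance-reducing splits. A single-split tree (a decision stump) on invented consumption data shows the mechanism; the feature, values, and savings figures below are entirely hypothetical, not results from the study.

```python
import numpy as np

def best_stump(x, y):
    """Single-split regression tree: choose the threshold on x that minimises the
    summed squared error of the two leaf means (the CART split criterion)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, np.inf, ys.mean(), ys.mean())
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((xs[i - 1] + xs[i]) / 2, sse, left.mean(), right.mean())
    return best  # (threshold, sse, left_leaf_mean, right_leaf_mean)

# Hypothetical units: annual consumption with and without an allocator installed.
installed = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
consumption = np.array([110.0, 118.0, 105.0, 112.0, 80.0, 84.0, 78.0, 82.0])
threshold, sse, mean_without, mean_with = best_stump(installed, consumption)
print(threshold, mean_without - mean_with)  # split at 0.5; leaf-mean difference
```

A full decision tree or random forest repeats this split search recursively over many features (occupancy, building envelope, weather), which is how it can isolate the allocator's contribution from confounding variables.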
Procedia PDF Downloads 24916290 Frequency Modulation in Vibro-Acoustic Modulation Method
Authors: D. Liu, D. M. Donskoy
Abstract:
The vibro-acoustic modulation method is based on the modulation of a high-frequency ultrasonic wave (the carrier) by low-frequency vibration in the presence of various defects, primarily contact-type defects such as cracks, delaminations, etc. The presence and severity of a defect are measured by the ratio of the spectral sidebands to the carrier in the spectrum of the modulated signal. This approach, however, does not differentiate between amplitude and frequency modulation, AM and FM, respectively. It has been shown experimentally that both modulations can be present in the spectrum, yet each may be associated with different physical mechanisms. AM mechanisms are quite well understood and widely covered in the literature. This paper is a first attempt to explain the generation mechanisms of FM and its correlation with flaw properties. We propose two possible mechanisms leading to FM, based on nonlinear local defect resonance and dynamic acousto-elastic models. Keywords: non-destructive testing, nonlinear acoustics, structural health monitoring, acousto-elasticity, local defect resonance
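The sideband-to-carrier ratio used as the damage indicator can be illustrated on a synthetic AM signal (all signal parameters below are made up; for pure AM the ratio equals half the modulation index):

```python
import numpy as np

fs = 100_000                     # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
fc, fm = 20_000, 100             # ultrasonic carrier and low-frequency vibration
m = 0.1                          # AM modulation index
s = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# One-sided amplitude spectrum; 1 s of data gives 1 Hz bins, so the carrier
# and sidebands fall exactly on bins (no leakage).
spec = np.abs(np.fft.rfft(s)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
carrier = spec[np.argmin(np.abs(freqs - fc))]
sideband = spec[np.argmin(np.abs(freqs - (fc + fm)))]
ratio = sideband / carrier       # for pure AM this equals m/2
print(round(ratio, 3))
```

Distinguishing AM from FM requires more than this ratio, e.g. demodulating the instantaneous frequency, which is precisely the gap the paper addresses.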
Procedia PDF Downloads 15316289 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are highly complex process industries with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages before the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of the distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut points of the distillates in the crude distillation unit are crucial for the efficiency of the downstream processes. To maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named according to the number of carbon atoms their hydrocarbons contain: LSRN consists of hydrocarbons with five to six carbons, HSRN with six to ten, and kerosene with sixteen to twenty-two. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each combination of multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy. Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
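Savitzky-Golay smoothing, one of the two preprocessing techniques applied to the FT-NIR spectra, can be sketched from first principles: fit a low-order polynomial in a sliding window by least squares and evaluate it at the window center. The synthetic "spectrum" below is illustrative, not refinery data:

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Least-squares Savitzky-Golay smoothing coefficients (central point)."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    A = np.vander(offsets, polyorder + 1, increasing=True)
    # Row 0 of the pseudo-inverse evaluates the fitted polynomial at offset 0.
    return np.linalg.pinv(A)[0]

# Hypothetical noisy spectrum: one smooth absorption band plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
clean = np.exp(-((x - 0.5) / 0.1) ** 2)
noisy = clean + rng.normal(0, 0.05, x.size)

c = savgol_coeffs(11, 2)
smoothed = np.convolve(noisy, c[::-1], mode="same")

# Smoothing should cut the noise level without distorting the band
# (edges excluded, where the "same" convolution is truncated).
err_noisy = np.abs(noisy - clean)[20:-20].mean()
err_smooth = np.abs(smoothed - clean)[20:-20].mean()
print(err_smooth < err_noisy)
```

In practice the SG filter is also used to take smoothed derivatives, which is one common way to remove the baseline shifts the abstract mentions.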
Procedia PDF Downloads 13116288 Geospatial Land Suitability Modeling for Biofuel Crop Using AHP
Authors: Naruemon Phongaksorn
Abstract:
Biofuel consumption has increased significantly over the past decade, increasing the demand for agricultural land for biofuel feedstocks. However, biofuel feedstocks already suffer from low productivity owing to inappropriate agricultural practices that disregard the suitability of the cropland. This research evaluates land suitability using a GIS-integrated Analytic Hierarchy Process (AHP) for a biofuel crop, cassava, in Chachoengsao province, Thailand. AHP is a method that has been widely accepted for land use planning. The objective of this study is to compare the AHP method with the most-limiting-group-of-land-characteristics method (the classical approach). The reliability of the land evaluation results was tested against the crop performance assessed by field investigation in 2015. In addition to the socio-economic land suitability, the expected availability of raw materials for biofuel production to meet local biofuel demand is also estimated. The results showed that the AHP could classify and map the physical land suitability with 10% higher overall accuracy than the classical approach. Chachoengsao province showed high and moderate socio-economic land suitability for cassava. Conditions in Chachoengsao province were also favorable for cassava plantation, as the expected raw material available to support ethanol production matched the ethanol plant capacity of the province. GIS-integrated AHP for biofuel crop land suitability evaluation appears to be a practical way of sustainably meeting biofuel production demand. Keywords: Analytic Hierarchy Process (AHP), cassava, geographic information systems, land suitability
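The AHP weighting underlying such suitability scores can be sketched as follows: derive criterion weights from the principal eigenvector of a pairwise-comparison matrix and check judgment consistency. The three criteria and the comparison values below are illustrative, not the study's actual ones:

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (hypothetical judgments).
# Criteria order: soil texture, rainfall, slope.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # priority (weight) vector

lam = eigvals[k].real             # principal eigenvalue, lambda_max
n = A.shape[0]
CI = (lam - n) / (n - 1)          # consistency index
CR = CI / 0.58                    # random index RI = 0.58 for n = 3
print(np.round(w, 3), round(CR, 3))   # CR < 0.1 means acceptable consistency
```

In the GIS integration, each map cell's suitability score is the weighted sum of its standardized criterion values using `w`.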
Procedia PDF Downloads 20116287 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure
Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik
Abstract:
The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper studies three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases, horizontal and vertical temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration per time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations remain within the incompressible regime. Different parameters such as velocities, temperature, and Nusselt number are calculated for a comparative study with the existing literature.
The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems. Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT
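The SRT/BGK collision at the heart of test case 1 can be sketched at a single D2Q9 node; the perturbed populations and the relaxation time below are illustrative. The key property shown is that BGK collision conserves mass and momentum exactly:

```python
import numpy as np

# D2Q9 lattice velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.8                        # single relaxation time (illustrative)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium for D2Q9."""
    cu = c @ u                   # c_i . u for each direction
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

# One BGK collision at a single node with a non-equilibrium state.
f = np.full(9, 0.1)
f[1] += 0.02                     # perturb one population
rho = f.sum()
u = (f[:, None] * c).sum(0) / rho
f_post = f - (f - equilibrium(rho, u)) / tau

# Collision invariants: density and momentum are unchanged.
print(np.isclose(f_post.sum(), rho),
      np.allclose((f_post[:, None] * c).sum(0), rho * u))
```

A full solver alternates this collision with a streaming step that shifts each population along its lattice velocity.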
Procedia PDF Downloads 12816286 Frailty and Quality of Life among Older Adults: A Study of Six LMICs Using SAGE Data
Authors: Mamta Jat
Abstract:
Background: Increased longevity has raised the percentage of the global population aged 60 years or over. Alongside this "demographic transition" towards ageing, an "epidemiologic transition" is taking place, characterised by a growing share of non-communicable diseases in the overall disease burden. Many older adults are therefore ageing with chronic disease and high levels of frailty, which often results in lower quality of life. Although frailty may be increasingly common in older adults, preventing, or at least delaying, the onset of late-life adverse health outcomes and disability is necessary to maintain the health and functional status of the ageing population. This study uses SAGE data to assess levels of frailty, its socio-demographic correlates, and its relation to quality of life in the LMICs of India, China, Ghana, Mexico, Russia and South Africa in a comparative perspective. Methods: The data come from the multi-country Study on Global AGEing and Adult Health (SAGE), which consists of nationally representative samples of older adults in six low- and middle-income countries (LMICs): China, Ghana, India, Mexico, the Russian Federation and South Africa. For the purpose of this study, only respondents aged 50+ years are considered. A logistic regression model has been used to assess the correlates of frailty. Multinomial logistic regression has been used to study the effect of frailty on quality of life (QOL), controlling for socio-economic and demographic correlates. Results: Among all the countries, India has the highest mean frailty in males (0.22) and females (0.26), and China the lowest in males (0.12) and females (0.14). The odds of being frail increase with age across all the countries. In India, China and Russia the chances of frailty are higher among rural older adults, whereas in Ghana, South Africa and Mexico rural residence protects against frailty.
Among all countries, China has the highest percentage (71.46%) of frail people with low QOL, whereas Mexico has the lowest (36.13%). The risk of having low or middle QOL is significantly (p<0.001) higher among frail elderly than among non-frail elderly across all countries, controlling for socio-demographic correlates. Conclusion: Women and older age groups have higher frailty levels than men and younger adults in LMICs. The mean frailty scores demonstrate a strong inverse relationship with education and income gradients, with lower levels of education and wealth showing higher levels of frailty. These patterns are consistent across all LMICs. These data support a significant role of frailty, with all other influences controlled, in low QOL as measured by the WHOQOL index. Future research needs to build on this evolving concept of frailty in an effort to improve quality of life for the frail elderly population in LMIC settings. Keywords: ageing, elderly, frailty, quality of life
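Frailty in SAGE-based studies is commonly measured as a deficit-accumulation index (the share of measured health deficits present, giving scores like the 0.12-0.26 means above). A minimal sketch, where the deficit list is a small illustrative subset and the 0.25 cut-off is one common convention rather than necessarily the study's:

```python
# Deficit-accumulation (Rockwood-style) frailty index.
def frailty_index(deficits):
    """Share of measured deficits present: 0 = robust, 1 = fully frail."""
    scored = [v for v in deficits.values() if v is not None]  # skip missing
    return sum(scored) / len(scored)

person = {                        # 1 = deficit present, 0 = absent (toy data)
    "difficulty_walking":     1,
    "chronic_disease":        1,
    "low_grip_strength":      0,
    "self_rated_health_poor": 1,
    "weight_loss":            0,
    "exhaustion":             0,
    "low_activity":           1,
    "cognition_impaired":     0,
}
fi = frailty_index(person)
print(fi)                         # 4 of 8 deficits present -> 0.5
frail = fi >= 0.25                # a commonly used dichotomization cut-off
```

Real indices use 30-40 deficits; the dichotomized `frail` flag is what enters a logistic regression on socio-demographic correlates.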
Procedia PDF Downloads 28816285 Investigation on Cost Reflective Network Pricing and Modified Cost Reflective Network Pricing Methods for Transmission Service Charges
Authors: K. Iskandar, N. H. Radzi, R. Aziz, M. S. Kamaruddin, M. N. Abdullah, S. A. Jumaat
Abstract:
Nowadays, many developing countries are undergoing a restructuring process in the electric power industry. This process has involved disaggregating former state-owned monopoly utilities both vertically and horizontally and has introduced competition. Such a restructuring was implemented by the Australian National Electricity Market (NEM), which started on 13 December 1998 and began operating as a wholesale market for the supply of electricity to retailers and end-users in Queensland, New South Wales, the Australian Capital Territory, Victoria and South Australia. In this deregulated market, one important issue is transmission pricing. Transmission pricing is a service that recovers the existing and new costs of the transmission system. Regulation of transmission pricing is important in determining whether the transmission service system is economically beneficial to both users and utilities. Therefore, an efficient transmission pricing methodology plays an important role in the Australian NEM. In this paper, the transmission pricing methodologies that have been implemented by the Australian NEM, the Cost Reflective Network Pricing (CRNP) and Modified Cost Reflective Network Pricing (MCRNP) methods, are investigated for allocating transmission service charges to transmission users. A case study using a 6-bus system is used to identify the method that best reflects fair and equitable transmission service charges. Keywords: cost-reflective network pricing method, modified cost-reflective network pricing method, restructuring process, transmission pricing
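Cost-reflective methods allocate each line's cost according to how much of its capacity each user's power flows occupy, as determined by load-flow studies. A simplified pro-rata, usage-based allocation in that spirit (not the exact CRNP formulation; all flows and costs below are hypothetical) can be sketched as:

```python
# Each user's flow contribution (MW) on each transmission line,
# as would come from a load-flow study (hypothetical numbers).
flows = {
    "userA": {"line1": 30.0, "line2": 10.0},
    "userB": {"line1": 10.0, "line2": 40.0},
}
line_cost = {"line1": 120_000.0, "line2": 80_000.0}   # annual cost, $

charges = {}
for user, use in flows.items():
    total = 0.0
    for line, mw in use.items():
        line_total = sum(f[line] for f in flows.values())
        total += line_cost[line] * mw / line_total    # pro-rata by usage
    charges[user] = total
print(charges)   # each line's cost is split in proportion to usage
```

By construction the charges sum to the total network cost, which is the cost-recovery property a transmission pricing method must satisfy.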
Procedia PDF Downloads 44516284 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm
Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim
Abstract:
Market Kurly is the first South Korean online grocery retailer to guarantee same-day and overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products to a cart per month. The company has sold almost 30,000 different products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids and pets, and even flowers. The company operates and is expanding multiple dry, cold, and frozen fulfillment centers to store and ship these products. Due to the scale and complexity of the fulfillment operation, pick-pack-ship processes are planned and operated in batches; thus, the planning that decides the batching of customers' orders is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to creating the initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company's fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce the operation time for packing, the most complex and time-consuming task in the process, by 3%. The optimization method implements a multithreading design on the Spring framework to support the company's warehouse management systems in near real time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance. Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization
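The batching objective can be sketched with a toy mutation-only genetic algorithm that assigns orders to batches so each batch touches few distinct products. The orders, parameters, and the simple (mu + lambda) strategy below are illustrative, not the production system described in the paper:

```python
import random

random.seed(42)
# Each order is the set of products it contains (toy data).
orders = [{"a", "b"}, {"a", "c"}, {"x", "y"}, {"x", "z"}, {"b", "c"}, {"y", "z"}]
N_BATCHES = 2

def cost(assign):
    """Total number of distinct products summed over all batches."""
    total = 0
    for b in range(N_BATCHES):
        items = set()
        for i, batch in enumerate(assign):
            if batch == b:
                items |= orders[i]
        total += len(items)
    return total

def mutate(assign):
    """Reassign one randomly chosen order to a random batch."""
    child = assign[:]
    child[random.randrange(len(child))] = random.randrange(N_BATCHES)
    return child

# (mu + lambda)-style evolution from random assignments, elitist selection.
pop = [[random.randrange(N_BATCHES) for _ in orders] for _ in range(20)]
for _ in range(200):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=cost)[:20]

best = pop[0]
print(cost(best))
```

On this tiny instance the optimum groups the {a,b,c}-orders apart from the {x,y,z}-orders; the real system adds heuristic gene creation, crossover, and multithreading to scale to thousands of orders.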
Procedia PDF Downloads 8316283 Lossless Secret Image Sharing Based on Integer Discrete Cosine Transform
Authors: Li Li, Ahmed A. Abd El-Latif, Aya El-Fatyany, Mohamed Amin
Abstract:
This paper proposes a new secret image sharing method based on the integer discrete cosine transform (IntDCT). It first transforms the original image into the frequency domain (DCT coefficients) using IntDCT, operating on blocks of size 8×8. It then generates shares among the DCT coefficients at the same position in each block; that is, all the DC components are used to generate the DC shares, the i-th AC component in each block is used to generate the i-th AC shares, and so on. The DC and AC share components with the same index are combined to generate the DCT shadows. Experimental results and analyses show that the proposed method recovers the original image losslessly, unlike methods based on the traditional DCT, and is more sensitive to tiny changes in both the coefficients and the content of the image. Keywords: secret image sharing, integer DCT, lossless recovery, sensitivity
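Per-coefficient share generation of this kind is typically Shamir-style polynomial sharing over a prime field. A sketch for a single hypothetical DC coefficient (the paper's exact sharing scheme may differ):

```python
P = 257                    # prime just above the 8-bit coefficient range
k, n = 2, 4                # (k, n) threshold: any 2 of 4 shares recover it

def make_shares(secret, rnd_coef):
    # Degree-(k-1) polynomial: f(x) = secret + rnd_coef * x  (mod P)
    return [(x, (secret + rnd_coef * x) % P) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 over GF(P).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P   # modular inverse
    return total

dc_coefficient = 135       # hypothetical DC coefficient of one 8x8 block
shares = make_shares(dc_coefficient, rnd_coef=77)
print(recover(shares[:2]))   # any k shares suffice -> 135
```

Doing this for every coefficient position across all blocks, then regrouping the share values by shadow index, yields the DCT shadows the abstract describes.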
Procedia PDF Downloads 39816282 Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method
Authors: Diana Gomez-Cano, J. C. Ochoa-Botero, Roberto Bernal Correa, Yhan Paul Arias
Abstract:
To obtain high mechanical performance, the fresh-state conditions of a mortar are decisive. Measuring the absorption of the aggregates used in mortar mixes is a fundamental requirement for the proper design of the mixes prior to their placement on construction sites. Absorption is a determining factor in mix design because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. Thus, this work focuses on the mechanical behavior of recycled mortars manufactured with moisture corrections obtained using the halogen light thermogravimetric balance (TBHL) technique, in comparison with the traditional ASTM C 128 standard method. The advantages of the TBHL technique lie in its reduced consumption of resources such as materials, energy, and time. The results show that, in contrast to the ASTM C 128 method, the alternative TBHL technique yields higher precision in the absorption values of recycled aggregates, which is reflected not only in a more sustainable and efficient characterization of construction materials but also in the mechanical performance of the recycled mortars. Keywords: alternative raw materials, halogen light, recycled mortar, resources optimization, water absorption
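The absorption value feeds directly into the mix-water correction, which is why its precision matters for the water/cement ratio. A sketch of that arithmetic with illustrative numbers (not the study's data):

```python
# Mix-water correction from aggregate absorption and surface moisture.
def corrected_water(design_water, agg_mass, absorption, surface_moisture):
    """Water to batch so the effective w/c ratio matches the design value.

    absorption and surface_moisture are mass fractions of the dry aggregate.
    """
    # Water the aggregate will absorb, minus free water it already carries.
    demand = agg_mass * (absorption - surface_moisture)
    return design_water + demand

w = corrected_water(design_water=180.0,    # kg per m^3 of mortar (illustrative)
                    agg_mass=1400.0,       # recycled aggregate, kg
                    absorption=0.065,      # 6.5%, plausible for recycled agg.
                    surface_moisture=0.02) # 2% free moisture
print(round(w, 1))   # 180 + 1400 * (0.065 - 0.02) = 243.0 kg
```

An absorption error of even one percentage point would shift the batched water by 14 kg here, directly altering the w/c ratio and hence porosity and strength.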
Procedia PDF Downloads 11416281 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform
Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid
Abstract:
Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method for abdominal organ segmentation from medical image data using mathematical morphology is presented. The proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a technique has been designed to suppress over-segmentation based on the mosaic image and on the computation of the watershed transform. The algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, watershed algorithm
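The marker-based watershed flooding at the core of this approach can be sketched with a priority queue: seeded labels flood outward from low gradient values and meet at high-gradient ridges. The toy gradient image and two seeds below are illustrative; the real pipeline adds anisotropic diffusion, morphological filtering, and the mosaic image:

```python
import heapq
import numpy as np

def watershed(gradient, markers):
    """Flood labels from markers, always expanding from the lowest gradient."""
    labels = markers.copy()
    h, w = gradient.shape
    heap = []
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (gradient[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]      # flood from lower ground
                heapq.heappush(heap, (gradient[ny, nx], ny, nx))
    return labels

# Two basins separated by a high-"gradient" ridge in the middle column.
grad = np.zeros((5, 5))
grad[:, 2] = 10.0
marks = np.zeros((5, 5), dtype=int)
marks[2, 0], marks[2, 4] = 1, 2                    # one seed per region
out = watershed(grad, marks)
print(out[2, 1], out[2, 3])   # left side labeled 1, right side labeled 2
```

Over-segmentation arises when every local minimum becomes a seed; computing the watershed on a smoothed gradient-mosaic image, as the paper does, is one standard way to suppress it.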
Procedia PDF Downloads 44116280 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method using t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. These numerical results prove that, compared with traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs. Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
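Once a radio map exists, the basic fingerprint-matching step is commonly weighted k-nearest neighbours in signal space. A sketch with a toy four-point radio map (the paper's GAN-constructed map and t-SNE features would replace this; all values are made up):

```python
import numpy as np

# RSSI fingerprints (dBm, one column per access point) at surveyed points.
radio_map = np.array([
    [-40., -70., -80.],
    [-70., -45., -75.],
    [-80., -72., -42.],
    [-60., -60., -70.],
])
positions = np.array([[0., 0.], [10., 0.], [10., 10.], [5., 2.]])  # metres

def locate(rss, k=2):
    """Weighted k-NN in signal space: average the k closest survey points."""
    d = np.linalg.norm(radio_map - rss, axis=1)      # fingerprint distances
    nearest = np.argsort(d)[:k]
    wgt = 1.0 / (d[nearest] + 1e-9)                  # inverse-distance weights
    return (positions[nearest] * wgt[:, None]).sum(0) / wgt.sum()

est = locate(np.array([-42., -68., -79.]))
print(np.round(est, 2))   # close to the first survey point at (0, 0)
```

The paper's contribution sits upstream of this step: generating a denser, cheaper radio map with the S-DCGAN so fewer physical survey points are needed.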
Procedia PDF Downloads 75