Search results for: soxhlet extraction method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19809

18759 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of the research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it reflects his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special, well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 237
18758 Solid Phase Micro-Extraction/Gas Chromatography-Mass Spectrometry Study of Volatile Compounds from Strawberry Tree and Autumn Heather Honeys

Authors: Marinos Xagoraris, Elisavet Lazarou, Eleftherios Alissandrakis, Christos S. Pappas, Petros A. Tarantilis

Abstract:

Strawberry tree (Arbutus unedo L.) and autumn heather (Erica manipuliflora Salisb.) are important beekeeping plants of Greece. Six monofloral honeys (four strawberry tree, two autumn heather) were analyzed by means of Solid Phase Micro-Extraction (SPME, 60 min, 60 °C) followed by Gas Chromatography coupled to Mass Spectrometry (GC-MS) for the purpose of assessing the botanical origin. A Divinylbenzene/Carboxen/Polydimethylsiloxane (DVB/CAR/PDMS) fiber was employed, and benzophenone was used as the internal standard. The volatile compounds with the highest concentrations (μg/g of honey expressed as benzophenone) in the strawberry tree honey samples were α-isophorone (2.50-8.12); 3,4,5-trimethyl-phenol (0.20-4.62); 2-hydroxy-isophorone (0.06-0.53); 4-oxoisophorone (0.38-0.46); and β-isophorone (0.02-0.43). Regarding the heather honey samples, the most abundant compounds were 1-methoxy-4-propyl-benzene (1.22-1.40); p-anisaldehyde (0.97-1.28); p-anisic acid (0.35-0.58); 2-furaldehyde (0.52-0.57); and benzaldehyde (0.41-0.56). Norisoprenoids are potent floral markers for strawberry-tree honey. β-isophorone is found exclusively in the volatile fraction of this type of honey, while α-isophorone, 4-oxoisophorone, and 2-hydroxy-isophorone could be considered additional marker compounds. The analysis of autumn heather honey revealed that phenolic compounds are the most abundant, and p-anisaldehyde, 1-methoxy-4-propyl-benzene, and p-anisic acid could serve as potent marker compounds. In conclusion, marker compounds for the determination of the botanical origin of these honeys could be identified, as several norisoprenoids and phenolic components were found exclusively or in higher concentrations compared to common Greek honey varieties.

Keywords: SPME/GC-MS, volatile compounds, heather honey, strawberry tree honey

Procedia PDF Downloads 178
18757 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme data in an observation can occur due to unusual circumstances during the observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution, which here is the Gumbel distribution with two parameters. With the maximum likelihood (ML) method, the exact values of the Gumbel distribution parameters are difficult to determine analytically, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method used for unconstrained nonlinear function optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of the Newton method. The Newton method uses the second derivative to calculate the parameter value changes at each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating both derivatives at each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. In this method, the gradient vector and the Hessian matrix are needed. This research is a theoretical and applied study based on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the intensity of the high rainfall that occurred in Purworejo District has decreased and that the range of rainfall has narrowed.
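As a rough illustration of the estimation procedure described above (and not the authors' implementation), the following Python sketch maximizes the Gumbel log-likelihood with a BFGS quasi-Newton optimizer from SciPy; the rainfall values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder block-maxima data (e.g., annual maximum daily rainfall, mm)
x = np.array([78.0, 95.0, 60.0, 110.0, 85.0, 102.0, 70.0, 88.0, 93.0, 120.0])

def neg_log_likelihood(params, data):
    """Negative log-likelihood of the Gumbel (maxima) distribution."""
    mu, beta = params
    if beta <= 0:
        return np.inf                      # keep the scale parameter positive
    z = (data - mu) / beta
    return np.sum(np.log(beta) + z + np.exp(-z))

# Method-of-moments starting values, then BFGS (quasi-Newton) optimization
beta0 = np.sqrt(6.0) * np.std(x) / np.pi
mu0 = np.mean(x) - 0.5772 * beta0
result = minimize(neg_log_likelihood, x0=[mu0, beta0], args=(x,), method="BFGS")

mu_hat, beta_hat = result.x
print(f"location mu = {mu_hat:.3f}, scale beta = {beta_hat:.3f}")
```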

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 308
18756 Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus

Authors: Rita Magdalena, N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Sofia Saidah, Bima Sakti

Abstract:

Vessel segmentation of the retinal fundus is important in biomedical science for diagnosing ailments related to the eye. Segmentation can help medical experts diagnose the state of a retinal fundus image. Therefore, in this study, we designed software in MATLAB that enables segmentation of the retinal blood vessels in fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve the image quality so that it can be optimally segmented. The second step is image segmentation, which performs the extraction process to retrieve the retinal blood vessels from the eye fundus image. The image segmentation methods analyzed in this study are morphology operations, the Discrete Wavelet Transform, and a combination of both. The dataset used in this project consists of 40 retinal images and 40 manually segmented images. After several testing scenarios, the average accuracy of the morphology operation method is 88.46%, while that of the Discrete Wavelet Transform is 89.28%. By combining the two methods, the average accuracy increased to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
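For readers who want a feel for the two segmentation branches, the sketch below is a simplified Python illustration (OpenCV, PyWavelets) rather than the authors' MATLAB code; the file names, kernel size, and thresholds are assumptions for demonstration only.

```python
import cv2
import numpy as np
import pywt

# Load a fundus image; the green channel usually gives the best vessel contrast
img = cv2.imread("fundus.png")                     # hypothetical file name
green = img[:, :, 1]

# --- Morphology operation branch: black-hat enhances dark vessels ---
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))   # illustrative size
enhanced = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)
_, seg_morph = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --- Discrete wavelet transform branch: detail coefficients highlight edges ---
cA, (cH, cV, cD) = pywt.dwt2(green.astype(float), "haar")
detail = np.sqrt(cH**2 + cV**2 + cD**2)
detail = cv2.resize(detail.astype(np.float32), (green.shape[1], green.shape[0]))
seg_dwt = (detail > detail.mean() + 2 * detail.std()).astype(np.uint8) * 255

# --- Combination of both branches ---
seg_combined = cv2.bitwise_or(seg_morph, seg_dwt)

# Accuracy against a manual segmentation (ground-truth mask, 0/255)
truth = cv2.imread("manual_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
accuracy = np.mean((seg_combined > 0) == (truth > 0))
print(f"pixel accuracy: {accuracy:.4f}")
```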

Keywords: discrete wavelet transform, fundus retina, morphology operation, segmentation, vessel

Procedia PDF Downloads 179
18755 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA

Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata

Abstract:

We propose a method of crater detection from images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares an FPGA and a generic computer in terms of the processing time of the crater detection method using principal component analysis.
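As a rough, non-FPGA illustration of the PCA step, the following Python sketch builds crater eigenvectors from training patches and scores a candidate patch by its projection "strength value"; the patch data are synthetic placeholders.

```python
import numpy as np

# Toy training set: flattened grayscale patches (n_patches x patch_pixels).
# In practice these would be crater examples cropped from lunar images.
rng = np.random.default_rng(0)
patches = rng.random((200, 16 * 16))

# Principal component analysis: eigenvectors of the patch covariance matrix
mean_patch = patches.mean(axis=0)
centered = patches - mean_patch
cov = centered.T @ centered / (len(patches) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :10]             # keep the 10 leading eigenvectors

def strength_value(candidate_patch):
    """Project a candidate patch onto the crater subspace; a large projection
    norm (strength value) suggests the patch resembles a crater."""
    coeffs = components.T @ (candidate_patch - mean_patch)
    return float(np.linalg.norm(coeffs))

print(strength_value(rng.random(16 * 16)))
```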

Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time

Procedia PDF Downloads 535
18754 A Q-Methodology Approach for the Evaluation of Land Administration Mergers

Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen

Abstract:

The nature of land administration accommodates diversity in terms of both spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of customers arising from the land. However, it is known that strategic decisions of restructuring are in most cases repelled in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scanty theoretical knowledge concerning the psychological methodologies that can extract the deeper perceptions of diverse spatial experts in order to explain the invisible control arm behind the polarised reception of ideas of change. This paper evaluates Q methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. Precisely, the aim of this paper is to evaluate the effectiveness of Q methodology for modelling the diverse psychological perceptions of spatial professionals who are involved in the widely contested decision of merging the cadastre and land registry components of land administration. The empirical approach prescribed by Q methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, selection of the participants, the Q-sorting exercise, factor extraction with PQMethod, and finally narrative development by the logic of abduction. The paper uses 36 statements developed from the dominant competing values theory, which stands out for its reliability and validity, purposively selects 19 participants for the Q-sorting exercise, proceeds with factor extraction from the diversity using the varimax and judgemental rotations provided by PQMethod, and effects the narrative construction using the logic of abduction. The findings from the diverse perceptions of cadastral professionals on the merger decision of the land registry and cadastre components in Sweden's mapping agency (Lantmäteriet) show that the focus is rather on perfecting the relationship between legal expertise and technical spatial expertise. There is much emphasis on tradition, loyalty, and communication attributes, which concern the organisation's internal environment, rather than on innovation and market attributes, which reveal customer behaviour and needs arising from changing humankind-land needs. It can be concluded that Q methodology offers effective tools that pursue a psychological approach to the evaluation and gradation of decisions of strategic change by extracting the local perceptions of spatial experts.

Keywords: cadastre, factor extraction, land administration merger, land registry, q-methodology, rotation

Procedia PDF Downloads 175
18753 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. Logistic regression is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of a logistic regression based on the MapReduce framework with RHadoop, which integrates R and the Hadoop environment and is applicable to large-scale data. There are three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method, and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that the Newton-Raphson method appeared to be the most robust across all data tested.
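The paper implements these algorithms in R on Hadoop via RHadoop; the following single-machine Python/NumPy sketch only illustrates the Newton-Raphson (IRLS) update that makes the method free of a learning rate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_newton_raphson(X, y, n_iter=10):
    """Newton-Raphson (IRLS) estimation of logistic-regression coefficients.
    X: (n, p) design matrix with an intercept column; y: (n,) labels in {0, 1}."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        W = p * (1.0 - p)                      # diagonal of the weight matrix
        gradient = X.T @ (y - p)
        hessian = X.T @ (X * W[:, None])       # X^T W X
        beta += np.linalg.solve(hessian, gradient)
    return beta

# Small synthetic example
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([-0.5, 1.2, -2.0])
y = (rng.random(500) < sigmoid(X @ true_beta)).astype(float)
print(logistic_newton_raphson(X, y))
```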

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 258
18752 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries

Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis

Abstract:

In the present work, a numerical method is presented for estimating the appropriate gradient magnetic fields for optimally driving particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that intends to eliminate the deviation of the nanoparticles from a desired path. Hence, the gradient magnetic field is constantly adjusted in a suitable way so that the particles follow as closely as possible a desired trajectory. Using the proposed method, it becomes clear that the particle diameter is a crucial parameter for efficient navigation; increasing the particle diameter decreases the deviation from the desired path. Moreover, the navigation method can steer nanoparticles into the desired areas with an efficiency of approximately 99%.
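As a toy illustration of the optimization loop (not the authors' CFD/DEM code), the sketch below uses the pycma package to adjust a set of gradient-field parameters so that a simplified, placeholder trajectory model stays close to a desired path.

```python
import numpy as np
import cma  # pycma package

# Desired trajectory inside the vessel (a straight line here, purely illustrative)
desired_path = np.linspace([0.0, 0.0], [1.0, 0.0], 50)

def simulate_trajectory(gradients):
    """Placeholder for the CFD/DEM particle simulation: maps a sequence of
    gradient-field settings to particle positions. Purely illustrative."""
    drift = np.cumsum(gradients.reshape(-1, 2) * 0.02, axis=0)
    return desired_path + drift

def deviation(gradients):
    """Objective: total deviation of the simulated particles from the path."""
    traj = simulate_trajectory(np.asarray(gradients))
    return float(np.sum(np.linalg.norm(traj - desired_path, axis=1)))

# CMA evolution strategy adjusts the gradient magnetic field settings iteratively
x0 = np.zeros(2 * len(desired_path))
es = cma.CMAEvolutionStrategy(x0, 0.1, {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [deviation(c) for c in candidates])
print("best deviation:", es.result.fbest)
```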

Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles

Procedia PDF Downloads 123
18751 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel

Authors: Seyed Abolhasan Naeini, M. Mortezaee

Abstract:

Various factors, such as the method of installation, the pile type, the pile material and the pile shape, can affect the final bearing capacity of a pile executed in soil; among them, the method of installation is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing loading tests on open-ended and closed-ended steel piles, each installed by two methods, "with displacement" and "without displacement", the effect of the end condition and the installation method on the final bearing capacity of the pile is investigated. The soil used in the current paper is Firoozkooh silty sand. The results of the experiments show that, in general, the "without displacement" installation method yields a larger bearing capacity for both piles, and that for a given installation method, the closed-ended pile shows a slightly higher bearing capacity.

Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method

Procedia PDF Downloads 134
18750 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses

Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim

Abstract:

The capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure to estimate seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods, the IDA method and a CSM. The two sets of seismic fragility curves are similar in the slight and moderate damage states, whereas the fragility curve obtained from the IDA method presents less variation (or uncertainty) in the extensive and complete damage states. This is due to the fact that the IDA method can capture the structural response beyond yielding better than the CSM and can directly account for higher mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive or complete damage states.
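A hedged sketch of the IDA-based fragility estimation is given below: the capacity intensities are placeholder values, and a lognormal CDF is fitted to the empirical exceedance fractions with SciPy.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Placeholder IDA results: spectral accelerations (g) at which each ground
# motion record first drives the frame into the "extensive" damage state.
im_capacity = np.array([0.42, 0.55, 0.61, 0.48, 0.70, 0.52, 0.66, 0.58,
                        0.45, 0.75, 0.63, 0.50, 0.68, 0.57, 0.47, 0.72])

# Empirical exceedance probability at a grid of intensity levels
im_levels = np.linspace(0.2, 1.0, 30)
p_exceed = np.array([(im_capacity <= im).mean() for im in im_levels])

def lognormal_fragility(im, theta, beta):
    """Lognormal fragility: P(damage | IM) with median theta and dispersion beta."""
    return norm.cdf(np.log(im / theta) / beta)

(theta_hat, beta_hat), _ = curve_fit(lognormal_fragility, im_levels, p_exceed,
                                     p0=[0.6, 0.3], bounds=([0.01, 0.01], [5.0, 2.0]))
print(f"median = {theta_hat:.3f} g, dispersion = {beta_hat:.3f}")
```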

Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame

Procedia PDF Downloads 408
18749 Assessing Acute Toxicity and Endocrine Disruption Potential of Selected Packages Internal Layers Extracts

Authors: N. Szczepanska, B. Kudlak, G. Yotova, S. Tsakovski, J. Namiesnik

Abstract:

In the scientific literature related to the widely understood issue of packaging materials designed to be in contact with food (food contact materials), there is much information on the raw materials used for their production, as well as their physicochemical properties, types, and parameters. However, not much attention is given to the migration of toxic substances from packaging and its actual influence on the health of the final consumer, even though health protection and food safety are priority tasks. The goal of this study was to estimate the impact of the type of foodstuff packaging and of food production and storage conditions on the degree of leaching of potentially toxic compounds and endocrine disruptors into foodstuffs, using the Microtox acute toxicity test and the XenoScreen YES/YAS assay. The selected foodstuff packaging materials were metal cans used for fish storage and tetrapak. Five simulants corresponding to specific kinds of food were chosen in order to assess global migration: distilled water for aqueous foods with a pH above 4.5; acetic acid at 3% in distilled water for acidic aqueous foods with a pH below 4.5; ethanol at 5% for any food that may contain alcohol; and dimethyl sulfoxide (DMSO) and artificial saliva, which were included as additional simulation media. For each packaging material, a factorial design over the independent variables temperature and contact time was performed for each simulant. Xenobiotic migration from the epoxy resins was studied at three different temperatures (25°C, 65°C, and 121°C) and extraction times of 12 h, 48 h, and 2 weeks. Such an experimental design leads to 9 experiments for each food simulant, as the conditions for each experiment are obtained by combining the temperature and contact time levels. Each experiment was run in triplicate for acute toxicity and in duplicate for determination of the estrogen disruption potential. Multi-factor analysis of variance (MANOVA) was used to evaluate the effects of the three main factors, solvent, temperature (temperature regime for the cup), and contact time, and their interactions on the respective dependent variable (acute toxicity or estrogen disruption potential). Of all the simulants studied, the most toxic were the acetic acid extracts of the can and tetrapak linings, which is an indication of significant migration of toxic compounds. This migration increased with increasing contact time and temperature and supported the hypothesis that food products with low pH values cause significant damage to the internal resin lining. Can lining extracts of all simulation media, excluding distilled water and artificial saliva, proved to contain androgen agonists even at 25°C and an extraction time of 12 h. For tetrapak extracts, significant endocrine potential was detected for acetic acid, DMSO, and saliva.

Keywords: food packaging, extraction, migration, toxicity, biotest

Procedia PDF Downloads 165
18748 Approximations of Fractional Derivatives and Its Applications in Solving Non-Linear Fractional Variational Problems

Authors: Harendra Singh, Rajesh Pandey

Abstract:

The paper presents a numerical method based on an operational matrix of integration and the Rayleigh-Ritz method for the solution of a class of non-linear fractional variational problems (NLFVPs). Chebyshev polynomials of the first kind are used for the construction of the operational matrix. Using the operational matrix and the Rayleigh-Ritz method, the NLFVP is converted into a system of non-linear algebraic equations, and by solving these equations we obtain an approximate solution of the NLFVP. A convergence analysis of the proposed method is provided. A numerical experiment is performed to show the applicability of the proposed method. The obtained numerical results are compared with the exact solution and with the solution obtained using Chebyshev polynomials of the third kind. Further, the results are shown graphically for the different fractional orders involved in the problems.

Keywords: non-linear fractional variational problems, Rayleigh-Ritz method, convergence analysis, error analysis

Procedia PDF Downloads 279
18747 Microfluidized Fiber Based Oleogels for Encapsulation of Lycopene

Authors: Behic Mert

Abstract:

This study reports a facile approach to structuring soft solids from microfluidized, lycopene-rich plant material and oil. First, carotenoid-rich plant material (pumpkin in this study) was processed with a high-pressure microfluidizer to release the lycopene molecules; then an emulsion was formed by mixing the processed plant material and oil. While in the emulsion state the lipid-soluble carotenoid molecules were allowed to dissolve in the oil phase, the fiber of the plant material provided the network required for emulsion stabilization. Additional hydrocolloids (gelatin, xanthan, and pectin) at up to 0.5% were also used to reinforce the emulsion stability, and their impact on the final product properties was evaluated via rheological, textural and oxidation studies. Finally, water was removed from the emulsion phase by drying in a tray dryer at 40°C for 36 hours, and subsequent shearing resulted in soft solid (oleogel) structures. The microstructure of these systems was revealed by cryo-scanning electron microscopy. The effects of the hydrocolloids on the total lycopene and surface lycopene contents were also evaluated. The surface lycopene was lowest in the gelatin-containing oleogels and highest in the pectin-containing oleogels. This study outlines a novel emulsion-based structuring method that can be used to encapsulate lycopene without the need for its separate extraction.

Keywords: lycopene, encapsulation, fiber, oleo gel

Procedia PDF Downloads 248
18746 An Approximation Method for Exact Boundary Controllability of Euler-Bernoulli

Authors: A. Khernane, N. Khelil, L. Djerou

Abstract:

The aim of this work is to study the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the Euler-Bernoulli beam equation. This can be difficult, depending on the problem under consideration (geometry, control, and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation. This idea is developed in this study. As a first step, we characterized the solution by a minimization principle, and we then proposed a method for its resolution in order to approximate the control steering the considered system to rest at time T.

Keywords: boundary control, exact controllability, finite difference methods, functional optimization

Procedia PDF Downloads 327
18745 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method

Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage

Abstract:

Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a way of online ECM parameter identification using a continuous time (CT) estimation method. The CT estimation method has several advantages over discrete time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least square method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
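For context, the conventional discrete-time recursive least squares (RLS) baseline mentioned above can be sketched as follows in Python; the ECM is written in a first-order ARX form and the current/voltage arrays are illustrative placeholders, not the paper's test data or its continuous-time linear-integral-filter estimator.

```python
import numpy as np

def recursive_least_squares(phi_rows, y, lam=0.999):
    """Discrete-time RLS: y[k] = phi[k]^T theta + e[k], with forgetting factor lam."""
    n = phi_rows.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e3
    for phi, yk in zip(phi_rows, y):
        K = P @ phi / (lam + phi @ P @ phi)          # gain vector
        theta = theta + K * (yk - phi @ theta)       # parameter update
        P = (P - np.outer(K, phi @ P)) / lam         # covariance update
    return theta

# Placeholder current/voltage data from a cell test (first-order ARX ECM form:
# v[k] = a*v[k-1] + b0*i[k] + b1*i[k-1])
i_meas = np.sin(np.linspace(0, 20, 500))                       # current (A), illustrative
v_meas = np.convolve(i_meas, [0.05, 0.03], "same") + 3.7       # voltage (V), illustrative
phi = np.column_stack([v_meas[:-1], i_meas[1:], i_meas[:-1]])
theta_hat = recursive_least_squares(phi, v_meas[1:])
print("estimated ARX parameters [a, b0, b1]:", theta_hat)
```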

Keywords: electric circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square

Procedia PDF Downloads 362
18744 Hiveopolis - Honey Harvester System

Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios

Abstract:

Traditional means of harvesting honey are often stressful for honeybees. Each time honey is collected, a portion of the colony can die. In consequence, the colony's resilience to environmental stressors decreases, and this ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step towards more sustainable honey harvesting practices is to design honey storage frames that can automate the honey collection procedures. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. This system shows promising features, e.g., high reliability, which could be a key advantage compared to current honey harvesting technologies. Our original concept of fractional honey harvesting has been to encourage the removal of honey only from 'safe' locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology, and areas for improvement. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality. The honey sticks to the foundation because of the surface tension forces amplified by the geometry. In the future, by monitoring the weight and therefore the capped honey cells on our honey harvester frames, we will be able to remove honey as soon as the weight measuring system reports that the comb is ready for harvesting. Higher-viscosity or crystallized honey causes challenges in temperate locations when a smooth flow of honey is required. We use resistive heaters to soften the propolis and wax and to unglue the moving parts during extraction. These heaters can also melt the honey slightly to reach the needed flow state. Precise control of these heaters allows us to operate the device for several purposes. We use 'Nitinol' springs that are activated by heat as the actuation method. Unlike conventional stepper or servo motors, which we also evaluated throughout development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive. We not only observed bees walking on the artificial comb but also building wax, filling gaps with propolis, and storing honey. This also shows that bees do not mind living in spaces and hives built from 3D-printed materials. We do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a useful extraction flow and overall effective operation.

Keywords: honey harvesting, honeybee, hiveopolis, nitinol

Procedia PDF Downloads 93
18743 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of about five years; early diagnosis, detection, and prediction expand the treatment options beyond the risk of invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement, which gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are computed for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) is used to identify the cancer stage. The Discrete Wavelet Transform (DWT) is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
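A minimal Python sketch of the first classification level (DWT features fed to a KNN classifier) is shown below; the image arrays and labels are random placeholders standing in for preprocessed CT slices.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(image):
    """Single-level 2D Haar DWT; use sub-band magnitudes as a compact feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    return np.array([np.mean(np.abs(band)) for band in (cA, cH, cV, cD)])

# Placeholder dataset: preprocessed lung CT slices (64x64) with labels
# 0 = normal, 1 = abnormal (in practice, segmented ROIs after filtering/HE)
rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))
labels = rng.integers(0, 2, size=100)

X = np.array([dwt_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X[:80], labels[:80])                 # train on the first 80 samples
print("level-1 accuracy:", knn.score(X[80:], labels[80:]))
```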

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 130
18742 Numerical Investigation of Embankment Settlement Improved by Method of Preloading by Vertical Drains

Authors: Seyed Abolhasan Naeini, Saeideh Mohammadi

Abstract:

Time-dependent settlement due to loading on soft saturated soils produces many problems, such as high consolidation settlements and low consolidation rates. Also, long-term consolidation settlement of the soft soil underlying an embankment leads to unpredicted settlements and cracks on the soil surface. Preloading is an effective improvement method to solve this problem, and using vertical drains with preloading is an effective way of improving soft soils. Applying the deep soil mixing method to soft soils is another effective improvement method. There are few studies on using the two methods of preloading and deep soil mixing simultaneously. In this paper, the concurrent effect of preloading with vertical drains and deep soil mixing is investigated through a finite element code, Plaxis2D. The influence of parameters such as the deep soil mixing column spacing and the existence of vertical drains and the distance between them on the settlement and the stability factor of safety of an embankment founded on soft soil is investigated in this research.

Keywords: preloading, soft soil, vertical drains, deep soil mixing, consolidation settlement

Procedia PDF Downloads 199
18741 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data technology into the tourism industry will allow companies to learn where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. From the technical perspective, natural language is processed by analysing the sentiment features of online reviews from tourists, and we then supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. We constructed a web review database using a crawler and web scraping techniques for experimental validation to evaluate the effectiveness of our methodology. The sentences were first classified with the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we studied feature extraction methods such as Count Vectorization and TF-IDF Vectorization and implemented a Convolutional Neural Network (CNN) classifier for sentiment analysis, deciding whether a tourist's attitude towards a destination is positive, negative, or simply neutral based on the review text posted online. The results demonstrated that the CNN algorithm, after pre-processing and cleaning the dataset, achieved an accuracy of 96.12% for the positive and negative sentiment analysis.
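The snippet below is a minimal Python sketch of the TF-IDF feature-extraction step with a simple linear classifier standing in for the CNN used in the paper; the reviews and labels are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder scraped reviews with polarity labels (in the paper, polarity comes
# from VADER/RoBERTa and the classifier is a CNN)
reviews = [
    "Beautiful place, friendly staff and great food",
    "Overcrowded and overpriced, would not visit again",
    "The visitor centre was clean and well organised",
    "Terrible experience, long queues and rude service",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression(max_iter=1000))
model.fit(reviews, labels)
print(model.predict(["lovely hotel but the queues were long"]))
```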

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 72
18740 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method is used to estimate the viscosity of crude oil. We use this method to measure the crude oil's viscosity in three states: the saturated oil viscosity, the viscosity above the bubble point, and the viscosity below the saturation pressure. The crude oil viscosity is then estimated using the KHAN model and the roller ball method. After that, using these data, which include the conditions affecting the viscosity measurement and the viscosity estimated by the presented method, a radial basis function (RBF) neural network is trained. This network is a kind of two-layered artificial neural network whose hidden-layer activation function is a Gaussian function, and training algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence approach are compared. Having trained this network, we are able to estimate the crude oil's viscosity without using the KHAN model or the experimental conditions, and under any other condition, with acceptable accuracy. The results show that the radial basis neural network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
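A minimal NumPy sketch of such a two-layer RBF network (Gaussian hidden units, output weights solved by least squares) is given below; the input features and viscosity targets are synthetic placeholders, not the Iranian field data.

```python
import numpy as np

class RBFNetwork:
    """Two-layer RBF network: Gaussian hidden layer, linear output layer."""
    def __init__(self, centers, width):
        self.centers = centers            # (m, d) Gaussian centres
        self.width = width                # common Gaussian width (sigma)
        self.weights = None

    def _hidden(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        H = np.column_stack([self._hidden(X), np.ones(len(X))])   # add bias column
        self.weights, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        H = np.column_stack([self._hidden(X), np.ones(len(X))])
        return H @ self.weights

# Placeholder inputs (e.g., temperature, pressure, API gravity) -> viscosity (cP)
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = 5.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.1, 200)
centers = X[rng.choice(200, 20, replace=False)]        # centres picked from the data
model = RBFNetwork(centers, width=0.3).fit(X[:150], y[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```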

Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model

Procedia PDF Downloads 482
18739 A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea

Authors: L. I. Izhar, T. Stathaki, K. Howell

Abstract:

The analysis and interpretation of thermograms have been increasingly employed in the diagnosis and monitoring of diseases thanks to their non-invasive, non-harmful nature and low cost. In this paper, a novel system is proposed to improve the diagnosis and monitoring of the skin disorder morphoea, based on integration with the published lines of Blaschko. In the proposed system, image registration based on global and local registration methods is found to be essential. For the global registration approach, this paper presents a modified normalized gradient cross-correlation (NGC) method to reduce large geometrical differences between two multimodal images that are represented by smooth gray edge maps. This method is improved further by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration part of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the misregistration due to translation suffered by the modified NGC method alone for thermograms with an ill-defined jawline. This also demonstrates that by using the gradients of the gray edge maps and a hybrid technique, the performance of the PC-based image registration method can be greatly improved.
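For reference, the phase correlation (PC) baseline mentioned above can be sketched in a few lines of Python; this is a generic FFT-based translation estimator, not the authors' hybrid NGC/NCC method.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the (row, col) translation of img_a relative to img_b for two
    same-size grayscale images via the normalized cross-power spectrum."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    shifts = np.array(peak, dtype=float)
    dims = np.array(img_a.shape)
    shifts[shifts > dims / 2] -= dims[shifts > dims / 2]   # wrap to signed shifts
    return shifts

# Quick check with a synthetic (10, -7) pixel shift
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(10, -7), axis=(0, 1))
print(phase_correlation_shift(b, a))   # expected approximately [10, -7]
```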

Keywords: Blaschko’s lines, image registration, morphoea, thermal imaging

Procedia PDF Downloads 295
18738 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings

Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada

Abstract:

Seismic design methods for buildings are classified into two types: static design and dynamic design. Static design is a design method that applies a static force as the seismic force; it is a relatively simple method created based on the experience of seismic motion over the past 100 years. At present, static design is used for most Japanese buildings. Dynamic design mainly refers to time history response analysis. It is a comparatively difficult design method in which the assumed earthquake motion is input into the building model and the response is examined. Currently, it is only used for skyscrapers and specific buildings. Under the present design standard in Japan, either the static or the dynamic design method may be used for medium- and high-rise buildings. However, when middle- and high-rise buildings are actually designed by the two design methods, the relatively simple static design method satisfies the criteria, whereas with the somewhat more difficult dynamic design method the criteria are often not satisfied. This is because the dynamic design method was built with the intention of designing super high-rise buildings. In short, higher safety is required compared with general buildings, and the criteria become stricter. The authors consider applying the dynamic design method to general buildings that have so far been designed by the static design method. The reason is that application of the dynamic design method is reasonable for buildings that are outside the conventional standard structural forms, such as buildings emphasizing design. For this purpose, it is important to compare the design results when the criteria of both design methods are set side by side. In this study, we performed time history response analysis on medium-rise buildings that were actually designed with the allowable stress method. A quantitative comparison between the static design and the dynamic design was conducted, and the characteristics of both design methods were examined.

Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code

Procedia PDF Downloads 145
18737 Analyzing Sociocultural Factors Shaping Architects’ Construction Material Choices: The Case of Jordan

Authors: Maiss Razem

Abstract:

The construction sector is considered a major consumer of materials, which undergo processes of extraction, processing, transportation, and maintenance when used in buildings. Several metrics have been devised to capture the environmental impact of the materials consumed during construction using lifecycle thinking. Rarely has the materiality of this sector been explored qualitatively and systemically. This paper aims to explore the socio-cultural forces that drive the use of certain materials in the Jordanian construction industry, using practice theory as a heuristic method of analysis, more specifically Shove et al.'s three-element model. By conducting semi-structured interviews with architects, the results unravel contextually embedded routines in determining the qualities of the three materialities highlighted herein: stone, glass, and spatial openness. The study highlights the inadequacy of using efficiency alone as a quantitative metric of sustainable materials and argues for the need to link material consumption with socio-economic, cultural, and aesthetic driving forces. The operationalization of practice theory by tracing materials' lifetimes as they integrate with competencies and meanings captures dynamic engagements through the analyzed routines of actors in construction practice. This study can offer policymakers a more nuanced representation for greening this sector beyond efficiency rhetoric and quantitative metrics.

Keywords: architects' practices, construction materials, Jordan, practice theory

Procedia PDF Downloads 153
18736 Multi-source Question Answering Framework Using Transformers for Attribute Extraction

Authors: Prashanth Pillai, Purnaprajna Mangsuli

Abstract:

Oil exploration and production companies invest considerable time and effort to extract essential well attributes (such as well status, surface and target coordinates, wellbore depths, event timelines, etc.) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider the context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture was used to rank relevant pages in a document source utilizing the page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer was used to extract attribute-value pairs, incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context when dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The extracted attribute information from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and the related performance were studied on several real-life well technical reports.
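A minimal sketch of the extractive question-answering step is shown below using the Hugging Face transformers pipeline; note that the paper's system relies on a LayoutLM-based model that also consumes page-layout features, which this text-only sketch omits, and the report snippet and queries are invented examples.

```python
from transformers import pipeline

# Generic extractive QA pipeline (the paper's system uses a LayoutLM-based model
# that also consumes page-layout features; this sketch uses text only)
qa = pipeline("question-answering")

# Illustrative snippet standing in for a page of a well technical report
context = (
    "Well ALPHA-1 was spudded on 12 March 2015. The well reached a total "
    "measured depth of 3,450 m and was suspended as a future producer. "
    "Surface coordinates: 58.123 N, 1.456 E."
)

# Attribute-specific queries; a dynamic query-generation module would adapt
# these when a report covers several wells/wellbores
queries = {
    "total_depth": "What is the total measured depth of the well?",
    "spud_date": "When was the well spudded?",
    "status": "What is the status of the well?",
}

for attribute, question in queries.items():
    result = qa(question=question, context=context)
    print(f"{attribute}: {result['answer']} (score={result['score']:.2f})")
```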

Keywords: natural language processing, deep learning, transformers, information retrieval

Procedia PDF Downloads 180
18735 Second Order Analysis of Frames Using Modified Newmark Method

Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi

Abstract:

The main purpose of this paper is to present the Modified Newmark Method as a method of non-linear frame analysis that considers the effect of the axial load (second-order analysis). The discussion is restricted to plane frameworks with a constant cross-section for each element. In addition, it is assumed that the frames are prevented from out-of-plane deflection. This part of the investigation is performed to generalize the established method to assemblage structures such as frameworks. As explained, the governing differential equations are non-linear and cannot be formulated easily due to the unknown axial loads of the struts in the frame. Under the assumption of constant axial load, the governing equations are reduced to linear ones in most methods. Since the modeling and solution of the non-linear form of the governing equations are cumbersome, the linear form of the equations is used in the established method. However, because the method is able to reconsider minor parameters omitted from the modeling during the solution procedure, the axial load in the elements at each stage of the iteration can be computed and applied in the next stage. Therefore, the ability of the method to provide an accurate approach to the solution of non-linear equations is demonstrated again in this paper.

Keywords: nonlinear, stability, buckling, modified newmark method

Procedia PDF Downloads 401
18734 Reliability-Based Method for Assessing Liquefaction Potential of Soils

Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty

Abstract:

This paper explores a probabilistic method for assessing the liquefaction potential of sandy soils. The current simplified methods for assessing soil liquefaction potential use a deterministic safety factor in order to determine whether liquefaction will occur or not. However, these methods are unable to determine the liquefaction probability associated with a safety factor. A solution to this problem can be found by reliability analysis. This paper presents a reliability analysis method based on a popular deterministic liquefaction analysis method. The proposed probabilistic method is formulated based on the results of reliability analyses of 190 field records and observations of soil performance against liquefaction. The results of the present study show that a safety factor greater or smaller than 1 does not necessarily mean safety against, or occurrence of, liquefaction, and that to assess the liquefaction probability, a reliability-based analysis should be used. This reliability method uses the empirical acceleration attenuation law in the Chalos area to derive the probability density distribution function and the statistics of the earthquake-induced cyclic shear stress ratio (CSR). The CSR and CRR statistics are then used with the first-order second-moment method to calculate the relation between the liquefaction probability, the safety factor, and the reliability index. Based on the proposed method, the liquefaction probability associated with a safety factor can be easily calculated. The influence of some of the soil parameters on the liquefaction probability can be quantitatively evaluated.
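A simplified sketch of the reliability computation is given below, assuming independent lognormal CSR and CRR (an assumption made here for illustration); the statistics are placeholders rather than values from the 190 field records.

```python
import numpy as np
from scipy.stats import norm

def liquefaction_probability(mean_crr, cov_crr, mean_csr, cov_csr):
    """First-order second-moment estimate of the liquefaction probability,
    assuming CRR and CSR are independent and lognormally distributed."""
    # Lognormal parameters from the mean and coefficient of variation
    zeta_crr = np.sqrt(np.log(1.0 + cov_crr ** 2))
    zeta_csr = np.sqrt(np.log(1.0 + cov_csr ** 2))
    lam_crr = np.log(mean_crr) - 0.5 * zeta_crr ** 2
    lam_csr = np.log(mean_csr) - 0.5 * zeta_csr ** 2
    # Limit state g = ln(CRR) - ln(CSR); reliability index and probability
    beta = (lam_crr - lam_csr) / np.sqrt(zeta_crr ** 2 + zeta_csr ** 2)
    return beta, norm.cdf(-beta)

# Placeholder statistics for one soil layer
beta, p_liq = liquefaction_probability(mean_crr=0.25, cov_crr=0.30,
                                       mean_csr=0.20, cov_csr=0.25)
print(f"safety factor (means) = {0.25 / 0.20:.2f}, "
      f"reliability index = {beta:.2f}, P(liquefaction) = {p_liq:.2f}")
```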

Keywords: liquefaction, reliability analysis, chalos area, civil and structural engineering

Procedia PDF Downloads 452
18733 Parallelizing the Hybrid Pseudo-Spectral Time Domain/Finite Difference Time Domain Algorithms for the Large-Scale Electromagnetic Simulations Using Message Passing Interface Library

Authors: Donggun Lee, Q-Han Park

Abstract:

Due to its coarse grid, the Pseudo-Spectral Time Domain (PSTD) method has advantages over the Finite Difference Time Domain (FDTD) method in terms of memory requirement and operation time. However, since its parallelization efficiency is much lower than that of FDTD, PSTD is not a useful method for large-scale electromagnetic simulations on a parallel platform. In this paper, we propose a parallelization technique for the hybrid PSTD-FDTD (HPF) method, which simultaneously possesses the efficient parallelizability of FDTD and the high speed and low memory requirement of PSTD. The parallelization cost of the HPF method is exactly the same as that of the parallel FDTD, yet it occupies much less memory space and has a faster operation speed than the parallel FDTD. Experiments on distributed memory systems have shown that the parallel HPF method saves up to 96% of the operation time and reduces the memory requirement by 84%. Also, by combining the OpenMP library with the MPI library, we further reduced the operation time of the parallel HPF method by 50%.
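As a toy illustration of the MPI-based domain decomposition idea (not the authors' 3D HPF code), the following mpi4py sketch runs a 1D FDTD update with halo exchange between neighbouring ranks.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a chunk of the 1D grid (domain decomposition)
n_local = 200
ez = np.zeros(n_local)          # electric field chunk
hy = np.zeros(n_local)          # magnetic field chunk
courant = 0.5
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(500):
    # Halo exchange: the H update at the right edge needs the neighbour's first Ez
    ez_from_right = comm.sendrecv(ez[0], dest=left, source=right)
    hy[:-1] += courant * (ez[1:] - ez[:-1])
    if ez_from_right is not None:
        hy[-1] += courant * (ez_from_right - ez[-1])

    # Halo exchange: the E update at the left edge needs the neighbour's last Hy
    hy_from_left = comm.sendrecv(hy[-1], dest=right, source=left)
    ez[1:] += courant * (hy[1:] - hy[:-1])
    if hy_from_left is not None:
        ez[0] += courant * (hy[0] - hy_from_left)

    if rank == 0:
        ez[n_local // 4] += np.exp(-((step - 30) / 10.0) ** 2)   # Gaussian source

print(f"rank {rank}: max |Ez| = {np.abs(ez).max():.3f}")
```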

Keywords: FDTD, hybrid, MPI, OpenMP, PSTD, parallelization

Procedia PDF Downloads 124
18732 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method

Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent

Abstract:

A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
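One common way to generate such a surface is spectral synthesis, i.e., power-law filtering of white noise; the Python sketch below follows that route with an illustrative Hurst exponent and grid size, and may differ in detail from the generation and update algorithm used in the paper.

```python
import numpy as np

def fbm_surface(n=256, hurst=0.7, seed=0):
    """Generate an n x n fractional-Brownian-motion-like surface by filtering
    white noise with a power-law amplitude spectrum |k|^-(hurst + 1)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2)
    k[0, 0] = 1.0                                  # avoid division by zero at DC
    amplitude = k ** (-(hurst + 1.0))
    amplitude[0, 0] = 0.0                          # remove the mean component
    spectrum = np.fft.fft2(noise) * amplitude
    surface = np.fft.ifft2(spectrum).real
    return surface / surface.std()                 # normalise the roughness scale

# A riverbed-like topography that can be fed to the LBM solver as bed elevation,
# regenerated with new parameters to mimic erosion/deposition over time
bed = 0.05 * fbm_surface(n=256, hurst=0.75)
print(bed.shape, float(bed.min()), float(bed.max()))
```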

Keywords: bed topography, FBM, LBM, shallow water, simulations

Procedia PDF Downloads 77
18731 Kernel Parallelization Equation for Identifying Structures under Unknown and Periodic Loads

Authors: Seyed Sadegh Naseralavi

Abstract:

This paper presents a Kernel parallelization equation for damage identification in structures under unknown periodic excitations. Herein, the dynamic differential equation of the motion of structure is viewed as a mapping from displacements to external forces. Utilizing this viewpoint, a new method for damage detection in structures under periodic loads is presented. The developed method requires only two periods of load. The method detects the damages without finding the input loads. The method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Considering the concept, kernel parallelization equation (KPE) is derived for damage detection under unknown periodic loads. The method is verified for a case study under periodic loads.

Keywords: Kernel, unknown periodic load, damage detection, Kernel parallelization equation

Procedia PDF Downloads 265
18730 MATLAB Supported Learning and Students' Conceptual Understanding of Functions of Two Variables: Experiences from Wolkite University

Authors: Eyasu Gemech, Kassa Michael, Mulugeta Atnafu

Abstract:

A non-equivalent groups quasi-experimental study was conducted at Wolkite University to investigate MATLAB-supported learning and students' conceptual understanding in learning Applied Mathematics II using four different comparative instructional approaches: a MATLAB-supported traditional lecture method, a MATLAB-supported collaborative method, a collaborative method only, and a traditional lecture method only. Four intact classes of mechanical engineering groups 1 and 2, garment engineering, and textile engineering students were randomly selected out of eight departments. The first three departments were considered treatment groups, and the fourth, textile engineering, was assigned as the comparison group. The departments had 30, 29, 35 and 32 students, respectively. The results of the study show that there is a significant mean difference in students' conceptual understanding between groups of students learning through the MATLAB-supported collaborative method and the other learning approaches. Students who learned through MATLAB-supported learning in combination with the collaborative method were found to understand concepts of functions of two variables better than students learning through the other methods. These results are thus informative of the approaches universities could follow for a better understanding of concepts by students.

Keywords: MATLAB supported collaborative method, MATLAB supported learning, collaborative method, conceptual understanding, functions of two variables

Procedia PDF Downloads 256