Search results for: direct and inverse technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10005


8655 Product Features Extraction from Opinions According to Time

Authors: Kamal Amarouche, Houda Benbrahim, Ismail Kassou

Abstract:

Nowadays, e-commerce shopping websites have experienced noticeable growth and have gained consumers’ trust. After purchasing a product, many consumers share comments in which opinions about the product are usually embedded. Research on the automatic management of opinions, which gives suggestions to potential consumers and portrays an image of the product to manufacturers, has been growing recently. Just after a product is launched in the market, the reviews generated around it do not usually contain helpful information, only generic opinions about the product (e.g., telephone: great phone...), since the product is still in its launching phase. Over time, the product matures, consumers perceive the advantages and disadvantages of each specific product feature, and they generate comments that contain their sentiments about these features. In this paper, we present an unsupervised method to extract the different product features hidden in the opinions that influence a purchase; it combines Time Weighting (TW), which depends on the time at which opinions were expressed, with Term Frequency-Inverse Document Frequency (TF-IDF). We conduct several experiments using two different datasets, about cell phones and hotels. The results show the effectiveness of our automatic feature extraction, as well as its domain-independent character.
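The combination of time weighting with TF-IDF can be sketched as follows; the exponential-decay form of the time weight and the parameter names are illustrative assumptions, not the authors' exact formulation:

```python
import math

def tf_idf(term, doc, docs):
    # Term frequency in this review, scaled by inverse document
    # frequency across the whole review corpus.
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df)
    return tf * idf

def time_weight(age_days, decay=0.01):
    # Hypothetical exponential decay: recent reviews weigh more.
    return math.exp(-decay * age_days)

def weighted_score(term, doc, docs, age_days):
    # TW x TF-IDF: candidate features are ranked by this combined score.
    return time_weight(age_days) * tf_idf(term, doc, docs)
```

With such a score, the same feature term mentioned in a recent review outranks an identical mention in an old one.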

Keywords: opinion mining, product feature extraction, sentiment analysis, SentiWordNet

Procedia PDF Downloads 411
8654 Automatic LV Segmentation with K-means Clustering and Graph Searching on Cardiac MRI

Authors: Hae-Yeoun Lee

Abstract:

Quantification of cardiac function in routine clinical practice is performed by calculating blood volume and ejection fraction. However, this has typically been done by manual contouring, which is time-consuming and varies with the observer. In this paper, an automatic left ventricle segmentation algorithm for cardiac magnetic resonance images (MRI) is presented. Using knowledge of cardiac MRI, a K-means clustering technique is applied to segment the blood region on a coil-sensitivity-corrected image. Then, a graph searching technique is used to correct segmentation errors caused by coil distortion and noise. Finally, blood volume and ejection fraction are calculated. Using cardiac MRI from 15 subjects, the presented algorithm is tested and compared with manual contouring by experts, showing outstanding performance.
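As a minimal illustration of the clustering step, here is a pure-Python 1-D K-means on pixel intensities; the actual algorithm works on coil-sensitivity-corrected images, and the deterministic initialization below is an assumption for the sketch:

```python
def kmeans_1d(values, k=2, iters=20):
    # Deterministic initialization: spread centroids between min and max.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each intensity to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids as cluster means.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```

On short-axis slices the brighter cluster would correspond to the blood pool, which the graph search then refines.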

Keywords: cardiac MRI, graph searching, left ventricle segmentation, K-means clustering

Procedia PDF Downloads 399
8653 Vehicle Routing Problem Considering Alternative Roads under Triple Bottom Line Accounting

Authors: Onur Kaya, Ilknur Tukenmez

Abstract:

In this study, we consider vehicle routing problems on networks with alternative direct links between nodes, and we analyze a multi-objective problem considering financial, environmental and social objectives in this context. In real life, there may exist several alternative direct roads between two nodes, and these roads may differ in length and duration. For example, a road might be shorter than another but require more time due to traffic and speed limits. Similarly, some toll roads might be shorter or faster but require additional payment, leading to higher costs. We consider such alternative links in our problem and develop a mixed integer linear programming model that determines which alternative link to use between two nodes, in addition to determining the optimal routes for the different vehicles, depending on the model objectives and constraints. We consider minimum-cost routing as the financial objective for the company, minimizing CO2 emissions and gas usage as the environmental objectives, and optimizing drivers' working conditions/working hours and minimizing the risk of accidents as the social objectives. With these objective functions, we aim to determine which routes and which alternative links should be used, in addition to the speed choices on each link. We discuss the results of the developed vehicle routing models and compare their results depending on the system parameters.
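The trade-off between alternative links can be illustrated by scoring each link against weighted objectives. The attribute names and weights below are hypothetical, and the actual model is a mixed integer program over whole routes, not a per-link greedy scan:

```python
def link_cost(link, w_cost=1.0, w_co2=1.0, w_risk=1.0):
    # 'link' is a dict of per-link attributes (assumed names); the
    # weights trade off financial, environmental and social objectives.
    return (w_cost * (link["toll"] + link["fuel"])
            + w_co2 * link["co2"] + w_risk * link["risk"])

def best_alternative(links, **weights):
    # Pick the alternative direct road with the lowest weighted cost.
    return min(links, key=lambda l: link_cost(l, **weights))
```

Changing the weights flips which alternative wins, which is exactly the behaviour the multi-objective comparison in the study explores.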

Keywords: vehicle routing, alternative links between nodes, mixed integer linear programming, triple bottom line accounting

Procedia PDF Downloads 407
8652 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing ones. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution (PSD) function and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even with a regularization method the task remains difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important, as a selection criterion for an appropriate regularization method, to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of +/- 0.03 is achieved in all modes. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
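The truncated SVD regularization mentioned above can be written compactly. For a discretized ill-posed system \(A x = b\) with singular system \((\sigma_i, u_i, v_i)\), the TSVD solution keeps only the first \(k\) singular values, and the truncation index \(k\) acts as the regularization parameter (this is the standard textbook form, not the authors' hybrid triple-parameter variant):

```latex
x_k = \sum_{i=1}^{k} \frac{u_i^{T} b}{\sigma_i}\, v_i , \qquad k \le \min(m,n)
```

Discarding the terms with the smallest \(\sigma_i\) is what prevents the measurement errors in \(b\) from being hugely amplified.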

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 343
8651 Mass Media Products Consumption Patterns in Rural South-South, Nigeria Communities

Authors: Inemesit Akpan Umoren, Aniekan James Akpan

Abstract:

Media practitioners and information managers have often erroneously operated on the premise that media messages are received as disseminated, to the extent that audiences of whatever background assimilate the content uniformly. This does not hold, since media audiences are segmented in terms of educational level, social category, place of residence and gender, among other factors. While those who are highly educated, live in urban areas and are of higher standing are more likely to have direct access to the media, those in rural areas and of lower education and standing may not have direct or easy access. This, therefore, informed the study to establish the consumption patterns of mass media products by residents of rural communities in south-south Nigeria. The study, which was anchored on the multi-step flow and social categories theories, adopted a survey research design and a sample of 383, drawn using Mayer’s 1979 guide from nine rural communities in the south-south Nigeria states of Akwa Ibom, Rivers and Edo. Findings showed, among others, that while a negligible percentage is highly exposed to media messages of all types, a greater number depend on opinion leaders, social groups, drinking joints and similar venues for filtered content. It was concluded that rural and community media organizations are vital in ensuring media content gets to all audiences without necessarily passing through intermediaries. Among the recommendations was that information managers and media organizations should always keep rural dwellers in mind while packaging their content, even in the mainstream media.

Keywords: consumption, media, media product, pattern

Procedia PDF Downloads 144
8650 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the different properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when considering all the various parameters. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required for better understanding the flow behavior and predicting the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including the shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as a prepared emulsion with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C, and the additive best suited for transportation of crude oil was determined. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to developing pipeline designs in the future.

Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity

Procedia PDF Downloads 455
8649 Solar Power Forecasting for the Bidding Zones of the Italian Electricity Market with an Analog Ensemble Approach

Authors: Elena Collino, Dario A. Ronzio, Goffredo Decimi, Maurizio Riva

Abstract:

The rapid increase of renewable energy in Italy is led by wind and solar installations. The 2017 Italian energy strategy foresees further development of these sustainable technologies, especially solar. This fact has resulted in new opportunities, challenges, and different problems to deal with. The growth of renewables makes it possible to meet the European requirements regarding energy and environmental policy, but these sources are difficult to manage because they are intermittent and non-programmable. Operationally, these characteristics can lead to instability in the voltage profile and increasing uncertainty in energy reserve scheduling. The increasing renewable production must be considered with more and more attention, especially by the Transmission System Operator (TSO). Every day, in fact, once the market outcome has been determined, the TSO issues energy dispatch orders over extended areas, defined mainly on the basis of power transmission limitations. In Italy, six market zones are defined: Northern Italy, Central-Northern Italy, Central-Southern Italy, Southern Italy, Sardinia, and Sicily. Accurate hourly renewable power forecasting for the day ahead over these extended areas brings an improvement both in terms of dispatching and of reserve management. In this study, an operational forecasting tool for the hourly solar output of the six Italian market zones is presented, and its performance is analysed. The implementation is carried out by means of a numerical weather prediction model coupled with statistical post-processing in order to derive the power forecast on the basis of the meteorological projection. The weather forecast is obtained from the limited-area model RAMS over the Italian territory, initialized with IFS-ECMWF boundary conditions. The post-processing calculates the solar power production with the Analog Ensemble (AN) technique.
This statistical approach forecasts the production using a probability distribution of the measured production registered in the past when the weather scenario looked very similar to the forecasted one. The similarity is evaluated for the components of the solar radiation: global (GHI), diffuse (DIF) and direct normal (DNI) irradiation, together with the corresponding azimuth and zenith solar angles. These are, in fact, the main factors that affect solar production. Considering that the AN performance is strictly related to the length and quality of the historical data, a training period of more than one year has been used. The training set is made of historical Numerical Weather Prediction (NWP) forecasts at 12 UTC for the GHI, DIF and DNI variables over the Italian territory, together with the corresponding hourly measured production for each of the six zones. The AN technique makes it possible to estimate the aggregate solar production in an area without information about the technological characteristics of all the solar parks present in it; besides, this information is often only partially available. Every day, the hourly solar power forecast for the six Italian market zones is made publicly available through a website.
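The analog search at the core of the AN technique can be sketched as a nearest-neighbour lookup over past forecasts; the Euclidean distance and the predictor ordering (GHI, DIF, DNI, solar angles) are assumptions for illustration:

```python
def analog_ensemble(forecast, history, k=3):
    """history: list of (past_forecast, observed_production) pairs.
    Finds the k past forecasts most similar to today's forecast and
    returns the mean of their observed productions."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    analogs = sorted(history, key=lambda h: dist(h[0], forecast))[:k]
    obs = [o for _, o in analogs]
    return sum(obs) / len(obs)  # ensemble mean; quantiles are also possible
```

In the operational system the observed productions of the analogs form a probability distribution, of which the mean above is only the simplest summary.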

Keywords: analog ensemble, electricity market, PV forecast, solar energy

Procedia PDF Downloads 158
8648 Impact of Weather Conditions on Generalized Frequency Division Multiplexing over Gamma Gamma Channel

Authors: Muhammad Sameer Ahmed, Piotr Remlein, Tansal Gucluoglu

Abstract:

Generalized frequency division multiplexing (GFDM), used over the free-space optical channel, can be a good option for implementing free-space optical communication systems. This technique has several strengths, e.g., good spectral efficiency, low peak-to-average power ratio (PAPR), adaptability and low co-channel interference. In this paper, the impact of weather conditions such as haze, rain and fog on GFDM over the gamma-gamma channel model is discussed. A trade-off between link distance and system performance under intense weather conditions is also analysed. The symbol error probability (SEP) of GFDM over the gamma-gamma turbulence channel is derived and verified with computer simulations.
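For reference, the gamma-gamma turbulence model describes the received irradiance \(I\) with the standard density below, where \(\alpha\) and \(\beta\) are the effective numbers of large- and small-scale eddies (set by the atmospheric condition) and \(K_{\nu}\) is the modified Bessel function of the second kind:

```latex
f_I(I) = \frac{2\,(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\,\Gamma(\beta)}\,
I^{\frac{\alpha+\beta}{2}-1}\,
K_{\alpha-\beta}\!\left(2\sqrt{\alpha\beta\, I}\right), \qquad I > 0
```

Haze, rain and fog enter the analysis through the attenuation and through the values of \(\alpha\) and \(\beta\).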

Keywords: free space optics, generalized frequency division multiplexing, weather conditions, gamma gamma distribution

Procedia PDF Downloads 174
8647 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error

Authors: Qianhua He, Weili Zhou, Aiwu Chen

Abstract:

A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adapted according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of both subjective and objective measures.
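The role of the stopping residue error can be illustrated with a simplified greedy pursuit. Note this sketch is matching pursuit rather than full OMP (it skips the least-squares re-fit over the selected atoms), and the unit-norm dictionary is assumed:

```python
def matching_pursuit(signal, atoms, stop_residue):
    # atoms: unit-norm vectors; greedily approximate 'signal' and stop
    # once the residue energy falls below the (adapted) threshold.
    residue = list(signal)
    coeffs = {}
    for _ in range(len(atoms)):
        if sum(r * r for r in residue) <= stop_residue:
            break
        # Pick the atom most correlated with the current residue.
        idx = max(range(len(atoms)),
                  key=lambda i: abs(sum(a * r for a, r in zip(atoms[i], residue))))
        c = sum(a * r for a, r in zip(atoms[idx], residue))
        coeffs[idx] = coeffs.get(idx, 0.0) + c
        residue = [r - c * a for r, a in zip(residue, atoms[idx])]
    return coeffs, residue
```

Raising `stop_residue` when the estimated noise level is high is the essence of the adapted stopping criterion: the pursuit then refuses to model spectral detail that is likely noise.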

Keywords: speech denoising, sparse representation, k-singular value decomposition, orthogonal matching pursuit

Procedia PDF Downloads 499
8646 Numerical Investigation on Feasibility of Electromagnetic Wave as Water Hardness Detection in Water Cooling System Industrial

Authors: K. H. Teng, A. Shaw, M. Ateeq, A. Al-Shamma'a, S. Wylie, S. N. Kazi, B. T. Chew

Abstract:

A numerical and experimental study of using a novel electromagnetic wave technique to detect water hardness concentration is presented in this paper. Simulation is a powerful and efficient engineering method which allows for quick and accurate prediction of various engineering problems. The RF module is used in this research to predict and design electromagnetic wave propagation and the resonance effect of a guided wave to detect water hardness concentration in terms of frequency domain, eigenfrequency, and mode analysis. A cylindrical cavity resonator is simulated and designed for the electric field of the fundamental mode (TM010). With the finite volume method, the three-dimensional governing equations were discretized. Boundary conditions for the simulation were the cavity material (aluminum), two ports comprising a transmitting and a receiving port, and the assumption of vacuum inside the cavity. The designed model successfully simulated the fundamental mode and extracted the S21 transmission signal within the 2.1 – 2.8 GHz region. The signal spectrum under the effect of the port selection technique and the dielectric properties of different water concentrations was studied. It is observed that the magnitude in the frequency domain increases linearly as the concentration increases. The numerical results were validated closely by the experimentally available data. Hence, it is concluded that the COMSOL simulation package is capable of providing acceptable data for microwave research.
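The TM010 resonance that the cavity is designed around follows a closed-form expression; the sketch below uses the textbook formula for an ideal cylindrical cavity (the radius value in the test is a hypothetical example, not the authors' geometry):

```python
import math

def tm010_resonance_hz(radius_m, eps_r=1.0):
    # f = x01 * c / (2*pi*a*sqrt(eps_r)), where x01 = 2.405 is the
    # first zero of the Bessel function J0 and a is the cavity radius.
    c = 299_792_458.0
    x01 = 2.405
    return x01 * c / (2 * math.pi * radius_m * math.sqrt(eps_r))
```

A radius of roughly 4.7 cm puts the empty-cavity resonance near 2.45 GHz, inside the 2.1 – 2.8 GHz window examined in the paper; loading the cavity with water shifts this resonance, which is what the S21 measurement detects.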

Keywords: electromagnetic wave technique, frequency domain, signal spectrum, water hardness concentration

Procedia PDF Downloads 272
8645 Study of Factors Linked to Alcohol Consumption among Young People from the Lycée De La Convivialité De Kanyosha in Burundi

Authors: Niyiragira Sixte, Jules Verne Nakimana

Abstract:

Introduction: Alcoholism is gradually becoming a public health issue due to its frequency, which continues to increase, especially in schools and at young ages. The general objective of the study was to contribute to determining the factors associated with alcohol consumption among young people. Methodology: This descriptive and analytical cross-sectional study, entitled “Study of factors associated with alcohol consumption among young people aged 15 to 24”, was conducted using a non-probability method with convenience sampling. The data collection techniques used were a questionnaire survey and the exploitation of documentary sources. Microsoft Word 2013, Microsoft Excel 2013 and EPI INFO 7 software were used for this purpose. Results: The results of the study showed that 43.36% of the students surveyed consumed alcohol, and the factors associated with alcohol consumption are religion, smoking and influence from friends. Conclusion: The prevalence of alcohol consumption among young people is very high, and awareness-raising is more than necessary to prevent alcohol-related harm among young people.

Keywords: consumption, alcohol, young people, factors

Procedia PDF Downloads 83
8644 Improved Artificial Bee Colony Algorithm for Non-Convex Economic Power Dispatch Problem

Authors: Badr M. Alshammari, T. Guesmi

Abstract:

This study presents a modified version of the artificial bee colony (ABC) algorithm that includes a local search technique for solving the non-convex economic power dispatch problem. The local search step is incorporated at the end of each iteration. Total system losses, valve-point loading effects and prohibited operating zones have been incorporated in the problem formulation. Thus, the problem becomes highly nonlinear, with a discontinuous objective function. The proposed technique is validated using an IEEE benchmark system with ten thermal units. Simulation results demonstrate that the proposed optimization algorithm has better convergence characteristics than the original ABC algorithm.
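The core move of the standard ABC algorithm (on which the modification builds) perturbs one coordinate of a candidate dispatch toward or away from a randomly chosen partner solution. A minimal sketch, assuming real-valued generator outputs:

```python
import random

def abc_neighbor(x, flock, j=None, rng=random):
    # Employed-bee move: v_j = x_j + phi * (x_j - x_k_j), phi in [-1, 1],
    # where x_k is a randomly chosen other solution in the colony.
    j = rng.randrange(len(x)) if j is None else j
    partner = rng.choice([s for s in flock if s is not x])
    phi = rng.uniform(-1.0, 1.0)
    v = list(x)
    v[j] = x[j] + phi * (x[j] - partner[j])
    return v
```

The paper's contribution is to follow each such iteration with a local search step; the neighbour move itself changes at most one generator's output at a time.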

Keywords: economic power dispatch, artificial bee colony, valve-point loading effects, prohibited operating zones

Procedia PDF Downloads 258
8643 Application of Dual-Stage Sugar Substitution Technique in Tommy Atkins Mangoes

Authors: Rafael A. B. De Medeiros, Zilmar M. P. Barros, Carlos B. O. De Carvalho, Eunice G. Fraga Neta, Maria I. S. Maciel, Patricia M. Azoubel

Abstract:

The use of the dual-stage sugar substitution technique (D3S) in mango was studied. It consisted of two stages, and the use of ultrasound in one or both stages was evaluated in terms of water loss and solid gain. Higher water loss was found when subjecting the fruit samples to ultrasound in the first stage and then immersing them in a Stevia-based solution with ultrasound applied in the second stage, while higher solid gain was obtained without ultrasound in the second stage. Samples were evaluated in terms of total carotenoid content and total color difference. Samples submitted to ultrasound in both D3S stages presented higher carotenoid retention compared to samples sonicated only in the first stage. The color of mangoes after the D3S process showed notable changes.

Keywords: Mangifera indica L., quality, Stevia rebaudiana, ultrasound

Procedia PDF Downloads 403
8642 Erythrophagocytic Role of Mast Cells in vitro and in vivo during Oxidative Stress

Authors: Priyanka Sharma, Niti Puri

Abstract:

Anemia develops when blood lacks enough healthy erythrocytes. Past studies indicated that anemia, inflammatory processes, and oxidative stress are interconnected. Erythrocytes are continuously exposed to reactive oxygen species (ROS) during circulation, due to normal aerobic cellular metabolism and also to the pathology of inflammatory diseases. Systemic mastocytosis and genetic depletion of mast cells have been shown to affect anemia. In the present study, we attempted to reveal whether mast cells have a direct role in the clearance, or erythrophagocytosis, of normal or oxidatively damaged erythrocytes. Murine erythrocytes were treated with tert-butyl hydroperoxide (t-BHP), an agent that induces oxidative damage and mimics in vivo oxidative stress. Normal and oxidatively damaged erythrocytes were labeled with carboxyfluorescein succinimidyl ester (CFSE) to track erythrophagocytosis. We show, for the first time, direct erythrophagocytosis of oxidatively damaged erythrocytes in vitro by RBL-2H3 mast cells as well as in vivo by murine peritoneal mast cells. Also, activated mast cells, as may be present in inflammatory conditions, showed significantly greater uptake of oxidatively damaged erythrocytes than resting mast cells. This suggests the involvement of mast cells in erythrocyte clearance during oxidative stress or inflammatory disorders. Partial inhibition of phagocytosis by various inhibitors indicated that this process may be controlled by several pathways. Hence, our study provides important evidence for the involvement of mast cells in severe anemia due to inflammation and oxidative stress and might be helpful in circumventing adverse anemic disorders.

Keywords: mast cells, anemia, erythrophagocytosis, oxidatively damaged erythrocytes

Procedia PDF Downloads 254
8641 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within artistic casting with the lost-wax Ceramic Shell technique. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermo-resistant material. This material is heated to melt the PLA, obtaining an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, sculptures created with this technique have a size limit: when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP printer allows obtaining more detailed and smaller pieces than an FDM one. Such small models are quite difficult and complex to melt using the lost-wax Ceramic Shell casting technique. As an alternative, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt the model and fire them.
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to properly distribute the metal. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended size for using 3D printers, both with PLA and with resin, and first tests are being done to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 135
8640 A New Formulation Of The M And M-theta Integrals Generalized For Virtual Crack Closure In A Three-dimensional Medium

Authors: Loïc Chrislin Nguedjio, S. Jerome Afoutou, Rostand Moutou Pitti, Benoit Blaysat, Frédéric Dubois, Naman Recho, Pierre Kisito Talla

Abstract:

The safety and durability of structures remain challenging fields that continue to draw the attention of designers. One widely adopted approach is fracture mechanics, which provides methods to evaluate crack stability in complex geometries and under diverse loading conditions. The global energy approach is particularly comprehensive, as it calculates the energy release rate required for crack initiation and propagation using path-independent integrals. This study aims to extend these invariant M and M-theta integrals to virtual crack closure in a three-dimensional medium, with the goal of enhancing the accuracy of failure predictions. The ultimate objective is to create more robust materials while optimizing structural safety and durability. By integrating the real and virtual field method with the virtual crack closure technique, a new formulation of the M-integral is introduced. This formulation establishes a direct relationship between the local stresses on the crack faces and the opening displacements, allowing for an accurate calculation of the fracture energy. The analytical calculations are grounded in the assumption that the energy needed to close a crack virtually is equal to the energy released during its opening. This novel integral is implemented in a finite element code using Cast3M to simulate cracking criteria within a wood material context. Initially, the numerical calculations are restricted to plane strain conditions, but they are later extended to three-dimensional environments, taking into account the orthotropic nature of wood.
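The energy balance underlying virtual crack closure is Irwin's crack closure integral: the energy released when the crack advances by \(\Delta a\) equals the work needed to close it again, relating the stress \(\sigma_{yy}\) ahead of the tip to the opening displacement \(\Delta u_y\) behind it (mode I form shown; the paper's M-integral formulation builds on this idea):

```latex
G_I = \lim_{\Delta a \to 0} \frac{1}{2\,\Delta a}
\int_{0}^{\Delta a} \sigma_{yy}(r)\, \Delta u_y(\Delta a - r)\, \mathrm{d}r
```

In a finite element implementation the integral reduces to products of nodal forces at the crack tip and opening displacements of the nodes just behind it.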

Keywords: energy release rate, path-independent integrals, virtual crack closure, orthotropic material

Procedia PDF Downloads 6
8639 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution

Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques

Abstract:

The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step in radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. Radiotherapy planning is performed by complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs), and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, costing approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally lack the funds to acquire radiotherapy planning workstations, resulting in waiting queues for new patients' treatment. The SIPRAD project is composed of a set of integrated, interoperable software modules able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique that performs automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices, extending to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. In addition, comparing SIPRAD-Body with existing solutions yields a contour similarity of at least 78%, assessed by four criteria: contour area, contour length, distance between mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP).
The exams covered patients of different ethnicities, ages, tumor severities, and body regions. Even in services that already have workstations, SIPRAD can work alongside them on PCs, since the two systems interoperate through the DICOM protocol, increasing workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, given its degree of similarity, both for new radiotherapy planning services and for existing ones.
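As one of the four comparison criteria named above, the Jaccard index measures the overlap between two contour regions. A minimal sketch (not the SIPRAD implementation; the masks are hypothetical stand-ins for rasterized body contours):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index (intersection over union) of two binary region masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as identical
    return float(np.logical_and(a, b).sum()) / float(union)

# Hypothetical 5x5 masks standing in for rasterized contours
auto = np.zeros((5, 5), dtype=bool)
auto[1:4, 1:4] = True      # automatic contour: 9 pixels
manual = np.zeros((5, 5), dtype=bool)
manual[2:5, 2:5] = True    # manual contour: 9 pixels, shifted by one
print(jaccard_index(auto, manual))  # 4 overlapping pixels / 14 in union
```

A value of 1.0 indicates identical contours, 0.0 no overlap at all; the 78% similarity reported above corresponds to indices of at least 0.78.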

Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)

Procedia PDF Downloads 296
8638 Neural Style Transfer Using Deep Learning

Authors: Shaik Jilani Basha, Inavolu Avinash, Alla Venu Sai Reddy, Bitragunta Taraka Ramu

Abstract:

Neural style transfer is a technique for building a picture with the same "content" as a starting image but the "style" of another picture of our choosing: it merges the style of one image into another while retaining the original information. The only change is to the image's appearance, giving it an additional artistic sense. The content image provides the plan or structure of the drawing, while the style image provides the colors and textures used to portray the style. It is a computer vision program that learns and processes images through deep convolutional neural networks. To implement the software, we trained deep learning models on the training data; whenever a user supplies a content image and a style image, the style is transferred onto the original image and the result is shown as the output.
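The "style" of an image is commonly represented by Gram matrices of CNN feature maps (following the standard style-transfer formulation, which the abstract does not detail). A minimal numpy sketch, with random arrays standing in for activations of one convolutional layer:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CNN feature map with shape (channels, height, width).
    Its entries are channel-wise correlations, which capture texture ('style')
    while discarding the spatial layout that carries the 'content'."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)    # normalized correlation matrix

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(gen_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))

# Random arrays standing in for activations of one convolutional layer
rng = np.random.default_rng(0)
f_style = rng.standard_normal((2, 4, 4))
f_gen = rng.standard_normal((2, 4, 4))
print(style_loss(f_style, f_style))  # zero for identical style
```

In full style transfer, this style loss is summed over several layers and minimized jointly with a content loss on the raw feature maps.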

Keywords: neural networks, computer vision, deep learning, convolutional neural networks

Procedia PDF Downloads 95
8637 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first uses a fuzzy sensitivity criterion, exploiting from experimental data the relationship between physical parameters and all the sensory quality features for each evaluator. Next, an OWA aggregation procedure is applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. Again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates a threshold automatically, effectively reducing the human subjectivity and arbitrariness of manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product, stonewashed denims, whose finish is usually considered the most important quality criterion in jeans' evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant-feature selection method are shown by the variability in the number of physical features in the sets of selected relevant parameters. Instead of selecting an identical number of features with a predefined threshold, the proposed method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable results, the percolation technique has been applied to combine the fuzzy global relevancy and OWA global relevancy criteria, clearly distinguishing the scores of relevant physical features from those of irrelevant ones.
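For illustration, an OWA aggregation of evaluator scores can be sketched as follows (the scores and weight vectors here are hypothetical, not the paper's data):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: sort the inputs in descending order,
    then apply a fixed weight vector (summing to 1) position-wise."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Hypothetical relevancy scores for one physical feature from 4 evaluators
scores = [0.9, 0.4, 0.7, 0.6]
print(owa(scores, [0.25, 0.25, 0.25, 0.25]))  # uniform weights give the plain mean (0.65)
print(owa(scores, [0.5, 0.3, 0.15, 0.05]))    # optimistic weights favor high scores
```

Because the weights attach to ranks rather than to particular evaluators, the choice of weight vector tunes the aggregation smoothly between max, mean, and min behavior.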

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 605
8636 Analysis of Barbell Kinematics of Snatch Technique among Women Weightlifters in India

Authors: Manish Kumar Pillai, Madhavi Pathak Pillai, Rajender Lal, Dinesh P. Sharma

Abstract:

India has produced few elite weightlifters over the years; Karnam Malleshwari is the only woman to have won an Olympic medal for India. On introspection, there seem to be several reasons, and one probable cause could be the lack of biomechanical analysis for technique improvement. The analysis of motion in sports has gained prime importance for technical improvement: it helps an athlete develop a better understanding of his or her own skills and accelerates the technical learning process. Kinematics is concerned with describing and quantifying both the linear and angular positions of bodies and their time derivatives. The analysis of barbell movement is very important in weightlifting technique. However, women's weightlifting has a shorter history than men's; video-based research on women's weightlifting is scarce, and kinematic analyses of Indian weightlifters at the national level are especially limited. Hence, the present investigation aimed to analyze the barbell kinematics of women weightlifters in India. The study was delimited to the medal winners of the 69-kilogram weight category in the All India Inter-University Competition, ranging in age between 18 and 28 years. The variables selected for the mechanical analysis of barbell kinematics included barbell trajectory, velocity, acceleration, potential energy, kinetic energy, mechanical energy, and average power output. Performance was captured during the competition by two DV PC-60 digital cameras (Panasonic Company, Ltd). The cameras were placed 6 meters perpendicular to the plane of motion, 130 cm above the ground, to capture the frontal and lateral views of the lifters simultaneously. The video recordings were analyzed using Dartfish software, and the barbell kinematics were derived from the information it provided.
The findings of the study clearly show differences among the three lifters in the selected kinematic variables across the five phases of their snatch technique.
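The kinematic variables listed above can, in principle, be derived from digitized barbell positions by numerical differentiation. A sketch under assumed sampling (the trajectory below is a toy function, not the study's data):

```python
import numpy as np

def barbell_kinematics(y, dt, mass, g=9.81):
    """Velocity, acceleration, and energies from sampled vertical barbell
    positions y (m) taken at a fixed interval dt (s)."""
    v = np.gradient(y, dt)             # central-difference velocity (m/s)
    a = np.gradient(v, dt)             # acceleration (m/s^2)
    pe = mass * g * y                  # potential energy (J)
    ke = 0.5 * mass * v ** 2           # kinetic energy (J)
    me = pe + ke                       # mechanical energy (J)
    power = np.gradient(me, dt)        # instantaneous power output (W)
    return v, a, pe, ke, me, power

# Toy trajectory sampled at 50 Hz with a 69 kg load (illustrative only)
t = np.linspace(0.0, 1.0, 51)
y = 1.2 * t ** 2                       # hypothetical bar height (m)
v, a, pe, ke, me, power = barbell_kinematics(y, t[1] - t[0], mass=69.0)
```

Averaging the power array over a lift phase gives the average power output per phase reported in such analyses.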

Keywords: dartfish, digital camera, kinematic, snatch, weightlifting

Procedia PDF Downloads 136
8635 Development of Single Layer of WO3 on Large Spatial Resolution by Atomic Layer Deposition Technique

Authors: S. Zhuiykov, Zh. Hai, H. Xu, C. Xue

Abstract:

Unique and distinctive properties can be obtained in a two-dimensional (2D) semiconductor such as tungsten trioxide (WO3) when it is reduced from a multi-layer to one fundamental layer in thickness. Achieving this transition over a large spatial resolution without damaging the single layer remained elusive until the atomic layer deposition (ALD) technique was utilized. Here we report the ALD-enabled, atomic-layer-precision development of a single layer of WO3 with a thickness of 0.77±0.07 nm over a large spatial resolution, using (tBuN)2W(NMe2)2 as the tungsten precursor and H2O as the oxygen precursor, without affecting the underlying SiO2/Si substrate. The versatility of ALD lies in tuning the recipe to achieve WO3 with the desired number of layers, including a monolayer. Governed by self-limiting surface reactions, the ALD-enabled approach is versatile, scalable, and applicable to a broader range of 2D semiconductors and various device applications.

Keywords: Atomic Layer Deposition (ALD), tungsten oxide, WO₃, two-dimensional semiconductors, single fundamental layer

Procedia PDF Downloads 242
8634 Effect of Temperature on Corrosion Fatigue Cracking Behavior of Inconel 625 in Steam and Supercritical Water

Authors: Hasan Izhar Khan, Naiqiang Zhang, Hong Xu, Zhongliang Zhu, Dongfang Jiang

Abstract:

Inconel 625 is a nickel-based alloy with outstanding corrosion resistance, developed for use at service temperatures ranging from cryogenic to 980°C. It has a wide range of applications in the nuclear, petrochemical, chemical, marine, aeronautical, and aerospace industries. Currently, it is one of the candidate materials for structural use in ultra-supercritical (USC) power plants. In high-temperature corrosive media, metallic materials are susceptible to corrosion fatigue (CF). CF is an interaction between cyclic stress and a corrosive environment that acts on a susceptible material and results in the initiation and propagation of cracks. For the application of Inconel 625 as a structural material in USC power plants, its CF behavior must be evaluated in steam and supercritical water (SCW) environments. Fatigue crack growth rate (FCGR) curves obtained from CF experiments are required to predict the residual life of metallic materials used in power plants. In this study, FCGRs of Inconel 625 were obtained using compact tension specimens at 550-650°C in steam (8 MPa) and SCW (25 MPa). The dissolved oxygen level was kept constant at 8000 ppb for the tests conducted in steam and SCW. The tests were performed under a sine loading waveform, a 1 Hz loading frequency, a stress ratio of 0.6, and a maximum stress intensity factor of 32 MPa√m. The crack growth rate (CGR) was measured using the direct current potential drop technique. Results showed that the CGR increased with temperature under the tested environmental conditions. The mechanisms behind the influence of temperature on the FCGR are further discussed.

Keywords: corrosion fatigue, crack growth rate, nickel-based alloy, temperature

Procedia PDF Downloads 131
8633 Nation Branding: Guidelines for Identity Development and Image Perception of Thailand Brand in Health and Wellness Tourism

Authors: Jiraporn Prommaha

Abstract:

The purpose of this research is to study the development of the Thailand brand identity and the perception of its image, in order to derive guidelines for identity development and image perception of the Thailand brand in health and wellness tourism. The study uses mixed methods research, combining qualitative and quantitative approaches. The qualitative part consists of in-depth interviews with 11 key informants: executive administrators from the public and private sectors, together with scholars and experts on identity and image issues. The quantitative part used questionnaires to collect data from 800 foreign tourists: 400 Chinese tourists and 400 UK tourists. Exploratory Factor Analysis (EFA) was used to determine the relations among the variables by grouping them into factors via the Varimax rotation technique, characterizing the perception of the Thailand brand image in the two countries, China and the UK. The resulting guidelines for brand identity development and image perception of health and wellness tourism in Thailand are as follows: (1) develop communication to build a country-wide understanding of the meaning of the term 'health and beauty tourism', (2) develop human resources as a national agenda, (3) develop awareness raising in the conservation and preservation of the country's natural resources, (4) develop cooperation among all stakeholders in health and wellness businesses, (5) develop digital communication throughout the country, and (6) develop safety in tourism.
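For illustration only, the Varimax rotation underlying the EFA can be sketched in a few lines of numpy (the study presumably used a statistics package, and the loading matrix below is random, not survey data):

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (variables x factors).
    Finds an orthogonal rotation R that maximizes the variance of the
    squared loadings, so each variable loads mainly on a few factors."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R, R

# Random loading matrix: 6 observed variables, 2 latent factors
rng = np.random.default_rng(1)
loadings = rng.standard_normal((6, 2))
rotated, R = varimax(loadings)
# R is orthogonal, so each variable's communality is preserved by rotation
```

Because the rotation is orthogonal, it changes how variance is distributed across factors without changing how much of each variable the factors explain, which is what makes the rotated groups interpretable.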

Keywords: brand identity, image perception, nation branding, health and wellness tourism, mixed methods research

Procedia PDF Downloads 200
8632 Compression Strength of Treated Fine-Grained Soils with Epoxy or Cement

Authors: M. Mlhem

Abstract:

Geotechnical engineers face many problematic soils during construction, and they have the choice of replacing these soils with more appropriate ones or improving the soil's engineering properties through a suitable stabilization technique. Improving soils is generally more environmentally friendly, easier, and more economical than the alternatives. Soil stabilization is applied by introducing a cementing agent or by injecting a substance that fills the pore volume. Chemical stabilizers are divided into two groups: traditional agents such as cement or lime, and non-traditional agents such as polymers. This paper studies the effect of epoxy additives on the compression strength of four types of soil and compares it with the effect of cement on the same soils. Overall, the epoxy additives are more effective in increasing the strength of different types of soil, regardless of classification. On the other hand, no clear relation was found between the studied parameters (liquid limit, percentage passing sieve No. 200, unit weight) and the strength of the samples for the different soil types.

Keywords: additives, clay, compression strength, epoxy, stabilization

Procedia PDF Downloads 128
8631 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a multimedia data fusion approach for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. Two types of data enter the fusion. The first is features extracted from text using the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied to fuse the information from these two sources. Our experiments indicate that, compared with approaches using an individual data source, the proposed data fusion approach increases prediction accuracy for event detection: the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
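Dempster's rule of combination, the fusion step named above, can be sketched as follows (the mass values are hypothetical, not the paper's):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    with Dempster's rule, renormalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical masses from a text-based and an image-based classifier
E, N = frozenset({"event"}), frozenset({"no_event"})
THETA = E | N                       # frame of discernment (ignorance)
m_text = {E: 0.7, N: 0.1, THETA: 0.2}
m_image = {E: 0.6, N: 0.2, THETA: 0.2}
fused = dempster_combine(m_text, m_image)
print(fused[E])  # belief in "event" rises above either single source
```

Note how agreement between the two sources concentrates mass on the shared hypothesis, which is why the fused accuracy can exceed that of either modality alone.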

Keywords: data fusion, Dempster-Shafer theory, data mining, event detection

Procedia PDF Downloads 411
8630 Therapeutic Drug Monitoring by Dried Blood Spot and LC-MS/MS: Novel Application to Carbamazepine and Its Metabolite in Paediatric Population

Authors: Giancarlo La Marca, Engy Shokry, Fabio Villanelli

Abstract:

Epilepsy is one of the most common neurological disorders, with an estimated prevalence of 50 million people worldwide; twenty-five percent of the epilepsy population are children under the age of 15 years. For antiepileptic drugs (AEDs), there is a poor correlation between plasma concentration and dose, especially in children, which has been attributed to greater pharmacokinetic variability than in adults. Hence, therapeutic drug monitoring (TDM) is recommended to control toxicity while maintaining drug exposure. Carbamazepine (CBZ) is a first-line AED and the drug of first choice in trigeminal neuralgia. CBZ is metabolised in the liver into carbamazepine-10,11-epoxide (CBZE), its major metabolite, which is equipotent. This creates the need for an assay able to monitor the levels of both CBZ and CBZE. The aim of the present study was to develop and validate an LC-MS/MS method for the simultaneous quantification of CBZ and CBZE in dried blood spots (DBS). The DBS technique overcomes many of the logistical problems, ethical issues, and technical challenges of classical plasma sampling, and LC-MS/MS is regarded as superior to immunoassays and HPLC/UV methods owing to its better specificity and sensitivity and its lack of interference or matrix effects. Our method combines the advantages of DBS and LC-MS/MS in clinical practice. Extraction was performed with methanol-water-formic acid (80:20:0.1, v/v/v). Chromatographic elution was achieved using a linear gradient with a mobile phase of acetonitrile-water-0.1% formic acid at a flow rate of 0.50 mL/min. The method was linear over the ranges 1-40 mg/L and 0.25-20 mg/L for CBZ and CBZE, respectively, with limits of quantification of 1.00 mg/L and 0.25 mg/L. Intra-day and inter-day assay precisions were less than 6.5% and 11.8%, respectively.
The DBS technique was evaluated with respect to the extraction solvent, spot homogeneity, and stability in DBS, and results from a comparison with the plasma assay are also presented. The novelty of the present work lies in being the first to quantify CBZ and its metabolite from a single 3.2 mm DBS disc from one finger-prick sample (3.3-3.4 µl of blood) by LC-MS/MS in a 10 min chromatographic run.

Keywords: carbamazepine, carbamazepine-10, 11-epoxide, dried blood spots, LC-MS/MS, therapeutic drug monitoring

Procedia PDF Downloads 417
8629 Fabrication and Characterization of Gelatin Nanofibers Dissolved in Concentrated Acetic Acid

Authors: Kooshina Koosha, Sima Habibi, Azam Talebian

Abstract:

Electrospinning is a simple, versatile, and widely accepted technique for producing ultra-fine fibers ranging from nanometers to microns in diameter. Recently there has been great interest in developing this technique to produce nanofibers with novel properties and functionalities. The electrospinning field is extremely broad, and consequently there have been many useful reviews discussing aspects from the detailed fiber-formation mechanism to nanofiber formation and a wide range of applications. The focus of this study, by contrast, is quite narrow, highlighting electrospinning parameters. This work briefly covers the solution and processing parameters (for instance: concentration, solvent type, voltage, flow rate, and the distance between the collector and the needle tip) that affect the morphological characteristics of the nanofibers, such as their diameter. In this paper, a comprehensive study is presented on producing nanofibers from the natural polymer gelatin.

Keywords: electrospinning, solution parameters, process parameters, natural fiber

Procedia PDF Downloads 274
8628 Dishonesty and Achievement: An Experiment of Self-Revealing Individual Cheating

Authors: Gideon Yaniv, Erez Siniver, Yossef Tobol

Abstract:

The extensive body of economic and psychological research correlating students' cheating with their grade point average (GPA) consistently finds a significant negative relationship between the two. However, this literature is based entirely on students' responses to direct-question surveys asking whether they have ever cheated on their academic assignments. The present paper reports the results of a two-round experiment designed to expose student cheating at the individual level and correlate it with GPA. The experiment involved two classes of third-year economics students incentivized by a competitive reward to answer a multiple-choice trivia quiz without consulting their electronic devices. While this prohibition was deliberately left unenforced in the first round, providing an opportunity to cheat, it was strictly enforced in the second, conducted two months later in the same classes with the same quiz. A comparison of subjects' performance in the two rounds self-revealed a considerable extent of cheating in the first one. Regressing individual cheating levels on subjects' gender and GPA exhibited no significant differences in cheating between males and females. However, cheating by both genders was found to increase significantly with GPA, implying, in sharp contrast with the direct-question surveys, that higher achievers are bigger cheaters. A second experiment, which allowed subjects to answer the quiz in the privacy of their own cars, reveals that when they really feel safe to cheat, many subjects cheat maximally, challenging the literature's claim that people generally cheat modestly.

Keywords: academic achievement, cheating behavior, experimental data, grade-point average

Procedia PDF Downloads 209
8627 Modelling Interactions between Saturated and Unsaturated Zones by Hydrus 1D, Plain of Kairouan, Central Tunisia

Authors: Mariem Saadi, Sabri Kanzari, Adel Zghibi

Abstract:

In semi-arid areas like the Kairouan region, salinization of soils and aquifers has become an increasing concern due to constant irrigation with saline water and the overuse of groundwater resources. In this study, a methodology has been developed to evaluate groundwater contamination risk based on the hydraulic properties of the unsaturated zone. Two soil profiles with different ranges of salinity, each 30 m deep and characterized by direct recharge of the aquifer, were chosen: one in the north of the plain and one in the south. Simulations were conducted with the Hydrus-1D code using measured precipitation data for the period 1998-2003 and calculated evapotranspiration for both chosen profiles. Four combinations of initial water content and salt concentration conditions were used in the simulations to find the best match between simulated and measured values. The successful calibration of Hydrus-1D allowed several scenarios to be investigated in order to assess the contamination risk under different natural conditions. The aquifer contamination risk is related to natural conditions: it increased under climate change and rising temperatures and decreased in the presence of a clay layer in the unsaturated zone. Hydrus-1D proved a useful tool for predicting groundwater level and quality in the case of direct recharge, even in the absence of any information on the soil layers other than texture.

Keywords: Hydrus-1D, Kairouan, salinization, semi-arid region, solute transport, unsaturated zone

Procedia PDF Downloads 183
8626 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach

Authors: Adeep Hande, Shubham Agarwal

Abstract:

This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy. We also explored the use of the pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were BERT-based, with the multilingual model achieving an F1 score of 0.83. Furthermore, pseudolabeling significantly improved the performance of the BERT-based models. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.
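The pseudolabeling step can be sketched as a confidence-threshold filter over model predictions (the threshold and probabilities below are illustrative, not the paper's):

```python
import numpy as np

def select_pseudolabels(probs, threshold=0.95):
    """Keep only confident predictions on unlabeled data as pseudolabels.
    probs: array of class probabilities, shape (n_samples, n_classes)."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# Hypothetical classifier outputs for three unlabeled tweets
probs = np.array([[0.98, 0.02],   # confidently class 0: kept
                  [0.55, 0.45],   # uncertain: dropped
                  [0.03, 0.97]])  # confidently class 1: kept
idx, labels = select_pseudolabels(probs, threshold=0.95)
# The kept (text, label) pairs are appended to the labeled set and the
# model is retrained; iterating this loop is the pseudolabeling technique.
```

By adding only high-confidence predictions back into training, the model leverages unlabeled data while limiting the noise from its own mistakes, which is what makes the approach useful under data sparsity.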

Keywords: large language models, semi-supervised learning, sexism detection, data sparsity

Procedia PDF Downloads 70