Search results for: soxhlet extraction method
18553 A Trapezoidal-Like Integrator for the Numerical Solution of One-Dimensional Time Dependent Schrödinger Equation
Authors: Johnson Oladele Fatokun, I. P. Akpan
Abstract:
In this paper, the one-dimensional time-dependent Schrödinger equation is discretized by the method of lines, using a second-order finite difference approximation to replace the second-order spatial derivative. The resulting system of stiff ordinary differential equations (ODEs) in time is solved numerically by an L-stable trapezoidal-like integrator. Results show a relative maximum error of order 10⁻⁴ in the interval of consideration. The method compares favorably with an existing scheme.
Keywords: Schrödinger equation, partial differential equations, method of lines (MOL), stiff ODE, trapezoidal-like integrator
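As a rough illustration of the semi-discretization this abstract describes, the sketch below applies the method of lines to the free one-dimensional Schrödinger equation (second-order central differences in space) and steps the resulting stiff ODE system with the plain trapezoidal (Crank-Nicolson) rule. The grid sizes, the initial Gaussian pulse, and the use of Crank-Nicolson in place of the paper's L-stable trapezoidal-like integrator are all assumptions made for illustration.

```python
# Method-of-lines sketch for the free 1-D time-dependent Schrodinger equation
# i * dpsi/dt = -d^2 psi / dx^2   (units with hbar = 1, 2m = 1).
# Space: second-order central differences; time: trapezoidal (Crank-Nicolson)
# rule, which is A-stable and norm-preserving for this Hermitian problem.
import cmath

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main, and super-diagonals a, b, c."""
    n = len(b)
    cp, dp = [0j] * n, [0j] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0j
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

N, dt = 201, 0.005
x0, x1 = -10.0, 10.0
h = (x1 - x0) / (N - 1)
psi = [cmath.exp(-(x0 + j * h) ** 2) for j in range(N)]  # Gaussian pulse
psi[0] = psi[-1] = 0j                                    # Dirichlet walls

# (I - i*dt/2 * L) psi_new = (I + i*dt/2 * L) psi_old,  L = tridiag(1,-2,1)/h^2
r = 1j * dt / (2 * h * h)
a = [-r] * N; b = [1 + 2 * r] * N; c = [-r] * N
a[0] = c[0] = a[-1] = c[-1] = 0j                         # boundary rows: psi = 0
b[0] = b[-1] = 1 + 0j

norm0 = sum(abs(p) ** 2 for p in psi) * h
for _ in range(100):
    rhs = [0j] * N
    for j in range(1, N - 1):
        rhs[j] = (1 - 2 * r) * psi[j] + r * (psi[j - 1] + psi[j + 1])
    psi = thomas(a, b, c, rhs)
norm1 = sum(abs(p) ** 2 for p in psi) * h
print(norm0, norm1)   # the trapezoidal step preserves the L2 norm
```

Because the semi-discrete operator is Hermitian, the trapezoidal update is a Cayley transform and conserves the discrete L2 norm to roundoff, which is a convenient self-check for any integrator applied to this system.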
Procedia PDF Downloads 418
18552 Influence of the Coarse-Graining Method on a DEM-CFD Simulation of a Pilot-Scale Gas Fluidized Bed
Authors: Theo Ndereyimana, Yann Dufresne, Micael Boulet, Stephane Moreau
Abstract:
The DEM (Discrete Element Method) is widely used in industry to simulate large-scale flows of particles; in a fluidized bed, for instance, it allows the trajectory of every particle to be predicted. One of the main limits of the DEM is the computational time. The CGM (Coarse-Graining Method) has been developed to tackle this issue. The goal is to increase the size of the particles and, by this means, decrease their number. The method leads to a reduction of the collision frequency due to the reduced number of particles. Multiple characteristics of the particle movement and of the fluid flow are affected when DEM is coupled with CFD (Computational Fluid Dynamics). The main characteristic that is impacted is the energy dissipation of the system; to regain this dissipation, an ADM (Additional Dissipative Mechanism) can be added to the model. The objective of the current work is to observe the influence of the choice of ADM and of the coarse-graining factor on the numerical results. These results are compared with experimental results from a fluidized bed and with a numerical model of the same fluidized bed that does not use the CGM. The numerical model represents a 3D cylindrical fluidized bed with 9.6M Geldart B-type particles in a bubbling regime.
Keywords: additive dissipative mechanism, coarse-graining, discrete element method, fluidized bed
Procedia PDF Downloads 70
18551 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. A graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input to calculate porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats.
Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
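The layer-by-layer "burn" described above can be sketched in two dimensions as follows. The toy 7x7 grid, the 4-neighbourhood, and the stopping rule are illustrative assumptions; the actual software operates on 3D voxel domains and adds medial-axis refinement on top of this peeling step.

```python
# 2-D sketch of the "burn" algorithm on a binarized domain (1 = void, 0 = solid).
# Layer 1 = void cells touching solid; each subsequent layer peels the next
# shell of void cells. Where burn fronts collide, the medial axis lies.

def burn(grid):
    """Assign a burn-layer index to every void cell (1 = adjacent to solid)."""
    rows, cols = len(grid), len(grid[0])
    layer = [[0] * cols for _ in range(rows)]   # 0 = solid or not yet burnt
    k = 0
    while True:
        k += 1
        front = []
        for i in range(rows):
            for j in range(cols):
                if grid[i][j] != 1 or layer[i][j] != 0:
                    continue
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    outside = not (0 <= ni < rows and 0 <= nj < cols)
                    # burn if a neighbour is solid, outside, or burnt earlier
                    if outside or grid[ni][nj] == 0 or 0 < layer[ni][nj] < k:
                        front.append((i, j))
                        break
        if not front:                            # all void space characterized
            return layer
        for i, j in front:
            layer[i][j] = k

# 7x7 domain: a solid ring enclosing a 5x5 void pore
grid = [[0] * 7] + [[0] + [1] * 5 + [0] for _ in range(5)] + [[0] * 7]
layers = burn(grid)
porosity = sum(map(sum, grid)) / 49.0
print(porosity, layers[3][3])   # the pore centre burns in the last layer
```

In this toy pore, the burn index increases toward the centre, so the single central cell (the local maximum where the fronts collide) is the 2D analogue of the medial axis voxel.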
Procedia PDF Downloads 116
18550 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery
Authors: Mohamed Hafid, Marcel Lacroix
Abstract:
This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the layer of tumor, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation. The phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM) combined with the Broyden Method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues, (b) of the initial guesses for the unknown thermal properties, (c) of the data capture frequency, and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method
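A minimal sketch of the Levenberg-Marquardt update used in such inverse problems is given below on a deliberately simple one-parameter model (an exponential temperature decay). The model, the synthetic data, and the damping schedule are assumptions for illustration only; they do not reproduce the paper's Pennes bioheat formulation or its coupling with the Broyden method.

```python
# Levenberg-Marquardt sketch for a one-parameter inverse problem: recover the
# decay rate k in T(t) = exp(-k t) from "measured" temperatures. The damped
# Gauss-Newton step shrinks the damping on success and grows it on failure.
import math

def lm_fit(ts, ys, k0, lam=1e-2, iters=50):
    k = k0
    def cost(kk):
        return sum((y - math.exp(-kk * t)) ** 2 for t, y in zip(ts, ys))
    for _ in range(iters):
        r = [y - math.exp(-k * t) for t, y in zip(ts, ys)]   # residuals y - model
        J = [t * math.exp(-k * t) for t in ts]               # d(residual)/dk
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        step = -Jtr / (JtJ * (1 + lam))                      # damped GN step
        if cost(k + step) < cost(k):
            k, lam = k + step, lam / 2                       # accept, relax damping
        else:
            lam *= 4                                         # reject, damp harder
    return k

ts = [0.1 * i for i in range(20)]
ys = [math.exp(-1.7 * t) for t in ts]       # synthetic data with k_true = 1.7
k_est = lm_fit(ts, ys, k0=0.5)
print(k_est)                                # converges toward k_true = 1.7
```

The same accept/reject logic carries over when several parameters (e.g., metabolic heat generation and blood perfusion rate) are estimated at once; the scalar division simply becomes a small linear solve.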
Procedia PDF Downloads 200
18549 Establishment and Application of Numerical Simulation Model for Shot Peen Forming Stress Field Method
Authors: Shuo Tian, Xuepiao Bai, Jianqin Shang, Pengtao Gai, Yuansong Zeng
Abstract:
Shot peen forming is an essential forming process for aircraft metal wing panels. With the development of computer simulation technology, scholars have proposed a numerical simulation method for shot peen forming based on the stress field. Three shot peen forming indexes (crater diameter, shot speed, and surface coverage) are required as simulation parameters in the stress field method. It is necessary to establish the relationship between simulation and experimental process parameters in order to simulate the deformation under different shot peen forming parameters. Shot peen forming tests of 2024-T351 aluminum alloy workpieces were carried out using a uniform test design method, with three factors selected: air pressure, feed rate, and shot flow. Based on the results, a second-order response surface model between the simulation parameters and the uniform test factors was established by stepwise regression in MATLAB. The response surface model was combined with the stress field method to simulate the shot peen forming deformation of the workpiece. Compared with the experimental results, the simulated values were smaller than the corresponding test values; the maximum and average errors were 14.8% and 9%, respectively.
Keywords: shot peen forming, process parameter, response surface model, numerical simulation
Procedia PDF Downloads 87
18548 Experiment of Geophysical Exploration in Egypt
Authors: Ramadan Fayez Zowaid Hussein
Abstract:
Exploration geophysics is an applied branch of geophysics, and its methods are very important not only in Egypt but across Africa and the Middle East. This research examines the importance of these methods, focusing on the benefits of geophysical exploration and how to apply it scientifically. Geophysical exploration helps detect earthquakes and supports seismology; it also helps map the subsurface structure of a region, and it includes magnetic techniques, such as aeromagnetic surveys, for mapping magnetic anomalies. Although these methods are costly and not yet in use locally, they are valuable: they make it possible, for example, to detect earthquakes, examine the ground easily, and locate elements of the earth. In conclusion, the use of geophysical exploration is very important, and it must be highlighted and discussed not just in the Middle East but also in Africa.
Keywords: geophysics, magnetic, gravitational, hydrocarbon exploration
Procedia PDF Downloads 87
18547 Chromatography Study of Fundamental Properties of Medical Radioisotope Astatine-211
Authors: Evgeny E. Tereshatov
Abstract:
Astatine-211 is considered one of the most promising radionuclides for Targeted Alpha Therapy. In order to develop reliable procedures to label biomolecules and apply efficient delivery vehicle principles, one should understand the main chemical characteristics of astatine. The short half-life of 211At (~7.2 h) and the absence of any stable isotopes of this element are limiting factors in studying the behavior of astatine. Our team has developed a procedure for rapid and efficient isolation of astatine from irradiated bismuth material in nitric acid media, based on 3-octanone and 1-octanol extraction chromatography resins. The process has been automated and takes 20 min from the beginning of the target dissolution to the elution of the At-211 fraction. Our next step is to consider commercially available chromatography resins and their applicability to astatine purification in the same media. The results obtained, along with the corresponding sorption mechanisms, will be discussed.
Keywords: astatine-211, chromatography, automation, mechanism, radiopharmaceuticals
Procedia PDF Downloads 92
18546 An Implementation of Multi-Media Applications in Teaching Structural Design to Architectural Students
Authors: Wafa Labib
Abstract:
Teaching methods that rely on lectures, workshops, and tutorials for the presentation and discussion of ideas have become outdated; they were developed outside the discipline of architecture, in the college of engineering, and do not satisfy architecture students' needs, causing them many difficulties in integrating structure into their designs. In an attempt to improve structure teaching methods, this paper proposes a supportive teaching/learning tool using multi-media applications, which seeks to better meet architecture students' needs and capabilities and to improve the understanding and application of basic and intermediate structural engineering and technology principles. Before introducing the use of multi-media as a supportive teaching tool, a questionnaire was distributed to third-year students of a structural design course, forming a sample of 90 cases. The primary aim of the questionnaire was to identify the students' learning styles and to investigate whether the selected method of teaching could make the teaching and learning process more efficient. Students' reactions to the use of this method were measured using three key elements, indicating that this method is an appropriate teaching method for the nature of the students and the course.
Keywords: teaching method, architecture, learning style, multi-media
Procedia PDF Downloads 437
18545 Appraisal of Humanitarian Supply Chain Risks Using Best-Worst Method
Authors: Ali Mohaghar, Iman Ghasemian Sahebi, Alireza Arab
Abstract:
In recent decades, the increasing occurrence of human-made and natural disasters has had irreparable effects on human life. Hence, one of the important issues in humanitarian supply chain management is identifying and prioritizing the different risks and finding suitable solutions for encountering them at the time of a disaster. This study attempts to provide a comprehensive review of humanitarian supply chain risks in a case study of the Tehran Red Crescent Societies. For this purpose, the Best-Worst Method (BWM) was used to analyze the risks of the humanitarian supply chain. Twenty-two risks were identified based on the literature and interviews with four experts, and the importance of each risk was calculated with the BWM. The findings showed that cultural contexts, low public awareness, and a poor education system are the most important humanitarian supply chain risks. This research provides a useful guideline for managers, who can benefit from the results to prioritize their solutions.
Keywords: Best-Worst Method, humanitarian logistics, humanitarian supply chain, risk management
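The weight-derivation step of the BWM can be sketched as follows. The three-criteria comparison vectors and the brute-force search over the weight simplex (standing in for the usual min-max optimization solver) are illustrative assumptions, not the study's 22-risk data.

```python
# Best-Worst Method sketch for 3 criteria. Given best-to-others and
# others-to-worst comparison vectors, BWM finds weights w minimizing the
# worst consistency violation xi = max(|w_B - a_Bj*w_j|, |w_j - a_jW*w_W|).
# Here the min-max problem is solved by exhaustive search on a simplex grid.

best_to_others = [1, 4, 8]    # a_Bj: best criterion compared to criterion j
others_to_worst = [8, 2, 1]   # a_jW: criterion j compared to worst criterion

def bwm_weights(a_b, a_w, steps=200):
    best = None
    n = steps
    for i in range(1, n):
        for j in range(1, n - i):
            w = [i / n, j / n, (n - i - j) / n]    # candidate weights, sum = 1
            xi = max(
                max(abs(w[0] - ab * wj) for ab, wj in zip(a_b, w)),
                max(abs(wj - aw * w[2]) for aw, wj in zip(a_w, w)),
            )
            if best is None or xi < best[0]:
                best = (xi, w)
    return best

xi, w = bwm_weights(best_to_others, others_to_worst)
print(w, xi)   # largest weight on the "best" criterion, small violation xi
```

With the perfectly consistent vectors above (a_Bj * a_jW = 8 for every j), the exact weights are 8/11, 2/11, and 1/11, and the grid search lands next to them with a near-zero xi; a large residual xi would instead signal inconsistent expert judgments.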
Procedia PDF Downloads 310
18544 Vibration Analysis and Optimization Design of Ultrasonic Horn
Authors: Kuen Ming Shu, Ren Kai Ho
Abstract:
An ultrasonic horn amplifies amplitude and reduces resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and to focus ultrasonic energy on a small area. It is a crucial component in the design of an ultrasonic vibration system. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, the equal mechanical impedance method, the transfer matrix method, and the finite element method. The general optimization process changes geometric parameters to improve a single performance measure, so the relationship between parameters and objectives cannot be found. A good optimization design, however, must establish the relationship between input parameters and output parameters so that the designer can choose among parameters according to different performance objectives and obtain the results of the optimization design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized ultrasonic horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of the horn amplitudes and the natural frequency value. The results showed that the simulated and measured frequencies were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was used to perform response surface optimization, which shows the relationship between parameters and objectives. This method can therefore substitute for the traditional experience-based or trial-and-error design approaches, reducing material costs and design cycles.
Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration
Procedia PDF Downloads 117
18543 A Study on Weight-Reduction of Double Deck High-Speed Train Using Size Optimization Method
Authors: Jong-Yeon Kim, Kwang-Bok Shin, Tae-Hwan Ko
Abstract:
The purpose of this paper is to suggest a weight-reduction design method for the aluminum extrusion carbody structure of a double deck high-speed train using a size optimization method. The size optimization method was used to optimize the thicknesses of the skin and ribs of the aluminum extrusion for the carbody structure. The thicknesses of the 1st underframe, 2nd underframe, solebar, and roof frame were selected as design variables. The results of the size optimization analysis showed that the weight of the aluminum extrusion could be reduced by 0.61 tons (5.60%) compared to the weight of the original carbody structure.
Keywords: double deck high-speed train, size optimization, weight-reduction, aluminum extrusion
Procedia PDF Downloads 290
18542 COVID–19 Impact on Passenger and Cargo Traffic: A Case Study
Authors: Maja Čović, Josipa Bojčić, Bruna Bacalja, Gorana Jelić Mrčelić
Abstract:
The appearance of the COVID-19 disease and its fast spread brought a global pandemic and health crisis. In order to prevent further spreading of the virus, governments implemented mobility restrictions, which left a negative mark on the world's economy. Although there is numerous research on the impact of COVID-19 on marine traffic around the world, the objective of this paper is to consider the impact of COVID-19 on passenger and cargo traffic in the Port of Split, in the Republic of Croatia. The methods used in the theoretical and research parts of the paper are the descriptive method, comparative method, compilation, inductive method, deductive method, and statistical method. The paper relies on data obtained from the Port of Split Authority and analyses trends in passenger and cargo traffic, including the year 2020, when the pandemic broke out. Significant reductions in income and disruptions in transportation, traffic, and other maritime services are shown in the paper. The article also observes a significant decline in passenger and cruising traffic and examines the dynamics of cargo traffic inside the Port of Split.
Keywords: COVID-19, pandemic, passenger traffic, ports, trends, cargo traffic
Procedia PDF Downloads 216
18541 Development of the Analysis and Pretreatment of Brown HT in Foods
Authors: Hee-Jae Suh, Mi-Na Hong, Min-Ji Kim, Yeon-Seong Jeong, Ok-Hwan Lee, Jae-Wook Shin, Hyang-Sook Chun, Chan Lee
Abstract:
Brown HT is a bis-azo dye which is permitted in the EU as a food colorant. So far, many studies have focused on HPLC analysis with diode array detection (DAD) for this food colorant, using different columns and mobile phases. Even though these methods make it possible to detect Brown HT, low recovery, reproducibility, and linearity are still the major limitations for application in foods. The purpose of this study was to compare various methods for the analysis of Brown HT and to develop an improved analytical method, including pretreatment. Among the tested methods, the best resolution of Brown HT was observed with the following eluent: solvent A of the mobile phase was 0.575 g NH4H2PO4 and 0.7 g Na2HPO4 in 500 mL water mixed with 500 mL methanol, adjusted to pH 6.9 with phosphoric acid, and solvent B was methanol. The major peak for Brown HT appeared at the end of the separation, 13.4 min after injection. This method exhibited relatively high recovery and reproducibility compared with other methods. The LOD (0.284 ppm), LOQ (0.861 ppm), resolution (6.143), and selectivity (1.3) of this method were better than those of the most frequently used ammonium acetate solution method. Precision and accuracy were verified through inter-day and intra-day tests. Sample pretreatments were developed for different foods, and relatively high recovery, over 80%, was observed in all cases. This method exhibited high resolution and reproducibility for Brown HT compared with the official methods previously reported by the FSA and in EU regulation.
Keywords: analytic method, Brown HT, food colorants, pretreatment method
Procedia PDF Downloads 479
18540 Photoluminescence in Cerium Doped Fluorides Prepared by Slow Precipitation Method
Authors: Aarti Muley, S. J. Dhoblae
Abstract:
CaF₂ and BaF₂ doped with cerium were prepared by the slow precipitation method with different molar concentrations and different cerium concentrations. Both samples were also prepared by the direct method for comparison. The XRD of BaF₂:Ce shows that it crystallizes in a BCC structure; the peaks match JCPDS file no. 4-0452. The XRD pattern of CaF₂:Ce matches well with JCPDS file number 75-0363 and likewise crystallizes in a BCC phase. In CaF₂, double-humped photoluminescence spectra were observed at 320 nm and 340 nm when the sample was prepared by the direct precipitation method, and the ratio between these peaks is unity. However, when the sample was prepared by the slow precipitation method, the double-humped emission spectra of CaF₂:Ce were observed at 323 nm and 340 nm. The ratio between these peaks is 0.58, and the optimum concentration is obtained for 0.1 molar CaF₂ with a Ce concentration of 1.5%. When the cerium concentration is increased to 2%, the peak at 323 nm vanishes and the emission is observed at 342 nm with a shoulder at 360 nm; in this case, the intensity drops drastically. The excitation is observed at 305 nm with a small peak at 254 nm. One molar BaF₂ doped with 0.1% cerium synthesized by the direct precipitation method gives double-humped spectra at 308 nm and 320 nm; when prepared by the slow precipitation method with cerium concentrations of 0.05 m%, 0.1 m%, 0.15 m%, and 0.2 m%, a broad emission is observed around 325 nm with a shoulder at 350 nm. The excitation spectra are narrow and observed at 290 nm. As the percentage of cerium is increased further, a shift is again observed: the emission spectra appear at 360 nm with a small peak at 330 nm. This shifting of the emission spectra at low cerium concentrations can be directly related to particle size, as has also been reported for nanomaterials.
Keywords: calcium fluoride, barium fluoride, photoluminescence, slow precipitation method
Procedia PDF Downloads 109
18539 Vibration Analysis of Pendulum in a Viscous Fluid by Analytical Methods
Authors: Arash Jafari, Mehdi Taghaddosi, Azin Parvin
Abstract:
In this study, a vibrational differential equation governing a swinging single-degree-of-freedom pendulum in a viscous fluid is investigated. The damping process is characterized according to two different regimes: first, damping in a stationary viscous fluid; second, damping in a viscous fluid flowing with constant velocity. Our purpose is to enhance the ability to solve the mentioned nonlinear differential equation with a simple and innovative approach. Comparisons are made between the new method and a numerical method (RKF45). The results show that this method is very effective and simple and can be applied to other nonlinear problems.
Keywords: oscillating systems, angular frequency and damping ratio, pendulum at fluid, locus of maximum
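For reference, the numerical benchmark mentioned in the abstract can be sketched with a classical fixed-step RK4 integrator on a damped pendulum equation. The damping ratio, natural frequency, and initial conditions below are illustrative assumptions; the study itself compares against the adaptive RKF45 routine.

```python
# Damped pendulum: theta'' + 2*zeta*omega*theta' + omega^2 * sin(theta) = 0,
# integrated with the classical 4th-order Runge-Kutta method. The linear
# damping term makes the amplitude decay like exp(-zeta*omega*t).
import math

def rk4(f, y, t, dt):
    """One classical RK4 step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

omega, zeta = 2.0, 0.1                 # natural frequency, damping ratio

def f(t, y):                           # state y = [theta, theta_dot]
    return [y[1], -2 * zeta * omega * y[1] - omega ** 2 * math.sin(y[0])]

y, t, dt = [0.5, 0.0], 0.0, 0.001
for _ in range(20000):                 # integrate to t = 20 s
    y = rk4(f, y, t, dt)
    t += dt
print(y[0])                            # the swing has decayed toward zero
```

By t = 20 s the exponential envelope exp(-zeta*omega*t) has shrunk the 0.5 rad initial swing by roughly a factor of e⁴, which is the kind of decay curve an analytical approximation would be validated against.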
Procedia PDF Downloads 337
18538 Influence of Cryo-Grinding on Antioxidant Activity and Amount of Free Phenolic Acids, Rutin and Tyrosol in Whole Grain Buckwheat and Pumpkin Seed Cake
Authors: B. Voucko, M. Benkovic, N. Cukelj, S. Drakula, D. Novotni, S. Balbino, D. Curic
Abstract:
Oxidative stress is considered one of the causes leading to metabolic disorders in humans; therefore, the primary role of antioxidants in the human organism is to inhibit free radical production. Antioxidants originating from cereals, especially flavonoids and polyphenols, are mostly bound and indigestible. Micronization damages the cell wall, which in turn makes the bioactive material more accessible in vivo. In order to ensure complete fragmentation, micronization is often combined with high temperatures (e.g., 200°C for bran), which can lead to degradation of bioactive compounds. The innovative non-thermal technology of cryo-milling is an ultra-fine micronization method that uses liquid nitrogen (LN2) at a temperature of −195°C to freeze and cool the sample during milling. Freezing at such low temperatures makes the material brittle, which ensures the generation of fine particles while preserving the bioactive content of the material. The aim of this research was to determine whether the production of ultra-fine material by cryo-milling results in an augmentation of the available bioactive compounds of buckwheat and pumpkin seed cake. For that reason, buckwheat and pumpkin seed cake were ground in a ball mill (CryoMill, Retch, Germany) with and without the use of LN2 for 8 minutes, in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm) at an oscillation frequency of 30 Hz. The cryo-milled samples were cooled with LN2 for 2 minutes prior to milling, followed by a first cycle of milling (4 minutes), intermediary cooling (2 minutes), and a second cycle of milling (a further 4 minutes). A continuous milling process was applied to the samples ground without LN2 freezing. Particle size distribution was determined using the Scirocco 2000 dry dispersion unit (Malvern Instruments, UK).
Antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) test and the ferric reducing antioxidant power (FRAP) assay, while the total phenol content was determined by the Folin-Ciocalteu method using an ultraviolet-visible spectrophotometer (Specord 50 Plus, Germany). The content of free phenolic acids, of rutin in buckwheat, and of tyrosol in pumpkin seed cake was determined with an HPLC-PDA method (Agilent 1200 series, Germany). Cryo-milling resulted in 11 times smaller buckwheat particles and 3 times smaller pumpkin seed particles than milling without LN2, but also in lower uniformity of the particle size distribution. The absence of freezing during milling of pumpkin seed cake caused the formation of agglomerates due to its high fat content (21%). Cryo-milling augmented the antioxidant activity of buckwheat flour measured by the DPPH test (23.9%) and increased the available rutin content (14.5%). It also augmented the total phenol content (36.9%) and the available tyrosol content (12.5%) of pumpkin seed cake. Antioxidant activity measured by the FRAP test, as well as the content of phenolic acids, remained unchanged independent of the milling process. The results of this study show the potential of cryo-milling for complete raw material utilization in the food industry, as well as a tool for extraction of targeted bioactive components.
Keywords: bioactive, ball-mill, buckwheat, cryo-milling, pumpkin seed cake
Procedia PDF Downloads 132
18537 Investigation of the Effect of Teaching Thinking and Research Lesson by Cooperative and Traditional Methods on Creativity of Sixth Grade Students
Authors: Faroogh Khakzad, Marzieh Dehghani, Elahe Hejazi
Abstract:
The present study investigates the effect of teaching a Thinking and Research lesson by cooperative and traditional methods on the creativity of sixth-grade students in Piranshahr province. The statistical population includes all the sixth-grade students of Piranshahr province. The sample was selected by convenience sampling from among the male elementary schools of Piranshahr, and students were randomly assigned to two groups: a cooperative teaching method group and a traditional teaching method group. The design of the study is quasi-experimental with a control group. To assess students' creativity, Abedi's creativity questionnaire was used; based on Cronbach's alpha coefficient, the reliability of the flow factor was 0.74, innovation 0.61, flexibility 0.63, and expansion 0.68. To analyze the data, t-tests and univariate and multivariate analyses of covariance were used to evaluate the differences of the means of the pretest and posttest scores. The findings showed that the cooperative teaching method does not significantly increase creativity (p > 0.05). Moreover, the cooperative teaching method was found to have a significant effect on the flow factor (p < 0.05), but no significant effect was observed on the innovation and expansion factors (p > 0.05).
Keywords: cooperative teaching method, traditional teaching method, creativity, flow, innovation, flexibility, expansion, thinking and research lesson
Procedia PDF Downloads 316
18536 The Effect of Goal Setting on Psychological Status and Freestyle Swimming Performance in Young Competitive Swimmers
Authors: Sofiene Amara, Mohamed Ali Bahri, Sabri Gaied Chortane
Abstract:
The purpose of this study was to examine the effect of personal goal setting on psychological parameters (cognitive anxiety, somatic anxiety, and self-confidence) and on 50 m freestyle performance. Thirty young swimmers participated in this investigation and were divided into three groups: the first group (G1, n = 10, 14 ± 0.7 years old) was prepared for the competition without a fixed target (method 1); the second group (G2, n = 10, 14 ± 0.9 years old) was oriented towards a vague goal, 'Do your best' (method 2); while the third group (G3, n = 10, 14 ± 0.5 years old) was asked to pursue a difficult goal set according to a goal-setting interval (GST) (method 3). According to the statistical data of the present investigation, the cognitive and somatic anxiety scores in G1 and G3 were higher than in G2 (G1-G2, G3-G2: cognitive anxiety, P = 0.000; somatic anxiety, P = 0.000, respectively). On the other hand, the self-confidence score was lower in G1 compared with the other two groups (G1-G2, G3-G2: P = 0.02, P = 0.03, respectively). Our assessment also shows that the 50 m freestyle time was improved more by method 3 (pre- and post-test: P = 0.006, -2.5 s, 7.83%) than by method 2 (pre- and post-test: P = 0.03, -1 s, 3.24%), while performance remained unchanged in G1 (P > 0.05). To conclude, setting a difficult goal by GST is more effective for improving chronometric performance in the 50 m freestyle, but at the same time it increased cognitive and somatic anxiety values. Mental trainers and technical staff are therefore invited to develop models of mental preparation associated with this goal-setting method to help swimmers psychologically.
Keywords: cognitive anxiety, goal setting, performance of swimming freestyle, self-confidence, somatic anxiety
Procedia PDF Downloads 129
18535 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images
Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam
Abstract:
Geospatial feature extraction is a very important issue in remote sensing research. Image classification has traditionally been based on statistical techniques, but in recent years, data mining and machine learning techniques for automated image processing have been applied to remote sensing, focusing on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images and are compared with the result of maximum likelihood classification (MLC), a statistical technique, with an analysis of the pros and cons of each technique.
Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification
Procedia PDF Downloads 347
18534 Exact Solutions for Steady Response of Nonlinear Systems under Non-White Excitation
Authors: Yaping Zhao
Abstract:
In the present study, the exact solutions for the steady response of quasi-linear systems under non-white wide-band random excitation are considered by means of the stochastic averaging method. The nonlinearity of the systems contains power-law damping and the cross-product term of the power-law damping and displacement. The drift and diffusion coefficients of the Fokker-Planck-Kolmogorov (FPK) equation after averaging are obtained by a succinct approach. After solving the averaged FPK equation, the joint probability density function and the marginal probability density function in the steady state are obtained. In the solution process, the eigenvalue problem of the ordinary differential equation is handled by the integral equation method. Some new results are acquired, and a novel method for dealing with problems in nonlinear random vibration is proposed.
Keywords: random vibration, stochastic averaging method, FPK equation, transition probability density
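For context, after stochastic averaging the slowly varying amplitude (or energy) process is governed by a one-dimensional FPK equation whose stationary solution has a standard closed form. Writing the averaged drift as m(H) and the averaged diffusion as σ²(H), a generic sketch (the standard textbook form, not the paper's specific coefficients) is:

```latex
% Averaged Ito equation for the slowly varying process H:
%   dH = m(H)\,dt + \sigma(H)\,dW(t)
% Stationary FPK equation and its closed-form solution:
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial H}\!\left[m(H)\,p\right]
    + \frac{1}{2}\,\frac{\partial^{2}}{\partial H^{2}}\!\left[\sigma^{2}(H)\,p\right]
  = 0
\quad\Longrightarrow\quad
p(H) = \frac{C}{\sigma^{2}(H)}\,
       \exp\!\left(\int^{H}\frac{2\,m(u)}{\sigma^{2}(u)}\,du\right)
```

Here C is a normalization constant; the marginal densities the abstract refers to follow from this form once m and σ² are obtained by averaging the power-law damping terms over one quasi-period.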
Procedia PDF Downloads 50318533 The Effect of Conservative Tillage on Physical Properties of Soil and Yield of Rainfed Wheat
Authors: Abolfazl Hedayatipoor, Mohammad Younesi Alamooti
Abstract:
In order to study the effect of conservative tillage on a number of physical properties of soil and on the yield of rainfed wheat, an experiment in the form of a randomized complete block design (RCBD) with three replications was conducted in a field in Aliabad County, Iran. The study treatments were: T1) conventional method, T2) combined moldboard plow method, T3) chisel-packer method, and T4) direct planting method. In early October, the soil was prepared according to these treatments in a field that had been used for rainfed wheat farming in the previous year. The apparent specific gravity of the soil, the weighted mean diameter (WMD) of soil aggregates, soil mechanical resistance, and soil permeability were measured. Data were analyzed in MSTAT-C. Results showed that the tillage practice had no significant effect on grain yield at the 0.05 significance level. Soil permeability was 10.9, 16.3, 15.7 and 17.9 mm/h for T1, T2, T3 and T4, respectively.Keywords: rainfed agriculture, conservative tillage, energy consumption, wheat
Procedia PDF Downloads 20618532 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) tool and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method used was that described in USP 37 for quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2 in order to identify the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP:methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), in the flow rate (0.8, 1.0 and 1.2 mL.min-1) and in the oven temperature (30, 35, and 40 ºC). The USP method required a long analysis time (40-50 minutes). In addition, it uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable if the drug were not a racemic mixture, since co-elution of the isomers can make peak integration unreliable.
Therefore, the optimization aimed to reduce the analysis time while improving peak resolution and TF. From the analysis of the response-surface plot, it was possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL.min-1 and 80:20 USP-MP:methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method and uses a lower flow rate, further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring the separation of the isomers.Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
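A 3³ full factorial design enumerates every combination of the three factors; the design space is then the subset of conditions whose response (here, the tailing factor) falls inside the target range. A schematic Python sketch (the `tf_model` function is a made-up placeholder standing in for the model fitted from the study's DoE runs):

```python
from itertools import product

# Three factors, three levels each, as in the abstract
methanol_pct = [10, 20, 30]        # % methanol in the mobile phase
flow_rate    = [0.8, 1.0, 1.2]     # mL/min
temperature  = [30, 35, 40]        # degrees C

def tf_model(m, f, t):
    """Placeholder tailing-factor model; a real study fits this from the DoE data."""
    return 1.8 - 0.02 * m - 0.3 * (f - 0.8) + 0.004 * (t - 30)

# 3^3 = 27 candidate runs; keep only conditions whose predicted TF lies in the DS
runs = list(product(methanol_pct, flow_rate, temperature))
design_space = [(m, f, t) for m, f, t in runs
                if 0.98 <= tf_model(m, f, t) <= 1.2]

print(len(runs))  # 27
```

The point of the QbD workflow is that any condition inside `design_space` is acceptable, so routine operation can move within it without revalidation.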
Procedia PDF Downloads 63918531 Development of In Situ Permeability Test Using Constant Discharge Method for Sandy Soils
Authors: A. Rifa’i, Y. Takeshita, M. Komatsu
Abstract:
Puddles that form after rain in the first yard of Prambanan Temple often disturb visitor activity. A surface layer and a drainage system were built to prevent this problem, but puddles still appeared after rain. The soil permeability parameter therefore needs to be determined using a simpler procedure to find an exact solution. An instrument model was proposed as part of developing a field permeability testing device. The experiment used the proposed constant discharge method, in which water is poured into a tube at a constant flow rate. The procedure was carried out from the unsaturated to the saturated soil condition. The volumetric water content (θ) was monitored with a soil moisture measurement device. The result was the relationship between k and θ, drawn by a numerical approach using the van Genuchten model. The optimum value of the parameter θr obtained from the test corresponded to very dry soil. For a density of 19.8 kN/m3, the coefficient of permeability under unsaturated conditions ranged from 3 x 10-6 cm/sec (Sr = 68 %) to 9.98 x 10-4 cm/sec (Sr = 82 %). The equipment and testing procedure developed in this research proved effective, simple and easy to implement for determining the field permeability coefficient of sandy soil. Using the constant discharge method in the proposed permeability test, the value of the permeability coefficient under unsaturated conditions can be obtained without establishing the soil-water characteristic curve.Keywords: constant discharge method, in situ permeability test, sandy soil, unsaturated conditions
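The van Genuchten-Mualem relation expresses unsaturated hydraulic conductivity as a function of water content, which is how a k-θ curve like the one above is fitted. A minimal Python sketch (the parameter values are illustrative textbook numbers for sand, not the Prambanan test results):

```python
import math

def van_genuchten_k(theta, theta_r, theta_s, k_sat, n):
    """Unsaturated conductivity from the van Genuchten-Mualem model."""
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)   # effective saturation, 0..1
    se = min(max(se, 0.0), 1.0)
    return k_sat * math.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Illustrative sandy-soil parameters (not from the study); k_sat in cm/sec
theta_r, theta_s, k_sat, n = 0.045, 0.43, 9.98e-4, 2.68

print(van_genuchten_k(theta_s, theta_r, theta_s, k_sat, n))  # at saturation: k_sat
```

Conductivity drops steeply as θ approaches θr, which matches the two-order-of-magnitude spread between Sr = 68 % and Sr = 82 % reported above.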
Procedia PDF Downloads 38418530 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desirable ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various other disorders. The present study applies supercritical fluid extraction (SFE) to hemp seed at various conditions of temperature (40 - 80) °C, pressure (200 - 350) bar, flow rate (5 - 15) g/min, particle size (0.430 - 1.015) mm and amount of co-solvent (0 - 10) % of solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resampled dataset to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of datasets by resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is therefore 31 (eliminating one observation), and the procedure is repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, and the procedure was repeated 100 times. The estimators considered for these resampling techniques are the mean, standard deviation, coefficient of variation and standard error of the mean.
For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed to be 22.5 through both resamplings. Variance reflects the spread of the data around the mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approx. 1 %), low standard error of the mean (< 0.8) and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All estimator values of the coefficient of variation, standard deviation and standard error of the mean are found within the 95 % confidence interval.Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
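The two resampling schemes described above can be sketched compactly in Python: jackknife forms the N leave-one-out resamples, while bootstrap draws with replacement at the original sample size. The concentration values below are illustrative, not the study's 32 measured points:

```python
import random
import statistics

def jackknife_means(sample):
    """Leave-one-out resamples: N resample means from a sample of size N."""
    return [statistics.mean(sample[:i] + sample[i + 1:])
            for i in range(len(sample))]

def bootstrap_means(sample, n_resamples, seed=0):
    """Resample with replacement; each resample keeps the original size."""
    rng = random.Random(seed)
    return [statistics.mean(rng.choices(sample, k=len(sample)))
            for _ in range(n_resamples)]

# Illustrative concentration data (% linoleic acid), not the study's values
data = [58.1, 59.4, 57.8, 58.9, 58.3, 59.0, 57.5, 58.7]

jk = jackknife_means(data)       # 8 leave-one-out means
bs = bootstrap_means(data, 100)  # 100 bootstrap means
print(round(statistics.mean(jk), 2))  # 58.46, identical to the sample mean
```

The mean of the jackknife means always equals the original sample mean, which is why both schemes report the same central value, as observed in the abstract.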
Procedia PDF Downloads 14118529 Numerical Modelling of Dry Stone Masonry Structures Based on Finite-Discrete Element Method
Authors: Ž. Nikolić, H. Smoljanović, N. Živaljić
Abstract:
This paper presents a numerical model based on the finite-discrete element method for analysis of the structural response of dry stone masonry structures under static and dynamic loads. More precisely, each discrete stone block is discretized by finite elements. Material non-linearity, including fracture and fragmentation of the discrete elements as well as cyclic behavior during dynamic load, is considered through contact elements implemented within the finite element mesh. The model was applied to several examples of these structures. The performed analysis shows high accuracy of the numerical results in comparison with the experimental ones and demonstrates the potential of the finite-discrete element method for modelling the response of dry stone masonry structures.Keywords: dry stone masonry structures, dynamic load, finite-discrete element method, static load
Procedia PDF Downloads 41418528 Combining the Fictitious Stress Method and Displacement Discontinuity Method in Solving Crack Problems in Anisotropic Material
Authors: Bahatti̇n Ki̇mençe, Uğur Ki̇mençe
Abstract:
In this study, the influence functions of displacement discontinuity in an anisotropic elastic medium are obtained in order to produce the boundary element equations. A displacement discontinuity method (DDM) formulation is presented with the aim of modeling two-dimensional elastic fracture problems. This formulation is found by analytical integration of the fundamental solution along a straight-line crack. For this purpose, Kelvin's fundamental solutions for anisotropic media on an infinite plane are used to form dipoles from singular loads, and various combinations of these dipoles are used to obtain the influence functions of displacement discontinuity. This study introduces a technique for coupling the fictitious stress method (FSM) and the DDM; the technique is applied to several examples to demonstrate the effectiveness of the proposed coupling method. The displacement discontinuity equations are obtained by using dipole solutions calculated from known singular force solutions in an anisotropic medium. The displacement discontinuity method derived from the solutions of these equations is combined with the fictitious stress method and evaluated on various examples. One or more crack problems with various geometries, in rectangular plates in finite and infinite regions under tensile stress, were examined with the coupled FSM and DDM in the anisotropic medium, and the effectiveness of the coupled method was demonstrated. Since crack problems can be modeled more easily with the DDM, its use has increased in recent years. In obtaining the displacement discontinuity equations, Papkovich functions were used, as in Crouch's formulation, and harmonic functions were chosen to satisfy the various boundary conditions. A comparison is made between two indirect boundary element formulations, the DDM and an extension of the FSM, for solving problems involving cracks.
Several numerical examples are presented, and the outcomes are compared with existing analytical or reference results.Keywords: displacement discontinuity method, fictitious stress method, crack problems, anisotropic material
Procedia PDF Downloads 7518527 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS
Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana
Abstract:
Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents the leading cause of mortality and morbidity among children and young adults. Various rodent models of TBI have been developed in the laboratory to mimic the injury scenario. Blast overpressure injury, caused by accidents or explosive devices, is common among civilians and military personnel. In addition, the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube, and CCI injury was induced with an impact depth of 1.5 mm, creating diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups: (1) control, (2) blast treated, and (3) CCI treated, and were exposed to the respective injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS using an ESI probe with in-house optimized parameters and method. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed from their fragment patterns by LipidBlast. In total, 269 features were annotated in the positive and 182 features in the negative mode of ionization. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls.
Among the various lipids in mild blast and CCI, five lipids (glycerophospholipids {PC 30:2, PE O-33:3, PG 28:3;O3 and PS 36:1} and the fatty acyl {FA 21:3;O2}) were significantly altered in both injury groups at day 1 and day 7, and also had a VIP score > 1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports. This could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the role of a specific class of lipid metabolism, regulation and transport could be beneficial to TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers which can be used for injury severity diagnosis and identification irrespective of injury type (diffuse or focal).Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI
Procedia PDF Downloads 11318526 A Novel Combination Method for Computing the Importance Map of Image
Authors: Ahmad Absetan, Mahdi Nooshyar
Abstract:
The importance map is an image-based measure and is a core part of the resizing algorithm. Importance measures include image gradients, saliency and entropy, as well as high-level cues such as face detectors, motion detectors and more. In this work we propose a new method to calculate the importance map: it is generated automatically using a novel combination of image edge density and Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to preserve important areas while maintaining image quality.Keywords: content-aware image resizing, visual saliency, edge density, image warping
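In spirit, such a combination is a weighted sum of a per-pixel edge map and a saliency map. A schematic pure-Python sketch (simple finite-difference gradients stand in for the paper's edge-density measure, and an externally supplied saliency map stands in for Harel's graph-based visual saliency; the weighting is an assumption):

```python
def edge_density(img):
    """Per-pixel gradient magnitude (forward finite differences) as an edge map."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]
            gy = img[min(y + 1, h - 1)][x] - img[y][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def combine(edge_map, saliency_map, alpha=0.5):
    """Importance map as a weighted sum; alpha balances edges vs. saliency."""
    return [[alpha * e + (1.0 - alpha) * s
             for e, s in zip(erow, srow)]
            for erow, srow in zip(edge_map, saliency_map)]
```

A resizing algorithm such as seam carving or warping would then remove or compress pixels where the combined importance is lowest.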
Procedia PDF Downloads 58218525 Speedup Breadth-First Search by Graph Ordering
Abstract:
Breadth-First Search (BFS) is a core graph algorithm that is widely used for graph analysis. As it is frequently used in many graph applications, improving BFS performance is essential. In this paper, we present a graph ordering method that reorders the graph nodes to achieve better data locality, thus improving BFS performance. Our method is based on the observation that sibling relationships dominate the cache access pattern during BFS traversal. Therefore, we propose a frequency-based model to construct the graph order. First, we optimize the graph order according to the nodes' visit frequency: nodes with high visit frequency are processed with priority. Second, we try to maximize the overlap of child nodes layer by layer. As this problem is proved to be NP-hard, we propose a heuristic method that greatly reduces the preprocessing overhead. We conduct extensive experiments on 16 real-world datasets. The results show that our method achieves performance comparable with the state-of-the-art methods while its graph ordering overhead is only about 1/15 of theirs.Keywords: breadth-first search, BFS, graph ordering, graph algorithm
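The locality idea can be illustrated by relabeling nodes so that nodes visited close together in a traversal receive nearby ids. A minimal Python sketch (a plain BFS order stands in for the paper's frequency-based order, which this listing does not specify in detail):

```python
from collections import deque

def bfs_order(adj, source):
    """Visit order of a BFS from source over an adjacency list."""
    order, seen, queue = [], {source}, deque([source])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def relabel(adj, order):
    """Renumber nodes so traversal-adjacent nodes get nearby ids (better locality)."""
    new_id = {old: new for new, old in enumerate(order)}
    new_adj = [[] for _ in adj]
    for old, neighbors in enumerate(adj):
        new_adj[new_id[old]] = sorted(new_id[v] for v in neighbors)
    return new_adj

# Tiny illustrative graph (nodes 0..4); real workloads have millions of nodes
adj = [[2, 4], [3], [0, 3], [1, 2], [0]]
order = bfs_order(adj, 0)
print(order)  # [0, 2, 4, 3, 1]
```

After `relabel`, siblings discovered in the same BFS layer sit at consecutive ids, so their adjacency data tends to share cache lines during later traversals.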
Procedia PDF Downloads 13818524 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation
Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang
Abstract:
Character detection is an important issue in character recognition of ancient Yi books: the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and then a modified locally adaptive threshold binarization algorithm is used to obtain binary images that separate the foreground from the background. Second, the non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our regressive segmentation method. The experimental results show that the method can effectively separate text areas from non-text areas in ancient Yi books, achieves high accuracy and recall in the character detection experiments, and effectively solves the problem of character detection and segmentation in character recognition of ancient books.Keywords: CCS concepts, computing methodologies, interest point, salient region detections, image segmentation
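The connected-component stage can be sketched as a flood-fill labeling of the binary image followed by size-based filtering. A schematic Python sketch on a toy binary grid (size thresholds as the non-text criterion are an assumption; the paper's actual removal criteria are not specified in this listing):

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground components in a binary grid (1 = foreground)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    components, next_label = {}, 1
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] == 1 and labels[sy][sx] == 0:
                queue, pixels = deque([(sy, sx)]), []
                labels[sy][sx] = next_label
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                components[next_label] = pixels
                next_label += 1
    return components

def drop_non_text(components, min_size, max_size):
    """Keep only components whose pixel count is plausible for a character."""
    return {k: p for k, p in components.items() if min_size <= len(p) <= max_size}

# Toy binarized page fragment: two small foreground blobs
binary = [[1, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 1]]
comps = connected_components(binary)
print(len(comps))  # 2
```

On real scans the surviving components would then be passed to the regressive segmentation step to split touching characters.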
Procedia PDF Downloads 132