Search results for: spent normal N-formyl-morpholine solvent.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1074


924 Clustering Mixed Data Using Non-normal Regression Tree for Process Monitoring

Authors: Youngji Yoo, Cheong-Sool Park, Jun Seok Kim, Young-Hak Lee, Sung-Shick Kim, Jun-Geol Baek

Abstract:

In the semiconductor manufacturing process, large amounts of data are collected from the sensors of multiple facilities. The data collected from these sensors have several different characteristics due to variables such as product type, preceding processes and recipes. In general, Statistical Quality Control (SQC) methods assume normality of the data in order to detect out-of-control states of processes. When data with different characteristics are used directly as SQC inputs, the variation of the data increases, wide control limits are required, and the ability to detect out-of-control states decreases. Therefore, it is necessary to separate similar data groups from the mixed data for more accurate process control. In this paper, we propose a regression tree whose split algorithm is based on the Pearson distribution system, so that non-normal distributions can be handled in a parametric manner. The regression tree finds similar properties of the data across different variables. Experiments using real semiconductor manufacturing process data show improved fault-detection performance.

Keywords: Semiconductor, non-normal mixed process data, clustering, Statistical Quality Control (SQC), regression tree, Pearson distribution system.
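
As a rough illustration of the clustering-before-monitoring idea described in this abstract, the Python sketch below partitions mixed sensor data by context variables with a regression tree and then derives per-leaf control limits. It uses scikit-learn's default variance-reduction split rather than the authors' Pearson-distribution-based split criterion, and all data, column names and limits are made up.

```python
# Minimal sketch (not the authors' Pearson-system split): cluster mixed sensor
# readings by context variables with a regression tree, then derive per-leaf
# control limits instead of one wide global limit. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "product": rng.integers(0, 3, n),   # context variables (categorical codes)
    "recipe": rng.integers(0, 4, n),
})
# sensor value whose level depends on the context -> mixed, non-normal overall
df["sensor"] = 5.0 * df["product"] + 2.0 * df["recipe"] + rng.normal(0, 1, n)

# The tree groups observations with similar sensor behaviour.
tree = DecisionTreeRegressor(max_leaf_nodes=8, min_samples_leaf=100)
tree.fit(df[["product", "recipe"]], df["sensor"])
df["leaf"] = tree.apply(df[["product", "recipe"]])

# Per-leaf 3-sigma control limits are much tighter than a single global limit.
limits = df.groupby("leaf")["sensor"].agg(["mean", "std"])
limits["LCL"] = limits["mean"] - 3 * limits["std"]
limits["UCL"] = limits["mean"] + 3 * limits["std"]
print(limits)
print("global std:", df["sensor"].std(), "vs mean per-leaf std:", limits["std"].mean())
```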

923 Analysis of Lower Extremity Muscle Flexibility among Indian Classical Bharathnatyam Dancers

Authors: V. Anbarasi, David V Rajan, K. Adalarasu

Abstract:

Musculoskeletal problems are common in the high-performance dance population. This study attempts to identify the lower extremity muscle flexibility parameters prevailing among Bharatanatyam dancers and to analyze whether any significant difference in flexibility parameters exists between normal and injured dancers. Four hundred and one female dancers and 17 male dancers participated in this study. Flexibility parameters (hamstring tightness, hip internal and external rotation, and tendoachilles in supine and sitting postures) were measured using a goniometer. From the results of our study, it is evident that injured female Bharatanatyam dancers had significantly (p < 0.05) higher hamstring tightness in the left lower extremity compared to normal female dancers. The range of motion of the left tendoachilles was significantly (p < 0.05) higher for the normal female group than for the injured dancers in the supine lying posture. The majority of the injured dancers had high hamstring tightness, which could be a possible reason for pain and MSDs.

Keywords: External rotation (ER), Internal rotation (IR), Musculoskeletal disorder (MSD), Range of motion (ROM)

922 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N = 50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and the total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely the mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that, in almost all cases, the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.
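
To make the feature-extraction step above concrete, the sketch below performs a level-4 Haar wavelet decomposition of a (synthetic) region of interest and computes the six statistics named in the abstract on the approximation and horizontal detail coefficients. It assumes the PyWavelets package is installed and SciPy >= 1.9 for the mode/MAD helpers; the image is a random stand-in, not radiographic data.

```python
# Illustrative sketch only: level-4 Haar decomposition and the six statistics
# from the abstract, on approximation and horizontal coefficients.
import numpy as np
import pywt
from scipy import stats

roi = np.random.default_rng(1).random((256, 256))  # stand-in for a PC-region ROI

coeffs = pywt.wavedec2(roi, "haar", level=4)
approx = coeffs[0]          # level-4 approximation coefficients
horizontal = coeffs[1][0]   # level-4 horizontal detail coefficients

def six_stats(c):
    c = c.ravel()
    return {
        "mean": np.mean(c),
        "median": np.median(c),
        "mode": stats.mode(np.round(c, 2), keepdims=False).mode,  # SciPy >= 1.9
        "std": np.std(c),
        "mad_mean": np.mean(np.abs(c - np.mean(c))),        # mean absolute deviation
        "mad_median": stats.median_abs_deviation(c),         # median absolute deviation
    }

print("approximation:", six_stats(approx))
print("horizontal:   ", six_stats(horizontal))
```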

921 Preliminary Toxicological Evaluations of Polypeptide-K Isolated from Momordica charantia in Laboratory Rats

Authors: M. Nazrul-Hakim, A. Yaacob, Y. Adam, A. Zuraini

Abstract:

This study examined the toxicological effects and safety of polypeptide-K isolated from the seeds of Momordica charantia in laboratory rats. Thirty male Sprague Dawley rats (12 weeks old, body weight 180-200 g) were randomly divided into 3 groups (1000 mg/kg, 500 mg/kg and 0 mg/kg). Rats were acclimatized to laboratory conditions for 7 days; on day 8, rats were dosed orally with polypeptide-K (in 2% DMSO/normal saline), while the controls received the dosing vehicle only. Rats were then observed for 72 hours before being sacrificed. Rats were anaesthetized with pentobarbital (50 mg/kg ip), 2-3 mL of blood was taken by cardiac puncture, and the rats were sacrificed by anaesthetic overdose. Immediately, the organs (heart, lungs, liver, kidneys) were weighed and taken for histology. Organ sections were then evaluated by a histopathologist. Serum samples were assayed for liver function (ALT and γ-GT) and kidney function (BUN and creatinine). All rats showed normal behavior after dosing, and no statistically significant changes were observed in any blood parameters or organ weights. Histological examinations revealed normal organ structures. In conclusion, dosing of rats up to 1000 mg/kg did not have any effect on rat behavior, liver or kidney function, or the histology of the selected organs.

Keywords: Polypeptide k, safety, histology, toxicology, Momordica charantia

920 Effects of Process Parameters on the Yield of Oil from Coconut Fruit

Authors: Ndidi F. Amulu, Godian O. Mbah, Maxwel I. Onyiah, Callistus N. Ude

Abstract:

The properties of coconut (Cocos nucifera) and its oil were evaluated in this work using standard analytical techniques. The analyses carried out included the proximate composition of the fruit, extraction of oil from the fruit using different process parameters, and physicochemical analysis of the extracted oil. The results showed the moisture, crude lipid, crude protein, ash and carbohydrate contents of the coconut as 7.59%, 55.15%, 5.65%, 7.35% and 19.51%, respectively. The oil from the coconut fruit was an odourless, yellowish liquid at room temperature (30 °C). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed significant differences (p < 0.05) in the yield of oil from coconut flour. The oil yield ranged between 36.25% and 49.83%. Lipid indices of the coconut oil indicated an acid value (AV) of 10.05 mg NaOH/g of oil, free fatty acid (FFA) of 5.03%, a saponification value (SV) of 183.26 mg KOH/g of oil, an iodine value (IV) of 81.00 mg I2/g of oil, a peroxide value (PV) of 5.00 ml/g of oil and a viscosity (V) of 0.002. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA). The same software was also used to generate various plots, such as single-effect, interaction-effect and contour plots. The response, i.e. the yield of oil from the coconut flour, was used to develop a mathematical model that correlates the yield to the process variables studied. The conditions that gave the highest yield of coconut oil were a leaching time of 2 hours, a leaching temperature of 50 °C and a solute/solvent ratio of 0.05 g/ml.

Keywords: Coconut, oil-extraction, optimization, physicochemical, proximate.
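
The abstract above describes fitting a regression model that correlates oil yield with the leaching variables. The sketch below shows one generic way such a quadratic response-surface model could be fitted in Python; the abstract's own analysis used Minitab 16, and all data points, coefficients and noise below are invented for illustration only.

```python
# Hedged sketch with made-up data: quadratic response-surface fit of oil yield
# vs leaching time, leaching temperature and solute:solvent ratio.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
time_h = rng.uniform(0.5, 2.5, 60)      # leaching time, hours
temp_c = rng.uniform(30, 70, 60)        # leaching temperature, deg C
ratio = rng.uniform(0.02, 0.08, 60)     # solute:solvent ratio, g/mL
X = np.column_stack([time_h, temp_c, ratio])

# synthetic yield (%) with a mild optimum, plus noise (not the study's data)
y = (30 + 8 * time_h + 0.3 * temp_c - 2 * (time_h - 2) ** 2
     - 0.005 * (temp_c - 50) ** 2 + 100 * ratio + rng.normal(0, 1.5, 60))

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

# predict at the optimal conditions reported in the abstract
print("predicted yield at 2 h, 50 C, 0.05 g/mL:",
      model.predict([[2.0, 50.0, 0.05]])[0])
```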

919 Implementation of Lower-Limb Rehabilitation System Using Attraction Motors with a Treadmill

Authors: Young-Lim Choi, Nak-Yun Choi, Jae-Yong Seo, Sang-Il Park, Jong-Wook Kim

Abstract:

This paper proposes a prototype of a lower-limb rehabilitation system for recovering and strengthening patients' injured lower limbs. The system is composed of traction motors for each leg position, a treadmill as a walking base, tension sensors, microcontrollers controlling the motor functions, and a main system with a graphical user interface. For the derivation of reference (normal) velocity profiles of the body segment points, a kinematic method is applied, based on a humanoid robot model, using reference joint angle data from normal walking.

Keywords: Rehabilitation, lower limb, treadmill, humanoid robot.

918 Transcriptomics Analysis Comparing Non-Small Cell Lung Cancer versus Normal Lung, and Early-Stage versus Late-Stage Non-Small Cell Lung Cancer

Authors: Achitphol Chookaew, Paramee Thongsukhsai, Patamarerk Engsontia, Narongwit Nakwan, Pritsana Raugrut

Abstract:

Lung cancer is one of the most common malignancies and a primary cause of cancer death worldwide. Non-small cell lung cancer (NSCLC) is the main subtype, in which the majority of patients present with advanced-stage disease. Herein, we analyzed differentially expressed genes to find potential biomarkers for lung cancer diagnosis as well as prognostic markers. We used transcriptome data from our 2 NSCLC patients and public data (GSE81089) comprising 8 NSCLC and 10 normal lung tissues. Differentially expressed genes (DEGs) between NSCLC and normal tissue, and between early-stage and late-stage NSCLC, were analyzed with DESeq2. Pairwise comparison was used to find the DEGs with false discovery rate (FDR) adjusted p-value ≤ 0.05 and |log2 fold change| ≥ 4 for NSCLC versus normal, and FDR adjusted p-value ≤ 0.05 with |log2 fold change| ≥ 2 for early- versus late-stage NSCLC. Bioinformatic tools were used for functional and pathway analysis. Moreover, the expression and survival of the top ten genes in each comparison group were verified via GEPIA. We found 150 up-regulated and 45 down-regulated genes in NSCLC compared to normal tissues. Many immunoglobulin-related genes, e.g., IGHV4-4, IGHV5-10-1, IGHV4-31, IGHV4-61, and IGHV1-69D, were significantly up-regulated. Twenty-two genes were up-regulated, and five genes were down-regulated, in late-stage compared to early-stage NSCLC. The top five DEGs were KRT6B, SPRR1A, KRT13, KRT6A and KRT5. Keratin 6B (KRT6B) was the most significantly increased gene in late-stage NSCLC. From the GEPIA analysis, we concluded that IGHV4-31 and IGKV1-9 might be used as diagnostic biomarkers, while KRT6B and KRT6A might be used as prognostic biomarkers. However, further clinical validation is needed.

Keywords: Bioinformatics, differentially expressed genes, non-small cell lung cancer, transcriptomics.
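
To clarify the thresholding step quoted in the abstract (FDR-adjusted p ≤ 0.05 with |log2FC| ≥ 4 for tumour vs. normal, and |log2FC| ≥ 2 for early vs. late stage), the sketch below applies those cut-offs to a DESeq2-style results table. The differential expression itself is done in DESeq2 (R); the tiny table, its values and the helper function here are made-up stand-ins, not the study's results.

```python
# Minimal sketch, not the authors' pipeline: filter a DESeq2-style results table
# by adjusted p-value and absolute log2 fold change.
import pandas as pd

# made-up stand-in with the usual DESeq2 column names
res = pd.DataFrame({
    "gene":           ["IGHV4-31", "KRT6B", "SPRR1A", "GENE_X"],
    "log2FoldChange": [6.1, 4.8, 2.3, -0.4],
    "padj":           [0.001, 0.010, 0.030, 0.700],
})

def significant(table, lfc_cut, padj_cut=0.05):
    mask = (table["padj"] <= padj_cut) & (table["log2FoldChange"].abs() >= lfc_cut)
    return table.loc[mask].sort_values("log2FoldChange", ascending=False)

deg_tumour_vs_normal = significant(res, lfc_cut=4)   # NSCLC vs normal thresholds
deg_late_vs_early = significant(res, lfc_cut=2)      # late- vs early-stage thresholds
print(deg_tumour_vs_normal)
print(deg_late_vs_early)
```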

917 Effect of Self-Compacting Concrete and Aggregate Size on Anchorage Performance at Highly Congested Reinforcement Regions

Authors: Umair Baig, Kohei Nagai

Abstract:

At highly congested reinforcement regions, which are common at beam-column joint areas, the clear spacing between parallel bars becomes less than the maximum normal aggregate size (20 mm), a condition not addressed in design codes and specifications. Limited clear spacing between parallel bars (hereinafter, thin cover) is one of the factors that affect anchorage performance. In this study, an experimental investigation was carried out to understand the anchorage performance of reinforcement in Self-Compacting Concrete (SCC) and Normal Concrete (NC) at highly congested regions under uniaxial tensile loading. The column bar was pulled out, whereas the beam bars were offset from the column reinforcement, creating a thin cover as per site conditions. Two different sizes of coarse aggregate were used for NC (20 mm and 10 mm). Strain gauges were also installed along the bar in some specimens to understand the internal stress mechanism. Test results reveal that anchorage performance is affected at highly congested reinforcement regions in NC with a maximum aggregate size of 20 mm, whereas SCC and the small aggregate (10 mm) give better structural performance.

Keywords: Anchorage capacity, bond, Normal Concrete, self-compacting concrete.

916 Pure Scalar Equilibria for Normal-Form Games

Authors: H. W. Corley

Abstract:

A scalar equilibrium (SE) is an alternative type of equilibrium in pure strategies for an n-person normal-form game G. It is defined using optimization techniques to obtain a pure strategy for each player of G by maximizing an appropriate utility function over the acceptable joint actions. The players’ actions are determined by the choice of the utility function. Such a utility function could be agreed upon by the players or chosen by an arbitrator. An SE is an equilibrium since no players of G can increase the value of this utility function by changing their strategies. SEs are formally defined, and examples are given. In a greedy SE, the goal is to assign actions to the players giving them the largest individual payoffs jointly possible. In a weighted SE, each player is assigned weights modeling the degree to which he helps every player, including himself, achieve as large a payoff as jointly possible. In a compromise SE, each player wants a fair payoff for a reasonable interpretation of fairness. In a parity SE, the players want their payoffs to be as nearly equal as jointly possible. Finally, a satisficing SE achieves a personal target payoff value for each player. The vector payoffs associated with each of these SEs are shown to be Pareto optimal among all such acceptable vectors, as well as computationally tractable.

Keywords: Compromise equilibrium, greedy equilibrium, normal-form game, parity equilibrium, pure strategies, satisficing equilibrium, scalar equilibria, utility function, weighted equilibrium.
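
As a toy illustration of the scalar-equilibrium idea in the abstract (choose the joint action maximizing an agreed utility over the acceptable joint actions), the sketch below enumerates the joint actions of a small two-player game for two example utilities, a "greedy" sum of payoffs and a "parity" utility penalizing payoff inequality. The payoff matrices and utilities are invented for illustration and do not reproduce Corley's formal construction or Pareto-optimality proofs.

```python
# Toy sketch: pick the joint pure action maximizing a scalar utility function.
import itertools
import numpy as np

# payoff[player][a1, a2]; a 2x2 game invented for illustration
payoff = [np.array([[3, 0], [5, 1]]),    # player 1
          np.array([[3, 5], [0, 1]])]    # player 2

def scalar_equilibrium(utility):
    joint_actions = itertools.product(range(2), range(2))
    return max(joint_actions,
               key=lambda a: utility([payoff[0][a], payoff[1][a]]))

greedy = scalar_equilibrium(lambda p: sum(p))              # largest joint payoff
parity = scalar_equilibrium(lambda p: -abs(p[0] - p[1]))   # most nearly equal payoffs

print("greedy SE:", greedy, "payoffs:", payoff[0][greedy], payoff[1][greedy])
print("parity SE:", parity, "payoffs:", payoff[0][parity], payoff[1][parity])
```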

915 Decreasing Environmental Pollution in Superphosphate Production Using Apatite and Phosphorite Mixture

Authors: R. Guliyev

Abstract:

The growing need for food is receiving more attention due to the gradual increase in the world population, and in this scenario fertilizers play a very important role in agriculture. In this study, the production of normal superphosphate was investigated with a continuous chamber method by adding potassium chloride to a mixture of Hibin apatite and Kingisepp phosphorite. In the experiments, the following parameters were selected: the concentration of sulfuric acid (54–66% (w/w)), the stoichiometric norm of sulfuric acid (100, 107, 110, 114% (w/w)), the ratio of apatite/phosphorite in the phosphate mixture (95/5, 90/10, 85/15, 80/20, 75/25, 70/30, 65/35, 60/40, 55/45, 50/50 (w/w)), the ratio of potassium chloride to the phosphate mixture (1/50, 2/50, 3/50, 4/50, 5/50 (w/w)), and the reaction time (2–8 min). It was observed that adding potassium chloride to a low-grade phosphorite, and using the latter to substitute a fraction of the high-grade apatite in normal superphosphate production, not only resulted in a high-quality product but also eliminated the waiting period for the maturation of superphosphate in storage. The objective of this study was to produce a normal superphosphate fertilizer by a continuous chamber method in order to accelerate the production process and to reduce the environmental pollution caused by fluoride gases by eliminating the maturation time in storage.

Keywords: Continuous chamber method, environmental pollution, fluoride gases.

914 Molecular Dynamics and Circular Dichroism Studies on Aurein 1.2 and Retro Analog

Authors: Safyeh Soufian, Hoosein Naderi-Manesh, Abdoali Alizadeh, Mohammad Nabi Sarbolouki

Abstract:

Aurein 1.2 is a 13-residue amphipathic peptide with antibacterial and anticancer activity. Aurein 1.2 and its retro analog were synthesized to study the activity of the peptides in relation to their structure. The antibacterial tests showed that the retro analog is inactive. Secondary structural analysis by CD spectra indicated that both peptides adopt an alpha-helical conformation in TFE/water. MD simulations were performed on aurein 1.2 and the retro analog in water and in TFE in order to analyse the factors involved in the activity difference between the retro analog and the native peptide. The simulation results are discussed and validated in the light of the experimental data from the CD experiments. Both peptides showed a relatively similar pattern in their hydrophobicity, hydrophilicity, solvent-accessible surfaces, and solvent-accessible hydrophobic surfaces. However, they differed in the direction of their dipole moments. Our results further indicate that reversing the amino acid sequence affects flexibility, and the data also show that factors causing structural rigidity may decrease activity. Consequently, our findings suggest that, in the case of a sequence-reversed peptide strategy, one has to pay attention to the role of the amino acid sequence order in conferring flexibility and to the role of the dipole moment direction in peptide activity.

Keywords: Antimicrobial peptides, retro, molecular dynamic, circular dichroism.

913 Feasibility Study for a Castor oil Extraction Plant in South Africa

Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee

Abstract:

A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both the short- and long-term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway; the potential raw material supply proves to be reliable, since the province is known for its commercial farming. The managerial aspect was evaluated based on the fact that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated, and the project was considered financially viable with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for an analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible, with a high chance of success, while contributing to socio-economic development. It was recommended that laboratory tests be conducted to establish the process kinetics to be used in the initial design of the plant.

Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction
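
For readers unfamiliar with the three financial figures quoted in the abstract (NPV, IRR and payback), the sketch below shows how they are computed from a cash-flow series. The abstract does not report the year-by-year cash flows, so the series here is a placeholder built around the stated R1 971 000 first-year net income and a hypothetical capital cost; it is not the study's data.

```python
# Back-of-the-envelope NPV / IRR / payback sketch on hypothetical cash flows.
def npv(rate, cash_flows):
    """cash_flows[0] is the time-0 outlay (negative); later entries are yearly inflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    """Bisection on the discount rate; assumes NPV changes sign once in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cash_flows) > 0 else (lo, mid)
    return (lo + hi) / 2

def payback_year(cash_flows):
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

# hypothetical: R6m capital cost, first-year net income from the abstract, 2% growth
flows = [-6_000_000] + [1_971_000 * 1.02 ** t for t in range(10)]
print(f"NPV at 10.5%: R{npv(0.105, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
print(f"Payback: year {payback_year(flows)}")
```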

912 MHD Stagnation Point Flow towards a Shrinking Sheet with Suction in an Upper-Convected Maxwell (UCM) Fluid

Authors: K. Jafar, R. Nazar, A. Ishak, I. Pop

Abstract:

The present analysis considers the steady stagnation point flow and heat transfer towards a permeable shrinking sheet in an upper-convected Maxwell (UCM) electrically conducting fluid, with a constant magnetic field applied in the direction transverse to the flow and a local heat generation within the boundary layer, the heat generation rate being proportional to (T − T∞)^p. Using a similarity transformation, the governing system of partial differential equations is first transformed into a system of ordinary differential equations, which is then solved numerically using a finite-difference scheme known as the Keller-box method. Numerical results are obtained for the flow and thermal fields for various values of the stretching/shrinking parameter λ, the magnetic parameter M, the elastic parameter K, the Prandtl number Pr, the suction parameter s, the heat generation parameter Q, and the exponent p. The results indicate the existence of dual solutions for the shrinking sheet up to a critical value λc, whose value depends on the values of M, K, and s. In the presence of internal heat absorption (Q < 0), the surface heat transfer rate decreases with increasing p but increases with the parameters Q and s, whether the sheet is stretched or shrunk.

Keywords: Magnetohydrodynamic (MHD), boundary layer flow, UCM fluid, stagnation point, shrinking sheet.

911 Optimization of Some Process Parameters to Produce Raisin Concentrate in Khorasan Region of Iran

Authors: Peiman Ariaii, Hamid Tavakolipour, Mohsen Pirdashti, Rabehe Izadi Amoli

Abstract:

Raisin concentrate (RC) is one of the most important products obtained in the raisin processing industries. RC is now used to make syrups, drinks and confectionery products and is introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but unfortunately, despite good raw material, no serious effort to extract RC had been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters for the design of the extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60 °C, 70 °C and 80 °C), and three levels of concentration temperature (50 °C, 60 °C and 70 °C) were the treatments. Finally, the physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar and acidity, were determined, and microbial counts (mould and yeast) were performed. The analysis was performed on the basis of a factorial experiment in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for concentrate production were obtained with round raisins at a solvent ratio of 2:1, an extraction temperature of 60 °C and a concentration temperature of 50 °C. Round raisin is cheaper than the long one, and it is more economical for concentrate production. Furthermore, round raisin has more aroma and a lower color degree with increasing concentration and extraction temperatures. Finally, according to the mentioned factors, the concentrate of round raisin is recommended.

Keywords: Raisin concentrate, optimization, process parameters, round raisin, Iran.

910 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Colocynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

A comparative analysis of the properties of melon seed, coconut fruit and their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30–52.90%) than coconut (36.25–49.83%). The physical characterization of the extracted oils was also carried out. The values obtained for refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). The chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg KOH/g (melon oil) and 183.26 mg KOH/g (coconut oil). The iodine value of the melon oil is 75.00 mg I2/g, and that of the coconut oil is 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and the same software was used to optimize the leaching process. Both samples gave high oil yields at the same optimal conditions. The optimal conditions to obtain the highest oil yields, ≥ 52% (melon seed) and ≥ 48% (coconut), are a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50 °C. Both samples studied have the potential to yield oil, with melon seed giving the higher yield.

Keywords: Coconut, melon, optimization, processing.

909 The Performance of Predictive Classification Using Empirical Bayes

Authors: N. Deetae, S. Sukparungsee, Y. Areepong, K. Jampachaisri

Abstract:

This research aims to compare the percentage of correct classification of the Empirical Bayes (EB) method with that of the classical method when data are constructed as near-normal, short-tailed and long-tailed symmetric, and short-tailed and long-tailed asymmetric. The study is performed using a conjugate prior for the normal distribution with known mean and unknown variance. The estimated hyperparameters obtained from the EB method are substituted into the posterior predictive probability and used to predict new observations. Data are generated, consisting of a training set and a test set, with sample sizes of 100, 200 and 500 for binary classification. The results show that the EB method exhibits improved performance over the classical method in all situations under study.

Keywords: Classification, Empirical Bayes, Posterior predictive probability.
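
The sketch below illustrates the classification rule underlying the abstract's setting, under the standard conjugate result: with a normal likelihood of known mean and unknown variance and an inverse-gamma prior on the variance, the posterior predictive density is a scaled Student-t, and a new observation is assigned to the class with the larger predictive density. The hyperparameters here are simply fixed rather than estimated empirically (the EB step), and the data, class means and prior values are invented.

```python
# Rough sketch of posterior predictive classification (not the authors' full EB study).
import numpy as np
from scipy import stats

def posterior_predictive(x_new, train, known_mean, a0=2.0, b0=2.0):
    # inverse-gamma(a0, b0) prior on the variance; known mean
    n = len(train)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum((train - known_mean) ** 2)
    # predictive: Student-t, 2*a_n d.o.f., location known_mean, scale sqrt(b_n/a_n)
    return stats.t.pdf(x_new, df=2 * a_n, loc=known_mean, scale=np.sqrt(b_n / a_n))

rng = np.random.default_rng(3)
class0 = rng.normal(0.0, 1.0, 100)   # training data, class 0 (known mean 0)
class1 = rng.normal(3.0, 2.0, 100)   # training data, class 1 (known mean 3)

x_new = 2.2
p0 = posterior_predictive(x_new, class0, known_mean=0.0)
p1 = posterior_predictive(x_new, class1, known_mean=3.0)
print("assign to class", int(p1 > p0))
```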

908 On Simple Confidence Intervals for the Normal Mean with Known Coefficient of Variation

Authors: Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose a new confidence interval for the normal population mean with known coefficient of variation. In practice, this situation occurs in the environmental and agricultural sciences, where the standard deviation is known to be proportional to the mean; as a result, the coefficient of variation is known. The new confidence interval is based on the recent work of Khan [3] and is compared with that of our previous work; see, e.g., Niwitpong [5]. We derive analytic expressions for the coverage probability and the expected length of each confidence interval. A numerical method is used to assess the performance of these intervals based on their expected lengths.

Keywords: confidence interval, coverage probability, expected length, known coefficient of variation.
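
The two performance measures named in the abstract, coverage probability and expected length, can also be estimated by simulation, as in the sketch below. The Wald-type interval shown, which plugs the known coefficient of variation into the standard error, is only a generic placeholder and is not the interval of Khan [3] or of the authors; all parameter values are arbitrary.

```python
# Illustrative simulation of coverage probability and expected length for a
# candidate CI when the coefficient of variation tau = sigma/mu is known.
import numpy as np
from scipy import stats

def ci_known_cv(sample, tau, conf=0.95):
    n = len(sample)
    xbar = sample.mean()
    z = stats.norm.ppf(0.5 + conf / 2)
    half = z * tau * abs(xbar) / np.sqrt(n)   # sigma replaced by tau * xbar
    return xbar - half, xbar + half

def simulate(mu=10.0, tau=0.2, n=30, reps=20_000, seed=4):
    rng = np.random.default_rng(seed)
    cover, length = 0, 0.0
    for _ in range(reps):
        x = rng.normal(mu, tau * mu, n)
        lo, hi = ci_known_cv(x, tau)
        cover += lo <= mu <= hi
        length += hi - lo
    return cover / reps, length / reps

print("coverage, expected length:", simulate())
```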

907 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance; statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analyses obtained when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in the light of risk management.

Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.
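
As a conceptual companion to the abstract, the sketch below fits a symmetric and a skewed distribution to friction-angle data and propagates each through a simplified infinite-slope Mohr-Coulomb safety factor by Monte Carlo, so the resulting failure probabilities can be compared. The data are synthetic (the Brasilia results are not reproduced), the safety-factor expression is a simplified stand-in, and SciPy has no distribution literally named "Dagum", so a log-normal merely plays the role of the skewed fit here.

```python
# Conceptual sketch: distribution fit + Monte Carlo failure probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
phi_deg = rng.lognormal(mean=np.log(30), sigma=0.12, size=200)   # synthetic, skewed

fits = {
    "normal": stats.norm(*stats.norm.fit(phi_deg)),
    "lognormal": stats.lognorm(*stats.lognorm.fit(phi_deg, floc=0)),
}

def safety_factor(phi_deg, slope_deg=35.0, cohesion_term=0.15):
    # simplified infinite-slope Mohr-Coulomb form: FS = c' term + tan(phi)/tan(beta)
    return cohesion_term + np.tan(np.radians(phi_deg)) / np.tan(np.radians(slope_deg))

for name, dist in fits.items():
    samples = dist.rvs(size=200_000, random_state=rng)
    fs = safety_factor(samples)
    print(f"{name:9s}  P(FS < 1) = {np.mean(fs < 1):.4f}")
```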

906 Suggestion of Ultrasonic System for Diagnosis of Functional Gastrointestinal Disorders: Finite Difference Analysis, Development and Clinical Trials

Authors: Won-Pil Park, Qyoun-Jung Lee, Dae-Gon Woo, Chang-Yong Ko, Eun-Geun Kim, Dohyung Lim, Yong-Heum Lee, Tae-Min Shin, Han-Sung Kim

Abstract:

Functional gastrointestinal disorders have a detrimental impact on the quality of life of the affected population and impose a tremendous social and economic burden. There are, however, few diagnostic methods for functional gastrointestinal disorders. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than in healthy people when palpating the abdominal regions overlying the gastrointestinal tract. The objective of the current study is, therefore, to identify the feasibility of a diagnostic system for functional gastrointestinal disorders based on an ultrasound technique that can quantify the characteristics above. Two-dimensional finite difference (FD) models (one normal and two rigid models) were developed to analyze the reflective characteristic (displacement) of each soft-tissue layer in response to the application of ultrasound signals. The FD analysis was based on elastic ultrasound theory. Validation of the models was performed by comparing the characteristics of the ultrasonic responses predicted by the FD analysis with those determined from actual specimens for the normal and rigid conditions. Based on the results from the FD analysis, an ultrasound system for the diagnosis of functional gastrointestinal disorders was developed and clinically tested by applying it to 40 human subjects with/without functional gastrointestinal disorders, who were assigned to the Normal and Patient Groups. The FD models were favorably validated. The results from the FD analysis showed that the maximum displacement amplitude in the rigid models (0.12 and 0.16) at the interface between the fat and muscle layers was clearly less than that in the normal model (0.29). The results from the actual specimens showed that the maximum amplitude of the ultrasonic reflective signal in the rigid models (0.2±0.1 Vp-p) at the interface between the fat and muscle layers was clearly higher than that in the normal model (0.1±0.2 Vp-p). Clinical tests using our customized ultrasound system showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall in the Patient Group (2.6±0.3 Vp-p) were generally higher than those in the Normal Group (0.1±0.2 Vp-p). The maximum reflective signals appeared at a depth of approximately 20 mm from the abdominal skin for all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These findings suggest that our customized ultrasound system using the ultrasonic reflective signal may be helpful for the diagnosis of functional gastrointestinal disorders.

Keywords: Finite Difference (FD) Analysis, Functional Gastrointestinal Disorders, Gastrointestinal Tract, Ultrasonic Responses.

905 Extension of a Smart Piezoelectric Ceramic Rod

Authors: Ali Reza Pouladkhan, Jalil Emadi, Hamed Habibolahiyan

Abstract:

This paper presents an exact solution and a finite element method (FEM) for a Piezoceramic Rod under static load. The cylindrical rod is made from polarized ceramics (piezoceramics) with axial poling. The lateral surface of the rod is traction-free and is unelectroded. The two end faces are under a uniform normal traction. Electrically, the two end faces are electroded with a circuit between the electrodes, which can be switched on or off. Two cases of open and shorted electrodes (short circuit and open circuit) will be considered. Finally, a finite element model will be used to compare the results with an exact solution. The study uses ABAQUS (v.6.7) software to derive the finite element model of the ceramic rod.

Keywords: Finite element method, Ceramic rod, Axial poling, Normal traction, Short circuit, Open circuit.

904 Real-World PM, PN and NOx Emission Differences among DOC+CDPF Retrofit Diesel-, Diesel- and Natural Gas-Fueled Buses

Authors: Zhiwen Yang, Jingyuan Li, Zhenkai Xie, Jian Ling, Jiguang Wang, Mengliang Li

Abstract:

To reflect the influence of after-treatment system retrofits and of the replacement of diesel vehicles with natural gas-fueled vehicles on the exhaust emissions of urban buses, a portable emission measurement system (PEMS) was employed herein to conduct real driving emission measurements. This study investigated the differences in particle number (PN), particle mass (PM), and nitrogen oxides (NOx) emissions from a China IV diesel bus retrofitted with a catalyzed diesel particulate filter (CDPF), a China IV diesel bus, and a China V natural gas bus. The results show that both tested diesel buses possess marked advantages in NOx emission control when compared to the lean-burn natural gas bus, which was equipped without any NOx after-treatment system. As to PN and PM, only the DOC+CDPF-retrofitted diesel bus exhibits substantial emission control benefits relative to the natural gas bus and, especially, the normal diesel bus. Meanwhile, the differences in PM and PN emissions between the retrofitted and normal diesel buses generally increase with increasing vehicle specific power (VSP). Furthermore, the differences in PM emissions, especially those in the higher VSP ranges, are more significant than those in PN. In addition, the peak PN particle size of the retrofitted diesel bus (32 nm) was significantly lower than that of the normal diesel bus (100 nm). These phenomena indicate that CDPF retrofitting can effectively reduce diesel bus exhaust particle emissions, especially those with large particle sizes.

Keywords: CDPF, diesel, natural gas, real-world emissions.

903 Supplier Selection by Bi-Objectives Mixed Integer Program Approach

Authors: K.-H. Yang

Abstract:

In the past, many excellent research studies were conducted on topics related to supplier selection. Because the factors considered in supplier selection are complicated and difficult to quantify, most researchers deal with supplier selection issues using qualitative approaches. Compared to qualitative approaches, quantitative approaches are less often applied in the real world. This study applies a quantitative approach to a supplier selection problem that considers operation cost and delivery reliability. With these factors, this study applies the Normalized Normal Constraint Method to solve the bi-objective mixed integer program of the supplier selection problem.

Keywords: Bi-objectives MIP, normalized normal constraint method, supplier selection, quantitative approach.

902 A Two-Step, Temperature-Staged Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal remains an abundant resource. The aim of this work was to produce a high-value hydrocarbon liquid product using a Direct Coal Liquefaction (DCL) process at relatively mild operating conditions. The temperature-staged hydrogenation approach was investigated in a dual-reactor, lab-scale pilot plant facility. The objectives included maximising the thermal dissolution of the coal in the first stage in the presence of tetralin as the hydrogen donor solvent, with 2:1 and 3:1 solvent:coal ratios. Subsequently, in the second stage, hydrogen saturation and, in particular, hydrodesulphurization (HDS) performance were assessed. Two commercial hydrotreating catalysts were investigated, viz. nickel-molybdenum (Ni-Mo) and cobalt-molybdenum (Co-Mo). GC-MS results identified 77 compounds and various functional groups present in the first- and second-stage liquid products. In the first stage, the 3:1 ratio and the liquid product yields catalysed by magnetite were favoured. The second-stage product distribution showed an increase in the BTX (benzene, toluene, xylene) quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In HDS performance and in selectivity towards the production of long- and branched-chain alkanes, Ni-Mo showed improved performance over Co-Mo, while Co-Mo is selective towards a higher concentration of cyclohexane. Over 16 days on stream each, Ni-Mo had a higher activity than Co-Mo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of high-value hydrocarbon liquids is thus demonstrated.

Keywords: Catalyst, coal, liquefaction, temperature-staged.

901 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos

Abstract:

Due to the high reliability attained by DNA tests since the 1980s, this kind of test has allowed identification in a growing number of criminal cases, including old unsolved cases that now have a chance of being solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies, based on software tools, capable of organizing the workflow and minimizing the time spent on both biological sample processing and the analysis of genetic profiles. Thus, the present work aims at the development of a software solution for forensic genetics laboratories that allows sample, criminal case and local database management, minimizing the time spent in the workflow and helping to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows and the requirements that the system must incorporate have been considered. The system uses HTML, CSS, and JavaScript as web technologies, with the NodeJS platform as the server, which offers great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and to the analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to an increased resolution of crimes. The next step of this research is its validation, so that it operates in accordance with current Brazilian national legislation.

Keywords: Database, forensic genetics, genetic analysis, sample management, software solution.

900 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns

Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally

Abstract:

This paper presents the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of normal- and high-strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal-strength concrete rectangular columns with different variables were tested in this research to study the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of columns. The paper also presents a nonlinear finite element analysis, using the ANSYS 15 finite element software, of these specimens and of another twenty high-strength concrete square columns tested by other researchers. The results indicate that the axial force-axial strain relationship obtained from the analytical model using ANSYS is in good agreement with the experimental data. The comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal- and high-strength concrete columns under concentric loading. The predicted maximum applied load and maximum strain were also confirmed to be satisfactory. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on the strength and ductility of the columns.

Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength.

899 Motor Gear Fault Diagnosis by Current, Noise and Vibration on AC Machine Considering Environment

Authors: Sun-Ki Hong, Ki-Seok Kim, Yong-Ho Cho

Abstract:

Many motors are used in industry, and therefore many researchers have studied motor failure diagnosis. In this paper, the effect of the measuring environment on the diagnosis of a fault in a gear connected to a motor shaft is studied. The fault diagnosis is performed through the comparison of a normal gear and an abnormal gear. The measured FFT data are compared with the normal data and analyzed for q-axis current, noise and vibration. The diagnosis results are compared for bad and good measuring environments. From these results, it is shown that a bad measuring environment may prevent the motor gear fault from being detected accurately. Therefore, it is emphasized that the measuring environment should be carefully prepared.

Keywords: Motor fault, Diagnosis, FFT, Vibration, Noise, q-axis current, measuring environment.
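
The comparison step described in the abstract, inspecting the FFT of a measured signal against a normal reference, can be illustrated with the short sketch below. The signals are synthetic, the gear-mesh frequency and fault components (an extra harmonic and a sideband) are invented, and the 0.1 flagging threshold is arbitrary; none of this reproduces the authors' measurements.

```python
# Simplified sketch: compare FFT spectra of a normal vs a faulty (synthetic) signal.
import numpy as np

fs = 10_000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
mesh = 200.0                     # hypothetical gear-mesh frequency, Hz

normal = np.sin(2 * np.pi * mesh * t) + 0.05 * np.random.default_rng(6).normal(size=t.size)
faulty = (normal
          + 0.4 * np.sin(2 * np.pi * 2 * mesh * t)        # extra harmonic
          + 0.3 * np.sin(2 * np.pi * (mesh + 25) * t))    # sideband

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec_normal = np.abs(np.fft.rfft(normal)) / t.size
spec_faulty = np.abs(np.fft.rfft(faulty)) / t.size

# flag frequency bins where the measured spectrum clearly exceeds the normal one
excess = spec_faulty - spec_normal
suspect = freqs[excess > 0.1]
print("suspect frequency components (Hz):", suspect[:10])
```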

898 A Survey in Techniques for Imbalanced Intrusion Detection System Datasets

Authors: Najmeh Abedzadeh, Matthew Jacobs

Abstract:

An intrusion detection system (IDS) is a software application that monitors malicious activities and generates alerts if any are detected. However, most network activities in IDS datasets are normal, and the relatively small number of attacks makes the available data imbalanced. Consequently, cyber-attacks can hide inside a large number of normal activities, and machine learning algorithms have difficulty learning and classifying the data correctly. In this paper, a comprehensive literature review is conducted on different types of algorithms both for implementing the IDS and for correcting the imbalanced IDS dataset. The most common algorithms are machine learning (ML), deep learning (DL), the synthetic minority over-sampling technique (SMOTE), and reinforcement learning (RL). Most of the research uses the CSE-CIC-IDS2017, CSE-CIC-IDS2018, and NSL-KDD datasets for evaluating the algorithms.

Keywords: IDS, intrusion detection system, imbalanced datasets, sampling algorithms, big data.
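
One of the rebalancing techniques mentioned above, SMOTE, is shown in the brief sketch below using the imbalanced-learn package on a synthetic stand-in for an IDS dataset; the loading and feature engineering of the real CSE-CIC-IDS or NSL-KDD data are omitted, and the class proportions are made up.

```python
# Brief sketch: oversample the minority "attack" class with SMOTE.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE  # assumes imbalanced-learn is installed

# 1% "attack" class vs 99% "normal" traffic, 20 numeric features (synthetic)
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=7)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=7).fit_resample(X, y)
print("after: ", Counter(y_res))
# X_res, y_res would then feed the ML/DL classifier of choice.
```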

897 A Combined Approach of a Sequential Life Testing and an Accelerated Life Testing Applied to a Low-Alloy High Strength Steel Component

Authors: D. I. De Souza, D. R. Fonseca, G. P. Azevedo

Abstract:

Sometimes the amount of time available for testing can be considerably less than the expected lifetime of the component. To overcome such a problem, there is the accelerated life-testing alternative, aimed at forcing components to fail by testing them at much higher-than-intended application conditions; the corresponding models are known as acceleration models. One possible way to translate test results obtained under accelerated conditions to normal use conditions is through the application of the “Maxwell Distribution Law.” In this paper, we apply a combined approach of sequential life testing and accelerated life testing to a low-alloy, high-strength steel component used in the construction of overpasses in Brazil. The underlying sampling distribution is the three-parameter Inverse Weibull model. To estimate the three parameters of the Inverse Weibull model, we use a maximum likelihood approach for censored failure data. We assume a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model, we apply a sequential life test with a truncation mechanism to the expected normal failure times. An example illustrates the application of this procedure.

Keywords: Sequential Life Testing, Accelerated Life Testing, Underlying Three-Parameter Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.
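
The censored-data maximum likelihood step mentioned in the abstract can be sketched as below, under the assumption that SciPy's invweibull (a Fréchet/inverse Weibull with shape, location and scale) matches the paper's three-parameter Inverse Weibull parameterization. The failure times and censoring flags are invented, and the sequential test and the Maxwell-law translation from accelerated to normal conditions are not shown.

```python
# Sketch: censored-data MLE for a three-parameter inverse Weibull (Frechet) model.
import numpy as np
from scipy import stats, optimize

times = np.array([412., 530., 610., 655., 700., 742., 801., 845., 900., 950.])
censored = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1], dtype=bool)  # 1 = still running

def neg_log_likelihood(params):
    c, loc, scale = params
    if c <= 0 or scale <= 0 or loc >= times.min():
        return np.inf
    # failures contribute log pdf; censored units contribute log survival
    ll = stats.invweibull.logpdf(times[~censored], c, loc=loc, scale=scale).sum()
    ll += stats.invweibull.logsf(times[censored], c, loc=loc, scale=scale).sum()
    return -ll

res = optimize.minimize(neg_log_likelihood, x0=[2.0, 0.0, 600.0],
                        method="Nelder-Mead")
print("MLE (shape, location, scale):", res.x)
```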

896 Kernel’s Parameter Selection for Support Vector Domain Description

Authors: Mohamed EL Boujnouni, Mohamed Jedra, Noureddine Zahid

Abstract:

Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods, in which one uses balls defined in the feature space in order to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach to selecting the kernel parameter, based on maximizing the distance between the gravity centers of the normal and abnormal classes while, at the same time, minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks. The experimental results demonstrate the feasibility and effectiveness of the presented method.

Keywords: Gravity centers, Kernel’s parameter, Support Vector Domain Description, Variance.
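
The selection criterion described above can be evaluated entirely through the kernel trick, as the sketch below shows for an RBF kernel scanned over a grid of gamma values: the squared distance between class gravity centers in feature space is mean(K_AA) - 2·mean(K_AB) + mean(K_BB), and the within-class variance is 1 - mean(K_AA) since k(x, x) = 1 for the RBF kernel. The synthetic data and the simple difference used to trade the two terms off are placeholders, not necessarily the authors' exact formulation.

```python
# Sketch: kernel-parameter selection by between-class distance vs within-class variance.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_blobs(n_samples=300, centers=2, cluster_std=[1.0, 2.0], random_state=8)
A, B = X[y == 0], X[y == 1]        # "normal" and "abnormal" classes

def criterion(gamma):
    K_AA = rbf_kernel(A, A, gamma)
    K_BB = rbf_kernel(B, B, gamma)
    K_AB = rbf_kernel(A, B, gamma)
    # squared distance between the two gravity centers in feature space
    dist2 = K_AA.mean() - 2 * K_AB.mean() + K_BB.mean()
    # within-class variance in feature space (k(x, x) = 1 for the RBF kernel)
    var_within = (1 - K_AA.mean()) + (1 - K_BB.mean())
    return dist2 - var_within      # one simple way to trade the two terms off

gammas = np.logspace(-3, 1, 20)
best = max(gammas, key=criterion)
print("selected gamma:", best)
```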

895 Supplementary JAVA Programming Course for e-Learning with Small-Group Instruction

Authors: Eiko Takaoka, Yuji Osawa

Abstract:

We have designed and implemented e-Learning materials for a JAVA programming course since 2004 and have found that “normal” students, meaning motivated and capable students, can successfully learn the course material taught in a fully online manner. However, for “weaker” students, meaning those lacking motivation, experience, and/or aptitude, the results have been unsatisfactory, and such students thus fall into the supplementary category. From 2007 to 2008, we offered a face-to-face class with small-group instruction for the weaker students, while we provided the fully online course for the normal students. Consequently, we succeeded in helping the weaker students to overcome their programming phobia and develop the ability to create basic programs.

Keywords: e-learning, JAVA Programming Course, Small-Group Instruction, Supplementary.
