Search results for: Preliminary feasibility study.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13192

12562 Developing an Advanced Algorithm Capable of Classifying News, Articles and Other Textual Documents Using Text Mining Techniques

Authors: R. B. Knudsen, O. T. Rasmussen, R. A. Alphinas

Abstract:

The purpose of this research is to develop an algorithm capable of classifying news articles from the automobile industry according to the competitive actions that they entail, using Text Mining (TM) methods. The data must be properly preprocessed, so pipelines are prepared that fit each algorithm best. The pipelines are tested along with nine different classification algorithms from the realms of regression, support vector machines, and neural networks. Preliminary testing to identify the optimal pipelines and algorithms resulted in the selection of two algorithms, each with its own pipeline: Logistic Regression (LR) and an Artificial Neural Network (ANN). These algorithms are optimized further by testing several parameters of each. The best result is achieved with the ANN. The final model yields an accuracy of 0.79, a precision of 0.80, a recall of 0.78, and an F1 score of 0.76. By removing three of the classes that created noise, the final algorithm reaches an accuracy of 94%.
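
As an illustration of the kind of pipeline the abstract describes, the sketch below (hypothetical, assuming scikit-learn; the study's actual preprocessing steps, features, classes and hyperparameters are not reproduced) compares a logistic regression and a small feed-forward neural network on TF-IDF features:

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # placeholder training data standing in for labelled automobile-industry news articles
    train_texts = ["price cut announced by dealer", "new model launched this week",
                   "rebate program extended", "concept car unveiled at show"]
    train_labels = ["pricing_action", "product_action", "pricing_action", "product_action"]

    pipelines = {
        "LR": Pipeline([("tfidf", TfidfVectorizer()),
                        ("clf", LogisticRegression(max_iter=1000))]),
        "ANN": Pipeline([("tfidf", TfidfVectorizer()),
                         ("clf", MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0))]),
    }
    for name, pipe in pipelines.items():
        pipe.fit(train_texts, train_labels)
        print(name, pipe.predict(["price increase announced"]))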

Keywords: Artificial neural network, competitive dynamics, logistic regression, text classification, text mining.

12561 Classifier Based Text Mining for Neural Network

Authors: M. Govindarajan, R. M. Chandrasekaran

Abstract:

Text Mining applies knowledge discovery techniques to unstructured text; this is also termed knowledge discovery in text (KDT) or text data mining. In neural networks that address classification problems, the training set, testing set, and learning rate are key elements: the collection of input/output patterns used to train the network, the patterns used to assess network performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural net classifier that performs cross-validation for the original neural network, in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-nega-data. It is shown that, compared to the existing neural network, training time is reduced by a factor of more than 10 when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than with the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
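
A minimal sketch of cross-validating a back-propagation classifier and timing it, assuming scikit-learn; the iris data set here is only a stand-in for the data sets named in the abstract:

    import time
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)                    # stand-in for contact-lenses, cpu, etc.
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)

    start = time.time()
    scores = cross_val_score(clf, X, y, cv=5)            # 5-fold cross-validated 'percent correct'
    print("mean accuracy:", scores.mean(), "elapsed seconds:", round(time.time() - start, 2))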

Keywords: Back propagation, classification accuracy, text mining, time complexity.

12560 Public Economic Efficiency and Case-Based Reasoning: A Theoretical Framework to Police Performance

Authors: Javier Parra-Domínguez, Juan Manuel Corchado

Abstract:

At present, public efficiency is a concept that aims to maximize the return on public investment by minimizing the use of resources and maximizing the outputs. The concept takes into account statistical criteria drawn up according to techniques such as DEA (Data Envelopment Analysis). The purpose of the current work is to consider, more precisely, the theoretical application of CBR (Case-Based Reasoning) from economics and computer science as a preliminary step towards improving the efficiency of law enforcement agencies (the public sector). With the aim of increasing the efficiency of the public sector, we have entered a phase whose main objective is the implementation of new technologies. Our main conclusion is that the application of computer techniques, such as CBR, has become key to the efficiency of the public sector, which continues to require economic valuation based on methodologies such as DEA. As a theoretical result and conclusion, the incorporation of CBR systems will reduce the number of inputs and increase, theoretically, the number of outputs generated based on previous computer knowledge.

Keywords: Case-based reasoning, knowledge, police, public efficiency.

12559 Evaluation of Numerical Modeling of Jet Grouting Design Using in situ Loading Test

Authors: Reza Ziaie Moayed, Ehsan Azini

Abstract:

Jet grouting (JG) is one of the methods of improving and increasing the strength and bearing capacity of soil, in which high-pressure water or grout is injected through nozzles into the soil. During this process, part of the soil and grout particles comes out of the drill borehole, and the remainder is mixed with the grout in place; as a result, a mass of modified soil is created. The purpose of this method is to change the soil into a mixture of soil and cement, commonly known as "soil-cement". In this paper, the principles of high-pressure injection and the effective parameters of the JG method are first described. Then, tests on samples taken from columns exposed by excavating around the soil-cement columns, as well as a static loading test on a constructed column, are discussed. Another part of the paper describes the soil behavior models used for numerical modeling in the PLAXIS software. The purpose of this paper is to evaluate the results of numerical modeling against the in-situ static loading tests. The results indicate an acceptable agreement between the tests mentioned and the modeling results. Modeling with this software can therefore be used as an appropriate option for assessing the technical feasibility of soil improvement using JG.

Keywords: Jet grouting column, Soil improvement, Numerical modeling, In-situ loading test.

12558 Feasibility Investigation of Near Infrared Spectrometry for Particle Size Estimation of Nano Structures

Authors: A. Bagheri Garmarudi, M. Khanmohammadi, N. Khoddami, K. Shabani

Abstract:

Determination of nanoparticle size is important, since particle size exerts a significant effect on various properties of nanomaterials. Accordingly, proposing non-destructive, accurate and rapid techniques for this purpose is of high interest. There are some conventional techniques to investigate the morphology and grain size of nanoparticles, such as scanning electron microscopy (SEM), atomic force microscopy (AFM) and X-ray diffractometry (XRD). Vibrational spectroscopy is utilized to characterize different compounds and has been applied for evaluation of the average particle size based on the relationship between particle size and near-infrared spectra [1,4], but it has never been applied in quantitative morphological analysis of nanomaterials. So far, the potential application of near-infrared (NIR) spectroscopy, with its ability for rapid analysis of powdered materials with minimal sample preparation, has been suggested for particle size determination of powdered pharmaceuticals. The relationship between particle size and diffuse reflectance (DR) spectra in the near-infrared region has been applied to introduce a method for estimation of particle size. A back-propagation artificial neural network (BP-ANN), as a nonlinear model, was applied to estimate average particle size based on near-infrared diffuse reflectance spectra. Thirty-five different nano-TiO2 samples with different particle sizes were analyzed by DR-FTNIR spectrometry and the obtained data were processed by BP-ANN.
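
A hedged sketch of the regression step, assuming scikit-learn; the spectra and reference sizes below are synthetic placeholders, not the measured DR-FTNIR data:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    spectra = rng.random((35, 200))          # placeholder: 35 samples x 200 NIR wavelength channels
    sizes = rng.uniform(10, 100, 35)         # placeholder: reference particle sizes in nm

    X_train, X_test, y_train, y_test = train_test_split(spectra, sizes, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
    ann.fit(X_train, y_train)
    print("predicted sizes (nm):", ann.predict(X_test[:3]))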

Keywords: near infrared, particle size, chemometrics, neural network, nano structure.

12557 Biosorption of Cu (II) and Zn (II) from Real Wastewater onto Cajanus cajan Husk

Authors: Mallappa A. Devani, John U. Kennedy Oubagaranadin, Basudeb Munshi

Abstract:

In this preliminary work, locally available husk of Cajanus cajan (commonly known in India as Tur or Arhar), a bio-waste, has been used in its physically treated and chemically activated forms for the removal of binary Cu(II) and Zn(II) ions from real wastewater obtained from an electroplating industry in Bangalore, Karnataka, India, and, for comparison, from laboratory-prepared binary solutions having almost the same composition of the metal ions. The real wastewater, after filtration and five-fold dilution, was used for biosorption studies at the normal pH of the solutions at room temperature. Langmuir's binary model was used to calculate the metal uptake capacities of the biosorbents. It was observed that Cu(II) is more competitive than Zn(II) in biosorption. In individual metal biosorption, Cu(II) uptake was found to be higher than that of Zn(II), and a similar trend was observed in the binary metal biosorption from real wastewater and laboratory-prepared solutions. FTIR analysis was carried out to identify the functional groups in the industrial wastewater, and EDAX was used for the elemental analysis of the biosorbents after the experiments.
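
For reference, the extended (competitive) Langmuir model commonly used for such binary systems takes the form q_i = q_max,i K_i C_i / (1 + K_Cu C_Cu + K_Zn C_Zn); the sketch below evaluates it with hypothetical constants, not the values fitted in the study:

    def binary_langmuir(c_cu, c_zn, qmax_cu, k_cu, qmax_zn, k_zn):
        """Competitive (extended) Langmuir uptake (mg/g) for a binary Cu(II)/Zn(II) system."""
        denom = 1.0 + k_cu * c_cu + k_zn * c_zn
        q_cu = qmax_cu * k_cu * c_cu / denom
        q_zn = qmax_zn * k_zn * c_zn / denom
        return q_cu, q_zn

    # hypothetical isotherm constants, not the fitted values of the study
    print(binary_langmuir(c_cu=20.0, c_zn=20.0, qmax_cu=25.0, k_cu=0.15, qmax_zn=18.0, k_zn=0.08))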

Keywords: Biosorption, Cajanus cajan, multi metal remediation, wastewater.

12556 Load Discontinuity in Shock Response and Its Remedies

Authors: Shuenn-Yih Chang, Chiu-Li Huang

Abstract:

It has been shown that a load discontinuity at the end of an impulse will result in an extra impulse, and hence extra amplitude distortion, if a step-by-step integration method is employed to obtain the shock response. In order to overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, where an external momentum is used in the solution of the momentum equation of motion. Since the external momentum is the result of the time integration of the external force, the problem of load discontinuity automatically disappears. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still be conducted with the step size determined from general considerations. This works because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different load values at the integration point of the load discontinuity, instead of either one of them, as the loading input. The basic motivation of this remedy originates from the concept of introducing no loading input error at the integration point of the load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.

Keywords: Dynamic analysis, load discontinuity, shock response, step-by-step integration

12555 The Relationship of Knowledge Management Practices, Competencies and the Organizational Performance of Government Departments in Malaysia

Authors: Raja Suzana Raja Kasim

Abstract:

This paper attempts to highlight the significant role of knowledge management practices (KMP) and competencies in improving the performance and efficiency of public sector organizations. Public sector organizations in developing countries have not received much attention in the research literature on knowledge management and competencies. Therefore, this paper seeks to explore, in a broader perspective, the role of KMP and competencies in achieving superior performance among public sector organizations in Malaysia. Survey questionnaires were distributed to all Administrative and Diplomatic Officers (ADS) from 28 ministries located in Putrajaya, Malaysia. This paper also examines preliminary empirical results on the relationship between support for knowledge management practices, competencies, and orientation in Malaysia's public organizations. The paper supports the notion that knowledge management practices at the organizational level are a prerequisite for successful organizational performance. In conclusion, the results have the potential to contribute not only theoretically to the literature on management strategy and knowledge management but also to the area of organizational performance.

Keywords: knowledge, knowledge management practices, competencies, organizational performance

12554 Flood Control Structures in the River Göta Älv to Protect Gothenburg City (Sweden) during the 21st Century - Preliminary Evaluation

Authors: M. Irannezhad, E. H. N. Gashti, U. Moback, B. Kløve

Abstract:

Climate change is expected to cause mean sea level to rise by about 1 m by 2100. To prevent coastal floods resulting from this sea level rise, different flood control structures have been built, with acceptable protection levels. Gothenburg, situated on the River Göta älv on the southwest coast of Sweden, is a city vulnerable to accelerated rises in mean sea level. We evaluated the use of a sea barrage in the River Göta älv to protect Gothenburg during this century. The highest sea level was estimated at 2.95 m above the current mean sea level by 2100. To provide flood protection against such high sea levels, both barriers of the barrage have to be closed. To prevent high water levels in the River Göta älv reservoir, the barriers would be opened when the sea level is low. The suggested flood control structures would successfully protect the city from flooding events during this century.

Keywords: Climate change, Flood control structures, Gothenburg, Sea level rising, Water level model.

12553 A Group Setting of IED in Microgrid Protection Management System

Authors: Jyh-Cherng Gu, Ming-Ta Yang, Chao-Fong Yan, Hsin-Yung Chung, Yung-Ruei Chang, Yih-Der Lee, Chen-Min Chan, Chia-Hao Hsu

Abstract:

A number of Distributed Generations (DGs) are installed in a microgrid, which may produce diverse paths and directions of power flow or fault current. The overcurrent protection scheme for the traditional radial-type distribution system therefore no longer meets the needs of microgrid protection. Integrating Intelligent Electronic Devices (IEDs) and a Supervisory Control and Data Acquisition (SCADA) system with the IEC 61850 communication protocol, the paper proposes a Microgrid Protection Management System (MPMS) to protect the power system from faults. In the proposed method, the MPMS performs logic programming of each IED to coordinate their tripping sequence. The GOOSE message defined in IEC 61850 is used as the transmission medium among IEDs. Moreover, to cope with the difference in microgrid fault current between grid-connected mode and islanded mode, the proposed MPMS applies the group setting feature of the IEDs to give the protection system robust adaptability. Once the microgrid topology varies, the MPMS recalculates the fault currents and updates the group settings of the IEDs. When a fault occurs, the IEDs isolate it at once. Finally, the Matlab/Simulink and Elipse Power Studio software are used to simulate and demonstrate the feasibility of the proposed method.

Keywords: IEC 61850, IED, Group Setting, Microgrid.

12552 Performance Optimization of Data Mining Application Using Radial Basis Function Classifier

Authors: M. Govindarajan, R. M.Chandrasekaran

Abstract:

Text data mining is a process of exploratory data analysis. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before examining the data. This paper describes a proposed radial basis function classifier that performs comparative cross-validation against the existing radial basis function classifier. The feasibility and the benefits of the proposed approach are demonstrated by means of a data mining problem: direct marketing, which has become an important application field of data mining. Comparative cross-validation involves estimation of accuracy by either stratified k-fold cross-validation or equivalent repeated random subsampling. While the proposed method may have high bias, its performance (accuracy estimation in our case) may also be poor due to high variance; thus the accuracy with the proposed radial basis function classifier was lower than with the existing radial basis function classifier. However, there is a smaller improvement in runtime and a larger improvement in precision and recall. In the proposed method, classification accuracy and prediction accuracy are determined, where the prediction accuracy is comparatively high.
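
One common way to realise a radial basis function classifier is k-means centres followed by a linear read-out on Gaussian features; the sketch below (hypothetical, assuming scikit-learn and synthetic data in place of the direct-marketing set) also shows the stratified k-fold accuracy estimation mentioned above:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import RidgeClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import FunctionTransformer

    def rbf_features(X, centers, gamma=1.0):
        # Gaussian activations of each sample with respect to the k-means centres
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)  # stand-in data
    # centres are fitted once on all data here for brevity
    centers = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X).cluster_centers_
    rbf_net = make_pipeline(FunctionTransformer(rbf_features,
                                                kw_args={"centers": centers, "gamma": 0.05}),
                            RidgeClassifier())
    scores = cross_val_score(rbf_net, X, y, cv=StratifiedKFold(n_splits=10))  # stratified 10-fold CV
    print("estimated accuracy:", scores.mean())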

Keywords: Text Data Mining, Comparative Cross-validation, Radial Basis Function, runtime, accuracy.

12551 Cytotoxic Effects of Engineered Nanoparticles in Human Mesenchymal Stem Cells

Authors: Ali A. Alshatwi, Vaiyapuri S. Periasamy, Jegan Athinarayanan

Abstract:

The use of engineered nanoparticles has increased rapidly in various applications over the last decade due to their unusual properties. However, there is ever-increasing concern to understand their toxicological effects on human health. In particular, metal and metal oxide nanoparticles have been used in various sectors including biomedicine, food and agriculture, but their impact on human health is yet to be fully understood. In the present investigation, we assessed the toxic effects of engineered nanoparticles (ENPs), including Ag, MgO and Co3O4 nanoparticles (NPs), on human mesenchymal stem cells (hMSC), adopting cell viability and cellular morphological changes as tools. The results suggest that silver NPs are more toxic than MgO and Co3O4 NPs. The ENPs induced cytotoxicity and nuclear morphological changes in hMSC in a dose-dependent manner; cell viability decreases with increasing concentration of ENPs. The cellular morphology studies revealed that the ENPs damaged the cells. These preliminary findings have implications for the use of these nanoparticles in the food industry with systematic regulations.

Keywords: Cobalt oxide, Human mesenchymal stem cells, MgO, Silver.

12550 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects

Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad

Abstract:

Nowadays, one of the most important problems in megacities is public transportation and citizens' satisfaction with it, with the aim of decreasing traffic congestion and air pollution. Accordingly, to improve passenger transit and increase travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram, and monorail have expanded, each with different merits and demerits. That is why comparing different systems for a systematic selection of public transportation systems is essential in a big city like Tehran, which has numerous problems in terms of traffic and pollution. In this paper, the advantages and feasibility of using monorail, tram and BRT systems, which are widely used in most megacities all over the world, are investigated. Using SPSS statistical analysis software and the TOPSIS method, these three modes are compared for Tehran and the results are assessed. Experts experienced in the transportation field answered the prepared matrix questionnaire for each public transportation mode (tram, monorail, and BRT). The results, according to the experts' judgments, show that monorail has the first priority, tram the second, and BRT the third, according to the considered indices: execution costs, time losses, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio and traffic congestion.
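
For reference, TOPSIS ranks alternatives by their relative closeness to an ideal and an anti-ideal solution; the sketch below uses hypothetical criteria scores and weights, not the experts' actual judgments:

    import numpy as np

    def topsis(matrix, weights, benefit):
        """matrix: alternatives x criteria; benefit[j]=True if larger is better for criterion j."""
        norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))          # vector normalisation
        v = norm * weights                                          # weighted normalised matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))     # positive ideal solution
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))      # negative ideal solution
        d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_minus / (d_plus + d_minus)                         # closeness: higher = better

    # hypothetical criteria: [execution cost, travel time, passenger satisfaction]
    scores = np.array([[8.0, 6.0, 7.0],    # BRT
                       [6.0, 5.0, 8.0],    # tram
                       [9.0, 4.0, 9.0]])   # monorail
    weights = np.array([0.4, 0.3, 0.3])
    closeness = topsis(scores, weights, benefit=np.array([False, False, True]))
    print(dict(zip(["BRT", "tram", "monorail"], closeness.round(3))))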

Keywords: Bus Rapid Transit, Costs, Monorail, Pollution, Tram.

12549 Analysis of Mechanical Properties for AP/HTPB Solid Propellant under Different Loading Conditions

Authors: Walid M. Adel, Liang Guo-Zhu

Abstract:

To characterize the mechanical properties of a composite solid propellant (CSP) based on hydroxyl-terminated polybutadiene (HTPB) at different temperatures and strain rates, uniaxial tensile tests were conducted over a range of temperatures from -60 °C to +76 °C and strain rates from 0.000164 to 0.328084 s-1 using a conventional universal testing machine. From the experimental data, it can be noted that the mechanical properties of the AP/HTPB propellant depend mainly on the applied strain rate and the temperature. The stress-strain responses exhibited initial yielding followed by a viscoelastic phase, which was strongly affected by strain rate and temperature. It was found that the mechanical properties increase with both increasing strain rate and decreasing temperature. Based on the experimental tests, master curves of the tensile properties were drawn using a predetermined shift factor, and the results are discussed. This work is a first step in the preliminary investigation of the nonlinear viscoelastic behavior of CSP.
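
Master curves of this kind are typically built by time-temperature superposition, shifting the data measured at each temperature along the logarithmic rate axis by a shift factor a_T; the sketch below assumes a WLF-type shift factor with illustrative constants, which is only one possible form and not necessarily the one used in the study:

    import numpy as np

    def wlf_shift_factor(T, T_ref, c1, c2):
        """log10(a_T) from the WLF equation; c1 and c2 here are illustrative, not fitted values."""
        return -c1 * (T - T_ref) / (c2 + (T - T_ref))

    strain_rates = np.array([0.000164, 0.00164, 0.0164])   # s^-1, within the tested range
    T, T_ref = -20.0, 20.0                                  # degrees C
    log_aT = wlf_shift_factor(T, T_ref, c1=8.0, c2=120.0)
    reduced_rates = np.log10(strain_rates) + log_aT         # shifted (reduced) log strain rates
    print("log10(a_T) =", round(log_aT, 3), "reduced log rates:", reduced_rates.round(3))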

Keywords: AP/HTPB composite solid propellant, mechanical behavior, nonlinear viscoelastic, tensile test, master curves.

12548 Italians' Social and Emotional Loneliness: The Results of Five Studies

Authors: Vanda Lucia Zammuner

Abstract:

Subjective loneliness describes people who feel a disagreeable or unacceptable lack of meaningful social relationships, at both the quantitative and qualitative level. The studies presented here tested an Italian 18-item self-report loneliness measure that included items adapted from previously developed scales, namely a short version of the UCLA scale (Russell, Peplau and Cutrona, 1980) and the 11-item Loneliness Scale by De Jong-Gierveld and Kamphuis (JGLS; 1985). The studies aimed at testing the developed scale and at verifying whether loneliness is better conceptualized as a unidimensional construct (so-called 'general loneliness') or a bidimensional construct comprising the distinct facets of social and emotional loneliness. The loneliness questionnaire included two single-item criterion measures of sad mood and social contact, and asked participants to supply information on a number of socio-demographic variables. Factorial analyses of responses obtained in two preliminary studies, with 59 and 143 Italian participants respectively, showed good factor loadings and subscale reliability, and confirmed that perceived loneliness clearly has two components, a social and an emotional one, the latter measured by two subscales: a 7-item 'general' loneliness subscale derived from the UCLA, and a 6-item 'emotional' scale included in the JGLS. Results further showed that type and amount of loneliness are related negatively to frequency of social contacts and positively to sad mood. In a third study, data were obtained from a nation-wide sample of 9,097 Italian subjects, aged 12 to about 70, who filled in the test online on the Italian web site of a large-audience magazine, Focus. The results again confirmed the reliability of the component subscales, namely social, emotional, and 'general' loneliness, and showed that they were highly correlated with each other, especially the latter two. Loneliness scores were significantly predicted by sex, age, education level, sad mood and social contact, and, less so, by other variables, e.g., geographical area and profession. The scale's validity was confirmed by the results of a fourth study with elderly men and women (N = 105) living at home or in residential care units. The three subscales were significantly related, among others, to depression and to various measures of the extent of, and satisfaction with, social contacts with relatives and friends. Finally, a fifth study with 315 career-starters showed that social and emotional loneliness correlate with life satisfaction and with measures of emotional intelligence. Altogether the results showed good validity and reliability of the entire scale and of its components in the tested samples.

Keywords: Emotional loneliness, social loneliness, scale development and testing, life span and cultural differences.

12547 Separating Permanent and Induced Magnetic Signature: A Simple Approach

Authors: O. J. G. Somsen, G. P. M. Wagemakers

Abstract:

Magnetic signature detection provides sensitive detection of metal objects, especially in the natural environment. Our group is developing a tabletop setup for measuring the magnetic signatures of various small and model objects. A particular issue is the separation of permanent and induced magnetization. While the latter depends only on the composition and shape of the object, the former also depends on the magnetization history. With common deperming techniques, a significant permanent signature may still remain, which confuses measurements of the induced component. We investigate a basic technique for separating the two. Measurements were done by moving the object along an aluminum rail while the three field components were recorded by a detector mounted near the center. This is done first with the rail parallel to the Earth's magnetic field and then in the anti-parallel orientation. The reversal changes the sign of the induced, but not the permanent, magnetization, so that the two can be separated. Our preliminary results on a small iron block show excellent reproducibility. A considerable permanent magnetization was indeed present, resulting in a complex asymmetric signature. After separation, a much more symmetric induced signature was obtained that can be studied in detail and compared with theoretical calculations.
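
Because the reversal flips the sign of the induced component only, the two parts follow from simple arithmetic on the two passes: permanent = (B_parallel + B_antiparallel) / 2 and induced = (B_parallel - B_antiparallel) / 2. A small sketch with hypothetical field readings:

    import numpy as np

    # hypothetical recorded field components (e.g. in nT) at one rail position
    b_parallel = np.array([120.0, -35.0, 60.0])       # rail parallel to Earth's field
    b_antiparallel = np.array([40.0, -15.0, -20.0])   # rail anti-parallel (induced part reversed)

    permanent = 0.5 * (b_parallel + b_antiparallel)   # unchanged by the reversal
    induced = 0.5 * (b_parallel - b_antiparallel)     # changes sign with the reversal
    print("permanent:", permanent, "induced:", induced)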

Keywords: Magnetic signature, data analysis, magnetization, deperming techniques.

12546 Split-Pipe Design of Water Distribution Network Using Simulated Annealing

Authors: J. Tospornsampan, I. Kita, M. Ishii, Y. Kitamura

Abstract:

In this paper a procedure for the split-pipe design of looped water distribution networks based on simulated annealing is proposed. Simulated annealing is a heuristic search algorithm, motivated by an analogy with physical annealing in solids, and is capable of solving combinatorial optimization problems. In contrast to split-pipe designs derived from a continuous-diameter design, as implemented in conventional optimization techniques, the split-pipe design proposed in this paper is derived from a discrete-diameter design in which pipe diameters are chosen directly from a specified set of commercial pipes. The optimality and feasibility of the solutions are found to be guaranteed by the proposed method. The performance of the proposed procedure is demonstrated by solving three well-known water distribution network problems taken from the literature. Simulated annealing provides very promising solutions, and the lowest-cost solutions are found for all of these test problems. The results obtained from these applications show that simulated annealing is able to handle the combinatorial optimization problem of least-cost design of water distribution networks. The technique can be considered as an alternative tool for similar areas of research. Further applications and improvements of the technique are expected as well.
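
A generic simulated annealing loop over discrete commercial diameters is sketched below; the cost/penalty function is a hypothetical stand-in for the hydraulic simulation and cost data of the benchmark networks:

    import math, random

    diameters = [100, 150, 200, 250, 300]           # commercial pipe diameters (mm), illustrative
    n_pipes = 8

    def cost(design):
        # placeholder objective: pipe cost plus a penalty for undersized networks;
        # a real study would run a hydraulic solver and penalise pressure violations
        return sum(d * 1.5 for d in design) + 5000.0 * max(0, 1600 - sum(design))

    random.seed(0)
    current = [random.choice(diameters) for _ in range(n_pipes)]
    best, T = list(current), 1000.0
    while T > 1.0:
        candidate = list(current)
        candidate[random.randrange(n_pipes)] = random.choice(diameters)   # perturb one pipe
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):           # Metropolis acceptance
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        T *= 0.99                                                          # geometric cooling
    print("best design:", best, "cost:", round(cost(best), 1))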

Keywords: Combinatorial problem, Heuristics, Least-cost design, Looped network, Pipe network, Optimization

12545 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention on the side of both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that play a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. Therefore, this study investigated whether there is any significant difference between the effects of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. Also, the speaking sections of the PET were used as pretest and posttest to examine their speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was estimated as the percentage of clauses with no grammatical errors among the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during treatment sessions, and the students were required to practice conversational strategies when doing speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a more significant impact on encouraging the negotiation of meaning and increasing the number of conversational strategies used in each session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks being more effective than the information gap tasks in improving the learners' speaking accuracy.
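
The accuracy measure used here reduces to a single ratio, the percentage of error-free clauses among all clauses produced; a minimal computation with hypothetical counts:

    def speaking_accuracy(error_free_clauses, total_clauses):
        """Accuracy as the percentage of clauses with no grammatical errors."""
        return 100.0 * error_free_clauses / total_clauses

    print(speaking_accuracy(error_free_clauses=42, total_clauses=60))   # hypothetical counts -> 70.0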

Keywords: Accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks.

12544 Honey Contamination in the Republic of Kazakhstan

Authors: B. Sadepovich Maikanov, Z. Shabanbayevich Adilbekov, R. Husainovna Mustafina, L. Tyulegenovna Auteleyeva

Abstract:

This study provides detailed information about contaminants of honey in the Republic of Kazakhstan. The requirements of the technical regulation 'Requirements for the safety of honey and bee products' and GOST 19792-2001 were taken into account in this research. Contamination of honey by antibiotics was determined by immune-enzyme analysis (IEA) using a Ridder analyzer and test systems produced by Tecna. Voltammetry (TaLab device) was used to determine contamination by salts of heavy metals, and gamma-beta spectrometry ('Progress BG' system), with preliminary ashing of the honey sample, was used to determine radioactive contamination. Residues of chloramphenicol were detected in 24% of the investigated products, streptomycin in 22%, sulfanilamide in 7.3%, tylosin in 2.4%, and combined contamination was noted in 12%. Geographically, the greatest degree of contamination of honey with antibiotics occurs in Northern Kazakhstan (54.4%) and Southern Kazakhstan (50%), and the lowest in Central and Eastern Kazakhstan, with 30% and 25%, respectively. Generally, pollution by heavy metals is within acceptable limits, but contamination by lead is highest in the Akmola region. The level of radioactive cesium and strontium is also within acceptable concentrations. The highest radioactivity in terms of cesium was observed in the East Kazakhstan region, at 49.00±10 Bq/kg, while in Akmola, North Kazakhstan and Almaty it was 12.00±5, 11.05±3 and 19.0±8 Bq/kg, respectively, with a norm of 100 Bq/kg. In terms of strontium, the radioactivity in the East Kazakhstan region is 25.03±15 Bq/kg, while in the Akmola, North Kazakhstan and Almaty regions it is 12.00±3, 10.2±4 and 1.0±2 Bq/kg, respectively, with a norm of 80 Bq/kg. This accumulation is mainly associated with environmental degradation and the feeding and treating of bees. Moreover, in the process of collecting nectar, external substances can penetrate the honey. Overall, this research determines factors and reasons for honey contamination.

Keywords: Antibiotics, contamination of honey, honey, radionuclides.

12543 The Potential of 48V HEV in Real Driving

Authors: Mark Schudeleit, Christian Sieg, Ferit Küçükay

Abstract:

This paper describes how to dimension the electric components of a 48V hybrid system considering real customer use. Furthermore, it provides information about the savings in energy and CO2 emissions achievable with a customer-tailored 48V hybrid. Based on measured customer profiles, the electric units, such as the electric motor and the energy storage, are dimensioned. Furthermore, the CO2 reduction potential in real customer use is determined in comparison with conventional vehicles. Finally, investigations are carried out to specify the topology design and preliminary considerations for hybridizing a conventional vehicle with a 48V hybrid system. The emission model results from an empirical approach that also takes into account the effects of engine dynamics on emissions. We analyzed transient engine emissions during representative customer driving profiles and created emission meta-models. The investigation showed a significant difference in emissions when simulating realistic customer driving profiles using the created and verified meta-models, compared to the static approaches commonly used for vehicle simulation.

Keywords: Customer use, dimensioning, hybrid electric vehicles, vehicle simulation, 48V hybrid system.

12542 A Meta-Model for Tubercle Design of Wing Planforms Inspired by Humpback Whale Flippers

Authors: A. Taheri

Abstract:

Inspired by the topology of humpback whale flippers, a meta-model is designed for wing planform design. The network is trained on experimental data using a cascade-forward artificial neural network (ANN) to investigate the effects of the amplitude and wavelength of sinusoidal leading-edge configurations on wing performance. Afterwards, the trained ANN is coupled with a genetic algorithm towards an optimum design strategy. Finally, the flow physics of the problem for an optimized rectangular planform and also for a real flipper-geometry planform is simulated using the Lam-Bremhorst low-Reynolds-number turbulence model with damping wall functions resolving to the wall. Lift and drag coefficients and details of the flow are presented along with comparisons to available experimental data. Results show that the proposed strategy can be adopted with success as a fast estimation tool for performance prediction of wing planforms with wavy leading edges at the preliminary design phase.

Keywords: Humpback whale flipper, cascade-forward ANN, GA, CFD, Bionics.

12541 Optical Flow Technique for Supersonic Jet Measurements

Authors: H. D. Lim, Jie Wu, T. H. New, Shengxian Shi

Abstract:

This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further refined for better robustness and accuracy. Despite these challenges, this supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
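
At its core, the Horn-Schunck scheme iteratively updates the flow from local averages, u <- u_avg - I_x (I_x u_avg + I_y v_avg + I_t) / (alpha^2 + I_x^2 + I_y^2), and analogously for v; a compact sketch on synthetic frames (not the adapted algorithm or Schlieren data of the study):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def horn_schunck(img1, img2, alpha=1.0, n_iter=100):
        """Minimal Horn-Schunck optical flow between two grayscale frames (float arrays)."""
        Ix = np.gradient(img1, axis=1)
        Iy = np.gradient(img1, axis=0)
        It = img2 - img1
        u = np.zeros_like(img1)
        v = np.zeros_like(img1)
        for _ in range(n_iter):
            u_avg = uniform_filter(u, size=3)            # local mean as the smoothness term
            v_avg = uniform_filter(v, size=3)
            num = Ix * u_avg + Iy * v_avg + It
            den = alpha ** 2 + Ix ** 2 + Iy ** 2
            u = u_avg - Ix * num / den
            v = v_avg - Iy * num / den
        return u, v

    frame1 = np.random.rand(64, 64)                      # placeholders for successive Schlieren images
    frame2 = np.roll(frame1, 1, axis=1)                  # synthetic 1-pixel horizontal shift
    u, v = horn_schunck(frame1, frame2)
    print("mean horizontal displacement:", u.mean())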

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer.

12540 Using Trip Planners in Developing Proper Transportation Behavior

Authors: Grzegorz Sierpiński, Ireneusz Celiński, Marcin Staniek

Abstract:

The article discusses multimodal mobility in contemporary societies as a main planning and organizational issue for administrative bodies, and as a real problem in the space of contemporary cities when shaping modern transport systems. The article presents a classification of available resources and initiatives undertaken for developing multimodal mobility. Solutions can be divided into three groups of measures: physical measures in the form of changes to the transport network infrastructure, organizational ones (including transport policy), and information measures. The latter include, in particular, direct support for people travelling in the transport network by providing information about ways of using the available means of transport. A special measure contributing to this end is a trip planner. The article compares several selected planners. It includes a short description of the Green Travelling Project, which aims at developing a planner supporting environmentally friendly solutions in terms of transport network operation. The article summarizes preliminary findings of the project.

Keywords: Mobility, modal split, multimodal trip, multimodal platforms, sustainable transport.

12539 Some Issues on Integrating Telepresence Technology into Industrial Robotic Assembly

Authors: Gunther Reinhart, Marwan Radi

Abstract:

Since the 1940s, many promising telepresence research results have been obtained. However, telepresence technology still has not reached industrial usage. Although human intelligence is necessary for the successful execution of most manual assembly tasks, the human's ability is hindered in some cases, such as the assembly of heavy parts in small/medium lots or prototypes. In such cases of manual assembly, the help of industrial robots is mandatory. Telepresence technology can be considered a solution for performing assembly tasks where human intelligence and the haptic sense are needed to identify and minimize errors during an assembly process, while a robot is needed to carry heavy parts. In this paper, preliminary steps to integrate telepresence technology into industrial robot systems are introduced. The system described here combines the human haptic sense and the industrial robot's capability to perform a manual assembly task remotely using a force-feedback joystick. The mapping between the joystick's degrees of freedom (DOF) and the robot's is introduced. Simulation and experimental results are shown and future work is discussed.

Keywords: Assembly, Force Feedback, Industrial Robot, Teleassembly, Telepresence.

12538 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

Authors: Yanwen Li, Shuguo Xie

Abstract:

In electromagnetic imaging, because the system is diffraction limited, pixel values change slowly near the edges of image targets and also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore result in many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from an adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is detected before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed by using a dictionary-learning algorithm, and the final segmentation results are obtained, which are very close to the gradient target in the original image. Both the experimental results and the simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
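
A hedged sketch of the preliminary Mean-Shift stage, assuming scikit-learn's estimate_bandwidth and MeanShift on per-pixel intensity features; the dictionary-learning edge-recovery stage is not reproduced:

    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    rng = np.random.default_rng(0)
    image = rng.random((32, 32))                              # placeholder gradient image
    features = image.reshape(-1, 1)                           # intensity feature per pixel

    bandwidth = estimate_bandwidth(features, quantile=0.2)    # adaptive bandwidth estimate
    labels = MeanShift(bandwidth=bandwidth).fit_predict(features)
    segments = labels.reshape(image.shape)                    # preliminary segmentation map
    print("number of preliminary segments:", len(np.unique(segments)))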

Keywords: Gradient image, segmentation and extract, mean-shift algorithm, dictionary learning.

12537 Preoperative to Intraoperative Space Registration for Management of Head Injuries

Authors: M. Gooroochurn, M. Ovinis, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs

Abstract:

A registration framework for image-guided robotic surgery is proposed for three emergency neurosurgical procedures, namely Intracranial Pressure (ICP) monitoring, External Ventricular Drainage (EVD) and evacuation of a Chronic Subdural Haematoma (CSDH). The registration paradigm uses CT and white light as modalities. This paper presents two simulation studies for a preliminary evaluation of the registration protocol: (1) the loci of the Target Registration Error (TRE) in the patient's axial, coronal and sagittal views were simulated based on a Fiducial Localisation Error (FLE) of 5 mm, and (2) the actual framework was simulated using projected views from a surface-rendered CT model to represent white light images of the patient. Craniofacial features were employed as the registration basis to map the CT space onto the simulated intraoperative space. Photogrammetry experiments on an artificial skull were also performed to benchmark the results obtained from the second simulation. The results of both simulations show that the proposed protocol can provide 5 mm accuracy for these neurosurgical procedures.

Keywords: Image-guided Surgery, Multimodality Registration, Photogrammetry, Preoperative to Intraoperative Registration.

12536 An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference

Authors: Ayman A. Aly, Abdallah A. Alshnnaway

Abstract:

The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image due to noise and those due to image structure. Edges give the image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks. Since most noise filtering behaves like a low-pass filter, the blurring of edges and loss of detail seem a natural consequence, and techniques to remedy this inherent conflict often generate new noise during enhancement. In this work a new fuzzy filter is presented for noise reduction of images corrupted with additive noise. The filter consists of three stages: (1) define fuzzy sets in the input space to compute a fuzzy derivative for eight different directions, (2) construct a set of IF-THEN rules to perform fuzzy smoothing according to the contributions of neighboring pixel values, and (3) define fuzzy sets in the output space to obtain the filtered and edged image. Experimental results show the feasibility of the proposed approach with two-dimensional objects.

Keywords: Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction, two dimensional mechanical images.

12535 The Study of Cost Accounting in S Company Based On TDABC

Authors: Heng Ma

Abstract:

Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a new industry and an accounting system has not yet been established; the current financial accounting of third-party warehousing logistics mainly follows the traditional way of thinking and is only able to provide the total cost of the entire enterprise for the accounting period, unable to reflect indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research and case analysis to reflect cost allocation by building a third-party logistics costing model using Time-Driven Activity-Based Costing (TDABC), and takes S company as an example to account for and control warehousing logistics cost. Based on the idea that products consume activities and activities consume resources, TDABC takes time as the main cost driver and uses time-consuming equations to assign resources to cost objects. In S company, the cost objects are three warehouses engaged in warehousing and transportation services (the second warehouse serves as a transport point). These three warehouses each comprise five departments (Business Unit, Production Unit, Settlement Center, Security Department and Equipment Division), and the activities in these departments are classified into in/out-of-storage forecasting, in/out-of-storage or transit, and safekeeping work. By computing the capacity cost rate and building the time-consuming equations, the paper calculates the final operating cost so as to reveal the real cost. The numerical analysis results show that TDABC can accurately reflect the cost allocation to service customers and reveal the spare capacity cost of the resource centers, verifying the feasibility and validity of TDABC in third-party logistics cost accounting. It encourages enterprises to focus on customer relationship management and to reduce idle cost in order to strengthen the cost management of third-party logistics enterprises.
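
The core of TDABC reduces to two quantities per resource pool: the capacity cost rate (cost of supplied capacity divided by practical capacity in minutes) and time equations that assign minutes to each cost object. A sketch with hypothetical figures, not S company's data:

    def capacity_cost_rate(total_resource_cost, practical_capacity_minutes):
        """Cost per minute of supplied capacity for a department (e.g. a warehouse unit)."""
        return total_resource_cost / practical_capacity_minutes

    def activity_cost(rate, base_minutes, extra_minutes_per_pallet, pallets):
        """Time equation: minutes consumed = base + per-pallet increment; cost = minutes * rate."""
        minutes = base_minutes + extra_minutes_per_pallet * pallets
        return minutes * rate

    # hypothetical monthly figures for one warehouse department
    rate = capacity_cost_rate(total_resource_cost=84000.0, practical_capacity_minutes=42000.0)
    print("in-storage handling cost:", activity_cost(rate, base_minutes=5.0,
                                                     extra_minutes_per_pallet=1.5, pallets=20))
    # idle (spare) capacity cost = (practical capacity - used minutes) * rate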

Keywords: Third-party logistics enterprises, TDABC, cost management, S company.

12532 Entropy Minimization Applied to Rotary Dryers to Reduce Energy Consumption

Authors: I. O. Nascimento, J. T. Manzi

Abstract:

The drying process is an important operation in the chemical industry and is widely used in the food, grain and fertilizer industries. However, because it demands considerable energy consumption, such a process requires a deep energetic analysis in order to reduce operating costs. This paper deals with thermodynamic optimization applied to rotary dryers based on minimization of entropy production, aiming to reduce energy consumption. To do this, the mass, energy and entropy balances were used to develop a relationship that represents the rate of entropy production. The use of the Second Law of Thermodynamics is essential because it takes into account the constraints of nature. Once the entropy production rate is minimized, optimal operating conditions can be established and the process can obtain a substantial gain in energy saving. The minimization strategy was carried out using classical methods such as Lagrange multipliers and was implemented on the MATLAB platform. As expected, the preliminary results reveal a significant energy saving from applying the optimal parameters found by the entropy minimization procedure. It is important to note that this method is easy to implement and low in cost.
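
In practice the constrained minimisation of the entropy production rate can also be carried out numerically; the sketch below uses SciPy's SLSQP with a toy entropy-production expression and heat-duty constraint that merely stand in for the dryer model and Lagrange-multiplier derivation of the paper:

    import numpy as np
    from scipy.optimize import minimize

    def entropy_production(x):
        # illustrative stand-in: x = [gas inlet temperature (K), gas mass flow (kg/s)]
        T, m = x
        return m * (T / 300.0 - 1.0 - np.log(T / 300.0))    # toy convex expression, not the dryer model

    def drying_duty(x):
        T, m = x
        return m * 1.005 * (T - 300.0) - 50.0                # constraint: required heat duty of 50 kW

    result = minimize(entropy_production, x0=[400.0, 1.0],
                      constraints=[{"type": "eq", "fun": drying_duty}],
                      bounds=[(310.0, 600.0), (0.1, 5.0)], method="SLSQP")
    print("optimal [T, m]:", result.x.round(2), "entropy production:", round(result.fun, 4))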

Keywords: Drying, entropy minimization, modeling dryers, thermodynamic optimization.

12533 3D Frictionless Contact Case between the Structure of E-Bike and the Ground

Authors: Lele Zhang, HuiLeng Choo, Alexander Konyukhov, Shuguang Li

Abstract:

China is currently the world's largest producer and distributor of electric bicycles (e-bikes). The increasing number of e-bikes on the road is accompanied by rising injuries and even deaths of e-bike riders; therefore, there is a growing need to improve the safety structure of e-bikes. This 3D frictionless contact analysis is preliminary, but necessary, work for further structural design improvement of an e-bike. The contact analysis between the e-bike and the ground was carried out as follows: firstly, the penalty method was illustrated and derived from the simplest spring-mass system, as it is one of the most common methods for satisfying the frictionless contact condition; secondly, an ANSYS static analysis was carried out to verify finite element (FE) models with a contact pair (without friction) between the e-bike and the ground; finally, an ANSYS transient analysis was used to obtain the penetration p(u) of the e-bike with respect to the ground. Results obtained from the simulation agree with the estimates from the theoretical method. In future work, a protective shell will be designed following the stability criteria and added to the frame of the e-bike. Simulation of side falling of the improved safety structure of the e-bike will be confirmed with experimental data.
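
To illustrate the penalty idea on the simplest spring-mass-type system mentioned above, the sketch below drops a 1-D mass onto rigid ground and resists the penetration p with a force epsilon * p; the parameter values are illustrative only:

    def penalty_contact_drop(mass=10.0, epsilon=1.0e6, dt=1.0e-4, t_end=0.5):
        """1-D mass falling onto rigid ground at x = 0; penalty force resists penetration p = -x."""
        g = 9.81
        x, v = 1.0, 0.0                       # initial height (m) and velocity (m/s)
        max_penetration = 0.0
        for _ in range(int(t_end / dt)):
            p = max(0.0, -x)                  # penetration depth into the ground
            f_contact = epsilon * p           # penalty force, proportional to the violation
            a = -g + f_contact / mass
            v += a * dt                       # explicit (symplectic Euler) time integration
            x += v * dt
            max_penetration = max(max_penetration, p)
        return max_penetration

    # a stiffer penalty parameter enforces the frictionless contact constraint more closely
    print("max penetration:", penalty_contact_drop(epsilon=1.0e5))
    print("max penetration:", penalty_contact_drop(epsilon=1.0e7))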

Keywords: Frictionless contact, penalty method, e-bike, finite element.
