Search results for: codes result.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3330


750 Evaluation of Numerical Modeling of Jet Grouting Design Using in situ Loading Test

Authors: Reza Ziaie Moayed, Ehsan Azini

Abstract:

Jet grouting (JG) is one of the methods of improving soil and increasing its strength and bearing capacity, in which high-pressure water or grout is injected into the soil through nozzles. During this process, part of the soil and grout particles comes out of the drill borehole, while the rest is mixed with the grout in place; as a result, a mass of modified soil is created. The purpose of this method is to turn the soil into a mixture of soil and cement, commonly known as "soil-cement". In this paper, the principles of high-pressure injection and the effective parameters in the JG method are first described. Then, the tests on samples taken from the formed columns, exposed by excavating around the soil-cement columns, as well as the static loading test on a created column, are discussed. The paper also presents the soil behavior models used for numerical modeling in PLAXIS software. The purpose of this paper is to evaluate the results of numerical modeling against in-situ static loading tests. The results indicate an acceptable agreement between the tests mentioned and the modeling results. Modeling with this software can therefore be used as an appropriate option for assessing the technical feasibility of soil improvement using JG.

Keywords: Jet grouting column, Soil improvement, Numerical modeling, In-situ loading test.

749 Treatment or Re-Victimizing the Victims

Authors: Juliana Panova

Abstract:

Severe symptoms, such as dissociation, depersonalization, self-mutilation, and suicidal ideations and gestures, are the main reasons for a person to be diagnosed with Borderline Personality Disorder (BPD) and admitted to an inpatient psychiatric hospital. However, these symptoms are also indicators of a severe traumatic history, as indicated by the extensive research on the topic. Unfortunately, patients with such a clinical presentation are often treated repeatedly only for their symptomatic behavior, while the main cause of their suffering, the trauma itself, is usually left unaddressed therapeutically. All of the highly structured, replicable, and manualized treatments lack recognition of the uniqueness of the person and fail to respect his or her right to experience and react in an idiosyncratic manner. Thus the communicative and adaptive meaning of such symptomatic behavior is missed. Only its pathological side is recognized and subjected to correction and stigmatization, and the message that the person is damaged goods in need of fixing is conveyed once again. This time, however, the message is even more convincing for the victim, because it is sent by mental health providers, who have the credibility to make such a judgment. The result is a revolving door of very expensive hospitalizations for only a temporary and patchy fix. In this way patients, once victims of abuse and hardship, are left invalidated, and their re-victimization is perpetuated in their search for understanding and help.

Keywords: borderline personality disorder (BPD), complex PTSD, integrative treatment of trauma, re-victimization of trauma victims.

748 Exergetic and Life Cycle Assessment Analyses of Integrated Biowaste Gasification-Combustion System: A Study Case

Authors: Anabel Fernandez, Leandro Rodriguez-Ortiz, Rosa Rodríguez

Abstract:

Due to the negative impact of fossil fuels, renewable energies are promising sources for limiting the global temperature rise and damage to the environment. Technology development is therefore focused on obtaining energy products from renewable sources. In this study, a thermodynamic model including an exergy balance and a subsequent Life Cycle Assessment (LCA) were carried out for four subsystems of the integrated gasification-combustion of pinewood. Results of the exergy analysis and LCA showed the process feasibility in terms of exergy efficiency and global energy efficiency of the life cycle (GEELC). Moreover, the energy return on investment (EROI) index was calculated. The global exergy efficiency was 67%. For the pretreatment, reaction, cleaning, and electric generation subsystems, the results were 85%, 59%, 87%, and 29%, respectively. The LCA indicated that the emissions from electric generation caused the most damage to the atmosphere, water, and soil. GEELC was 31.09% for the global process, suggesting the environmental feasibility of an integrated gasification-combustion system. The EROI was 3.15, which indicates the sustainability of the process.
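
For reference, the efficiency and return indices quoted above are conventionally defined as output-to-input ratios; a sketch of the standard definitions is given below (the exact system boundaries used in the study are not stated in the abstract):

```latex
\eta_{ex} = \frac{\dot{E}x_{\text{products}}}{\dot{E}x_{\text{inputs}}},
\qquad
\mathrm{EROI} = \frac{E_{\text{delivered over the life cycle}}}{E_{\text{invested over the life cycle}}}
```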

Keywords: Exergy analysis, Life Cycle Assessment, LCA, renewability, sustainability.

747 TOSOM: A Topic-Oriented Self-Organizing Map for Text Organization

Authors: Hsin-Chang Yang, Chung-Hong Lee, Kuo-Lung Ke

Abstract:

The self-organizing map (SOM) model is a well-known neural network model with a wide range of applications. The main characteristics of the SOM are two-fold, namely dimension reduction and topology preservation. Using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With such characteristics, the SOM has usually been applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of neurons must be known prior to training, and these are difficult to determine. Several schemes have been proposed to tackle this deficiency, such as the growing/expandable SOM, the hierarchical SOM, and the growing hierarchical SOM. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, these schemes adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts the number as well as the structure of the map according to the identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according both to the topics and to the characteristics of the neurons. The preliminary experiments give promising results and demonstrate the plausibility of the method.
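
The following is a minimal sketch of the classic SOM update the scheme above builds on (best-matching-unit search plus a Gaussian neighborhood pull). The topic-driven growth and hierarchy rules of the paper are not reproduced; the grid size, decay schedule and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, rows=4, cols=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal classic SOM: find the best-matching unit, then pull its
    neighborhood towards each sample (illustration only)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    grid = np.array([[r, c] for r in range(rows) for c in range(cols)]).reshape(rows, cols, 2)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # learning-rate decay
        sigma = sigma0 * np.exp(-t / epochs)    # neighborhood shrinkage
        for x in data:
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            g = np.linalg.norm(grid - np.array(bmu), axis=2)
            h = np.exp(-(g ** 2) / (2 * sigma ** 2))        # Gaussian neighborhood
            weights += lr * h[..., None] * (x - weights)
    return weights

# Usage: map 100 random 10-dimensional "document vectors" onto a 4x4 grid.
docs = np.random.default_rng(1).random((100, 10))
print(train_som(docs).shape)  # (4, 4, 10)
```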

Keywords: Self-organizing map, topic identification, learning algorithm, text clustering.

746 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In the PCM calculation, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors of 10% to 15%. This paper provides a revised PCM calculation method that can correct this error. The revised method employs the energy, momentum and continuity equations to acquire the needed parameters and replaces the equal-static-pressure assumption. Based on the revised method, the PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method, and the predictions are used to evaluate the method's effectiveness. Validation against experimental data shows that, although small deviations occur, the calculated results agree well with the experiments, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.

745 Two-Stage Launch Vehicle Trajectory Modeling for Low Earth Orbit Applications

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents a study of the trajectory of a two-stage launch vehicle. The study includes the dynamic responses of the motion parameters as well as the variation of the angles affecting the orientation of the launch vehicle (LV). LV dynamic characteristics, including the state vector variation with the corresponding altitude and velocity at the separation of the different LV stages, as well as the angle of attack and flight path angle, are also discussed. The drop zone of the first stage and the jettisoning of the fairing are introduced into the mathematical model to study their effects. To increase the accuracy of the LV model, an atmospheric model is used that takes into consideration the geographical location and the solar flux values related to the date and time of launch; an accurate atmospheric model improves the calculation of the Mach number, which affects the drag force on the LV. The mathematical model is implemented in MATLAB/Simulink. Available experimental data are compared with the results obtained from the theoretical computation model. The comparison shows good agreement, which supports the validity of the developed simulation model; the maximum error observed was generally less than 10%, leaving room for future work to reduce it further.
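
As an illustration of the kind of point-mass trajectory integration described above, the sketch below propagates altitude, speed and flight-path angle for one stage in a vertical plane. It is not the authors' model: the exponential atmosphere, constant drag coefficient and the stage data in the usage line are assumptions, and staging, fairing jettison and the date/location-dependent atmosphere are omitted.

```python
import numpy as np

def simulate_stage(m0, mdot, thrust, burn_time, cd=0.3, area=1.0, dt=0.1,
                   state=(0.0, 1.0, np.deg2rad(89.0))):
    """Flat-Earth, point-mass gravity-turn sketch for a single burning stage.
    state = (altitude [m], speed [m/s], flight-path angle [rad])."""
    g = 9.81
    h, v, gamma = state
    m, t = m0, 0.0
    while t < burn_time:
        rho = 1.225 * np.exp(-h / 8500.0)            # simple exponential atmosphere
        drag = 0.5 * rho * v**2 * cd * area
        a = (thrust - drag) / m - g * np.sin(gamma)  # acceleration along velocity
        gamma -= (g / v) * np.cos(gamma) * dt        # gravity-turn rotation
        v += a * dt
        h += v * np.sin(gamma) * dt
        m -= mdot * dt
        t += dt
    return h, v, np.rad2deg(gamma), m

# Usage: a notional first stage (hypothetical numbers).
print(simulate_stage(m0=50000, mdot=150, thrust=9e5, burn_time=120))
```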

Keywords: Launch vehicle modeling, launch vehicle trajectory, mathematical modeling, MATLAB-Simulink.

744 Integrated Waste-to-Energy Approach: An Overview

Authors: Tsietsi J. Pilusa, Tumisang G. Seodigeng

Abstract:

This study evaluates the benefits of advanced waste management practices in unlocking waste-to-energy opportunities within the solid waste industry. The key drivers of sustainable waste management practices, specifically with respect to packaging waste-to-energy technology options, are discussed. The success of a waste-to-energy system depends significantly on the appropriateness of the available technologies, including those that are well established as well as those that are less so. There are hard and soft interventions to be considered when packaging an integrated waste treatment solution. Technology compatibility with variation in feedstock (waste) quality and quantity remains a key factor. These factors influence the technology's reliability in terms of production efficiency and product consistency, which in turn drives the supply and demand network. Waste treatment technologies rely on waste material as feedstock, and the feedstock varies in quality and quantity depending on several factors; hence, the technology can fail as a result. It is critical to design an advanced waste treatment technology within an integrated approach to minimize the possibility of technology failure due to unpredictable feedstock quality, quantities, conversion efficiencies, and inconsistent product yield or quality. An integrated waste-to-energy approach offers a secure system design that considers sustainable waste management practices.

Keywords: Emerging markets, evaluation tool, interventions, waste treatment technologies.

743 Effect of High-Heeled Shoes on Gait: A Micro-Electro-Mechanical-Systems Based Approach

Authors: Harun Sumbul, Orhan Ozyurt

Abstract:

The accelerations that shoes transmit to the body should be known in order to prevent balance problems and degradation of body shape, and to reduce energy expenditure. This study investigates the effects of shoe heel height on the human body. The study group consisted of five women (aged 27-32 years) with different characteristics, and five shoes with different heel heights (1, 3.5, 5, 7 and 9 cm) were used. The participants wore the shoes and walked along a 20-meter course. The accelerations created by the shoes were measured along three axes (30,270 accelerometer readings) and analyzed. The results show that while walking with high-heeled shoes the foot is lifted higher, so more effort is expended and greater loads occur at the ankles and joints. Since high-heeled shoes cause greater accelerations, women wearing them tend to pay more attention when taking a step. Consequently, for foot and body health, the shoe heel should be designed to absorb the reaction from the ground; high heels disrupt the structure of the foot and damage the body shape. In this respect, the study offers a notable accelerometer-based method in the literature for assessing the effect of high-heeled shoes on gait.
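
A small sketch of how tri-axial accelerometer samples can be reduced to simple intensity metrics is given below. The study's actual processing pipeline is not described in the abstract, so the sampling rate, the gravity-removal step and the synthetic signals are assumptions for illustration only.

```python
import numpy as np

def gait_metrics(ax, ay, az, fs=100.0):
    """Toy summary of tri-axial accelerometer data (fs is an assumed sampling rate in Hz)."""
    a = np.sqrt(ax**2 + ay**2 + az**2)      # resultant acceleration magnitude
    a_dyn = a - a.mean()                    # crude removal of the static (gravity) offset
    rms = np.sqrt(np.mean(a_dyn**2))        # overall movement intensity
    peak = np.max(np.abs(a_dyn))            # largest shock transmitted in the trial
    return {"rms_g": rms, "peak_g": peak, "duration_s": len(a) / fs}

# Usage with synthetic signals standing in for one 20 m walk.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.01)
print(gait_metrics(0.1 * np.sin(2 * np.pi * 2 * t) + 0.02 * rng.standard_normal(t.size),
                   0.05 * rng.standard_normal(t.size),
                   1.0 + 0.3 * np.sin(2 * np.pi * 4 * t)))
```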

Keywords: Acceleration, sensor, gait analysis, high shoe heel, micro-electro-mechanical-systems.

742 Integrating Artificial Neural Network and Taguchi Method on Constructing the Real Estate Appraisal Model

Authors: Mu-Yen Chen, Min-Hsuan Fan, Chia-Chen Chen, Siang-Yu Jhong

Abstract:

In recent years, real estate prediction or valuation has been a topic of discussion in many developed countries. Improper hype created by investors leads to fluctuating real estate prices, affecting many consumers who wish to purchase their own homes. Therefore, scholars from various countries have conducted research on real estate valuation and prediction. Using the back-propagation neural network, which has become popular in recent years, together with the orthogonal array of the Taguchi method, this study aimed to find the optimal parameter combination among the levels of the orthogonal array after the system presented different parameter combinations, so that the artificial neural network obtained the most accurate results. The experimental results demonstrated that the method presented in this study performs better than traditional machine learning. They also showed that the proposed model has the best predictive effect and can significantly reduce simulation time: the best predictive results can be found more efficiently, with fewer experiments. Users can thus predict a real estate transaction price that is not far from current actual prices.
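
The sketch below illustrates the general idea of using a Taguchi orthogonal array to pick neural-network parameter combinations: an L9(3^4) array covers four factors at three levels in nine runs instead of 81 exhaustive combinations. The factors, levels and data are hypothetical; the study's actual parameters are not listed in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Standard L9(3^4) orthogonal array: 9 runs over 4 factors at 3 levels each.
L9 = [(0,0,0,0),(0,1,1,1),(0,2,2,2),
      (1,0,1,2),(1,1,2,0),(1,2,0,1),
      (2,0,2,1),(2,1,0,2),(2,2,1,0)]

# Hypothetical factor levels (the abstract does not list the real ones).
hidden = [(8,), (16,), (32,)]
lr     = [1e-3, 1e-2, 1e-1]
alpha  = [1e-4, 1e-3, 1e-2]
iters  = [200, 500, 1000]

def taguchi_search(X, y):
    """Evaluate only the 9 orthogonal-array runs and keep the best scoring one."""
    best = None
    for a, b, c, d in L9:
        model = MLPRegressor(hidden_layer_sizes=hidden[a], learning_rate_init=lr[b],
                             alpha=alpha[c], max_iter=iters[d], random_state=0)
        score = cross_val_score(model, X, y, cv=3,
                                scoring="neg_mean_absolute_error").mean()
        if best is None or score > best[0]:
            best = (score, (a, b, c, d))
    return best

# Usage with synthetic "transaction" data.
rng = np.random.default_rng(0)
X = rng.random((150, 6))
y = X @ rng.random(6) + 0.05 * rng.standard_normal(150)
print(taguchi_search(X, y))
```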

Keywords: Artificial Neural Network, Taguchi Method, Real Estate Valuation Model.

741 Design of Direct Power Controller for a High Power Neutral Point Clamped Converter Using Real Time Simulator

Authors: Amin Zabihinejad, Philippe Viarouge

Abstract:

In this paper, direct power control (DPC) strategies are investigated for controlling a high-power AC/DC converter with a time-varying load. This converter is composed of a three-level, three-phase neutral point clamped (NPC) converter as the rectifier and an H-bridge four-quadrant current-controlled converter. In high-power applications, the controller must not only regulate the desired outputs but also reduce the level of distortion injected into the network by the converter. For this reason, and because of the nonlinearity of the power electronic converter, conventional controllers cannot achieve appropriate responses. In this research, a precise mathematical analysis is employed to design an appropriate controller for the time-varying load. A DPC controller is proposed and simulated using MATLAB/Simulink. To verify the simulation results, a real-time simulator (OPAL-RT) is employed. The dynamic response and stability of the high-power NPC converter with a variable load are investigated and compared with conventional controllers using the real-time simulator. The results show that the DPC controller is more stable and has more precise outputs than the conventional controller.
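
For context, DPC schemes typically switch the converter based on instantaneous active and reactive power estimated in the stationary frame; a sketch of the standard expressions is given below (the paper's exact formulation is not stated in the abstract):

```latex
p = \tfrac{3}{2}\left(v_\alpha i_\alpha + v_\beta i_\beta\right),
\qquad
q = \tfrac{3}{2}\left(v_\beta i_\alpha - v_\alpha i_\beta\right)
```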

Keywords: Direct Power Control, Three Level Rectifier, Real Time Simulator, High Power Application.

740 Optimal Manufacturing Scheduling for Dependent Details Processing

Authors: Ivan C. Mustakerov, Daniela I. Borissova

Abstract:

The increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. The paper presents an optimization approach to manufacturing scheduling for dependent details (parts) processing with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of details processing, it is possible to use straightforward variable restrictions to satisfy different technological requirements and to formulate optimization tasks for multiple numbers of details and machines that are easy to understand and solve. A case study example is solved for seven base moldings for CNC metalworking machines, processed on five different machines with a given processing order among details and machines and known processing time durations. Solving the linear optimization task yields the optimal manufacturing schedule minimizing the overall processing time. The manufacturing schedule defines the delivery moments of the moldings, thus minimizing storage costs and ensuring that mounting due times are met. The proposed optimization approach is based on a real manufacturing plant problem. Different processing schedule variants for different technological restrictions were defined and implemented in the practice of the Bulgarian company RAIS Ltd. The proposed approach can be generalized to other job shop scheduling problems for different applications.
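
A minimal sketch of this kind of start-time formulation, written with the open-source PuLP solver, is shown below. The two details, two machines and processing times are hypothetical stand-ins; the paper's actual moldings, machines, times and technological restrictions are not reproduced.

```python
from pulp import LpProblem, LpMinimize, LpVariable, PULP_CBC_CMD

# (detail, machine): processing time; each detail visits machines in route order.
ops = {("D1", "M1"): 3, ("D1", "M2"): 2,
       ("D2", "M1"): 2, ("D2", "M2"): 4}
routes = {"D1": ["M1", "M2"], "D2": ["M1", "M2"]}

prob = LpProblem("details_schedule", LpMinimize)
start = {k: LpVariable(f"s_{k[0]}_{k[1]}", lowBound=0) for k in ops}
makespan = LpVariable("makespan", lowBound=0)
prob += makespan  # objective: overall processing time

for d, seq in routes.items():
    for a, b in zip(seq, seq[1:]):                       # technological order within a detail
        prob += start[(d, b)] >= start[(d, a)] + ops[(d, a)]
    prob += makespan >= start[(d, seq[-1])] + ops[(d, seq[-1])]

# Fixed processing order of details on each shared machine (D1 before D2 here);
# with a known order the no-overlap condition stays linear, as described above.
for m in ["M1", "M2"]:
    prob += start[("D2", m)] >= start[("D1", m)] + ops[("D1", m)]

prob.solve(PULP_CBC_CMD(msg=False))
print({v.name: v.value() for v in prob.variables()})
```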

Keywords: Optimal manufacturing scheduling, linear programming, metalworking machines production, dependent details processing.

739 Towards an Enhanced Stochastic Simulation Model for Risk Analysis in Highway Construction

Authors: Anshu Manik, William G. Buttlar, Kasthurirangan Gopalakrishnan

Abstract:

Over the years, there has been a growing trend towards quality-based specifications in highway construction. In many Quality Control/Quality Assurance (QC/QA) specifications, the contractor is primarily responsible for quality control of the process, whereas the highway agency is responsible for acceptance testing of the product. A cooperative investigation was conducted in Illinois over several years to develop a prototype End-Result Specification (ERS) for asphalt pavement construction. The final characteristics of the product are stipulated in the ERS, and the contractor is given considerable freedom in achieving those characteristics. The risk for the contractor or agency depends on how the acceptance limits and processes are specified. Stochastic simulation models are very useful for estimating and analyzing payment risk in ERS systems, and they form an integral part of Illinois' prototype ERS system. This paper describes the development of an innovative methodology to estimate the variability components in in-situ density, air voids and asphalt content data from ERS projects. The information gained from this is crucial for simulating these ERS projects to estimate and analyze the payment risks associated with asphalt pavement construction. However, these methods require at least two parties to conduct tests on all the split samples obtained according to the sampling scheme prescribed in the present ERS implemented in Illinois.
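
One standard way split-sample data are used to separate testing variability from process (materials) variability is sketched below with synthetic numbers; it illustrates the general idea only and is not the paper's methodology.

```python
import numpy as np

# Synthetic split-sample data: two parties test splits of the same lots.
rng = np.random.default_rng(0)
true_density = rng.normal(93.0, 1.2, size=40)              # lot-to-lot (process) variation
contractor = true_density + rng.normal(0, 0.4, size=40)    # contractor's test on each split
agency     = true_density + rng.normal(0, 0.4, size=40)    # agency's test on the other split

var_testing = np.var(contractor - agency, ddof=1) / 2.0    # Var(d) = 2 * sigma_test^2
var_total   = np.var(contractor, ddof=1)
var_process = max(var_total - var_testing, 0.0)
print(f"testing sd ~ {var_testing**0.5:.2f}, process sd ~ {var_process**0.5:.2f}")
```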

Keywords: Asphalt Pavement, Risk Analysis, Stochastic Simulation, QC/QA.

738 Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties

Authors: Payel Ghosh, Rama Chandra Pradhan, Sabyasachi Mishra

Abstract:

Jamun (Syzygium cuminii L.) is an important indigenous minor fruit with high medicinal value. Jamun cultivation is unorganized, and a huge portion of the crop is lost every year. The perishable nature of the fruit makes its postharvest management further difficult. Due to the strong cell wall structure of pectin-protein bonds and the hard seeds, extraction of the juice is difficult. Enzymatic treatment has been used commercially to improve juice quality with high yield. The objective of the study was to identify the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was carried out on the basis of physicochemical properties, nutritional properties, sensory quality and cost estimation. According to the quality aspects, cost analysis and sensory evaluation, the optimal enzymatic treatment was obtained with pectinase from an Aspergillus aculeatus strain. The optimum treatment condition was 44 °C for 80 minutes at a concentration of 0.05% (w/w). Under these conditions, a yield of 75% was obtained, with a turbidity of 32.21 NTU, clarity of 74.39 %T, polyphenol content of 115.31 mg GAE/g and protein content of 102.43 mg/g, with a significant difference in overall acceptability.

Keywords: Jamun, enzymatic treatment, physicochemical property, sensory analysis, optimization.

737 Load Discontinuity in Shock Response and Its Remedies

Authors: Shuenn-Yih Chang, Chiu-Li Huang

Abstract:

It has been shown that a load discontinuity at the end of an impulse will result in an extra impulse, and hence an extra amplitude distortion, if a step-by-step integration method is employed to compute the shock response. In order to overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, where an external momentum is used in the solution of the momentum equation of motion. Since the external momentum is the result of the time integration of the external force, the problem of load discontinuity automatically disappears. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still use the step size determined from general considerations; this works because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different load values at the integration point of the load discontinuity as the loading input, rather than either value alone. The basic motivation of this remedy originates from the concept of zero loading-input error associated with the integration point of the load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.
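
The sketch below illustrates the third remedy on an undamped single-degree-of-freedom system integrated with the average-acceleration method: at the step where a rectangular impulse ends, the load input is taken either as one of the two discontinuous values or as their average. It is an illustration only, with assumed system and pulse parameters, not the paper's own formulation.

```python
import numpy as np

def sdof_peak(load_at, m=1.0, k=4 * np.pi**2, dt=0.05, n=400):
    """Average-acceleration (Newmark) integration of an undamped SDOF system;
    load_at(t) supplies the sampled load at each integration point."""
    u, v = 0.0, 0.0
    a = (load_at(0.0) - k * u) / m
    peak = 0.0
    for i in range(1, n + 1):
        p = load_at(i * dt)
        k_eff = k + 4.0 * m / dt**2
        p_eff = p + m * (4.0 * u / dt**2 + 4.0 * v / dt + a)
        u_new = p_eff / k_eff
        v = 2.0 * (u_new - u) / dt - v
        a = (p - k * u_new) / m
        u = u_new
        peak = max(peak, abs(u))
    return peak

td = 0.5  # rectangular impulse that ends exactly at an integration point
one_sided = sdof_peak(lambda t: 1.0 if t <= td + 1e-9 else 0.0)
averaged  = sdof_peak(lambda t: 0.5 if abs(t - td) < 1e-9 else (1.0 if t < td else 0.0))
print(one_sided, averaged)  # the peak responses differ slightly due to the extra impulse
```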

Keywords: Dynamic analysis, load discontinuity, shock response, step-by-step integration.

736 Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications

Authors: Sofien Chtourou, Mohamed Chtourou, Omar Hammami

Abstract:

Embedded systems need to respect stringent real-time constraints. Various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor may result in a cache hit, where the data are available, or in a cache miss, where the data must be fetched from external memory with an additional delay. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data appropriately without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to capture the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions, and therefore millions of addresses have to be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various neural network architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.

Keywords: Address, data set, memory, prediction, recurrent neural network.

735 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favourable conditions for students' comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. This paper describes a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students' comprehension. Essentially, this approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student's subjective experience (value-based emotional, contextual, procedural and communicative) during the educational process; (3) linking together different ways of presenting mathematical information; (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modelling. The article describes approaches to the practical use of these foundational concepts. The primary goal was to identify how the proposed methods and techniques influence understanding of the material used in teaching mathematics. The study included an experiment in which 256 secondary school students took part: 142 in the study group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, "Derivative" and "Trigonometric functions", was evaluated. Control group participants were taught using traditional methods. Students in the study group were taught using the holistic method: under the teacher's guidance, they carried out assignments designed to establish linkages between a notion's characteristics and to convert information from one mode of presentation to another, as well as assignments that required the ability to operate with all modes of presentation. Identification, accounting for and transformation of subjective experience were associated with methods of stimulating the emotional-value component of the studied mathematical content (discussions of lesson titles, assignments aimed to create study dominants, performing theme-related physical exercise ...). The use of techniques that form inter-subject notions based on linkages between real, perceptual and conceptual mathematical spaces proved to be of special interest to the students. The results of the experiment were analysed by presenting the students in each group with a final test on each of the studied topics. The test included assignments that required building real situational models. Statistical analysis was used to aggregate the test results. Pearson's chi-squared criterion was used to assess the statistical significance of the results (pass/fail of the modelling test). A significant difference was revealed (p < 0.001), which allowed the conclusion that students in the study group showed better comprehension of mathematical information than those in the control group. The total number of completed assignments per student was also analysed, with average results calculated for each group. The statistical significance of the differences in the quantitative criterion (number of completed assignments) was determined using Student's t-test, which showed that students in the study group completed significantly more assignments than those in the control group (p = 0.0001). The authors thus conclude that the observed increase in the level of comprehension of the study material resulted from the methods and techniques applied.

Keywords: Comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions.

734 Study on Hysteresis in Sustainable Two-Layer Circular Tube under a Lateral Compression Load

Authors: Ami Nomura, Ken Imanishi, Yukinori Taniguchi, Etsuko Ueda, Tadahiro Wada, Shinichi Enoki

Abstract:

Recently, there have been many earthquakes in Japan, and it is necessary to promote seismic isolation devices for buildings. Such devices have hardly spread to detached houses because they are very expensive, so a low-cost seismic isolation device for detached houses should be developed. We propose a new seismic isolation device that uses a two-layer circular tube as a unit. If hysteresis is produced in the two-layer circular tube under a lateral compression load, the tube can provide energy absorbing capacity. For hysteresis to occur, the outer layer and the inner layer must come into contact. We have previously reported how the inner layer comes into contact with the outer layer from an analytical perspective based on the mechanics of materials, and we have clarified that the inner layer contacts the outer layer under a lateral compression load. In this paper, the contact area between the outer layer and the inner layer under a lateral compression load is explored using FEA. Changing the inner layer's thickness is expected to be effective in increasing the contact area, so the shape of the inner layer was varied to change its thickness. As a result, the contact area changes depending on the inner layer's thickness. Additionally, experiments were carried out to check whether hysteresis actually occurs; as a consequence, hysteresis was revealed in the two-layer circular tube under this loading condition.

Keywords: Contact area, energy absorbing capacity, hysteresis, seismic isolation device.

733 Methane Production from Biomedical Waste (Blood)

Authors: Fatima M. Kabbashi, Abdalla M. Abdalla, Hussam K. Hamad, Elias S. Hassan

Abstract:

This study investigates the production of renewable energy (biogas) from hazardous biomedical waste (blood) and its eco-friendly disposal. Biogas is produced by the bacterial anaerobic digestion of the biomaterial (blood). During the digestion process, bacterial feeding breaks down the chemical bonds of the biomaterial and changes its features; by the end of the digestion (biogas production), the remains become manure. This provides an economical and eco-friendly disposal route for the hazardous biomedical waste (blood). The samples (whole blood, red blood cells 'RBCs', blood platelets and fresh frozen plasma 'FFP') were collected and measured in terms of carbon-to-nitrogen (C/N) ratio and total solids, then digested in three connected flasks, with the gas collected using the water displacement method. The trial results showed that the platelet and FFP samples failed to produce flammable gas, although a gas analyzer showed the presence of the following gases: CO, HC, CO₂, and NOx. In contrast, the whole blood and RBC samples produced flammable gases: methane-nitrous CH₃NO (99.45%), which burns with a blue flame, and carbon dioxide CO₂ (0.55%), which burns with a red/yellow flame. Methane-nitrous is sometimes used as a fuel for rockets, some aircraft and racing cars.

Keywords: Renewable energy, biogas, biomedical waste, blood, anaerobic digestion, eco-friendly disposal.

732 Texture Feature Extraction of Infrared River Ice Images using Second-Order Spatial Statistics

Authors: Bharathi P. T, P. Subashini

Abstract:

Ice cover has a significant impact on rivers, as it affects the ice melting capacity, which can result in flooding, restrict navigation, and modify the ecosystem and microclimate. River ice is made up of different ice types with varying thickness, so surveillance of river ice plays an important role. River ice types are captured using an infrared imaging camera, which can capture images even at night. In this paper, the river ice infrared texture images are analysed using first-order statistical methods and second-order statistical methods. The second-order statistical methods considered are the spatial gray level dependence method, the gray level run length method and the gray level difference method. The performance of the feature extraction methods is evaluated using a Probabilistic Neural Network classifier, and it is found that the first-order statistical method and the second-order statistical methods yield low accuracy on their own. The features extracted from the first-order statistical method and the second-order statistical methods are therefore combined, and it is observed that the combined features (first-order statistical method + gray level run length method) provide higher accuracy than the features from the first-order statistical method or the second-order statistical methods alone.
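
A small sketch of second-order (spatial gray level dependence, i.e. GLCM) feature extraction is given below using scikit-image. The distances, angles, quantization level and the synthetic patch are assumptions; the paper's parameter choices are not given in the abstract.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(img, levels=32):
    """GLCM texture features for one image patch (illustration only)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)  # quantize
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

# Usage with a synthetic patch standing in for an infrared river-ice image.
patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(patch))
```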

Keywords: Gray Level Difference Method, Gray Level Run Length Method, Kurtosis, Probabilistic Neural Network, Skewness, Spatial Gray Level Dependence Method.

731 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are the most frequently occurring problems in practical situations. This paper presents a two-layer model based on the codebook algorithm incorporating the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the output of the first layer and further eliminate false positives. As a result, the proposed approach can greatly improve accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
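
A sketch of the kind of per-block description the first layer relies on (an LBP histogram plus an intensity mean) is shown below; it is illustrative only and uses a gray-scale block, whereas the paper also keeps per-channel RGB means and embeds the features in a codebook.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_descriptor(gray_block, P=8, R=1):
    """LBP histogram plus normalized mean intensity for one block (a sketch,
    not the paper's exact feature)."""
    lbp = local_binary_pattern(gray_block, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return np.concatenate([hist, [gray_block.mean() / 255.0]])

# Usage: compare a block against a stored background block by histogram distance.
rng = np.random.default_rng(0)
bg = (rng.random((16, 16)) * 255).astype(np.uint8)
cur = np.clip(bg.astype(int) + 40, 0, 255).astype(np.uint8)  # brighter, same texture
d = np.abs(block_lbp_descriptor(bg)[:-1] - block_lbp_descriptor(cur)[:-1]).sum()
print(f"LBP histogram L1 distance: {d:.3f}")  # small: texture unchanged despite illumination shift
```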

Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.

730 Impact on Course Registration and SGPA of the Students of BSc in EEE Programme due to Online Teaching during the COVID-19 Pandemic

Authors: Muhibul Haque Bhuyan

Abstract:

Most educational institutions were compelled to switch over to the online mode of teaching, learning, and assessment due to the lockdown when the corona pandemic started around the globe in the early part of 2020. However, they faced a unique set of challenges in delivering knowledge and skills to their students as well as in formulating a proper assessment policy. This paper investigates whether there is an impact on the students' Semester Grade Point Average (SGPA) due to the online mode of teaching, learning and assessment at the Department of Electrical and Electronic Engineering (EEE) of Southeast University (SEU). Details of the student assessments are discussed. The students' grades were then analyzed to find out the impact on SGPA based on the z-test, using the standard deviation (σ). The paper also points out the challenges associated with online classes and the assessment strategies to be adopted during online assessment. The student admission, course advising, and registration statistics are also presented in several tables and analyzed based on percentage changes to observe the impact of the pandemic. In summary, it was observed that the students' SGPAs were not affected, but student course advising and registration were slightly affected by the pandemic. Finally, the paper provides some recommendations to improve the online teaching, learning, assessment, and evaluation system.

Keywords: Electrical and electronic engineering students, impact on course grading and SGPA, online assessment, online teaching, student registration, semester result.

729 Effects of Grape Seed Oil on Postharvest Life and Quality of Some Grape Cultivars

Authors: Zeki Kara, Kevser Yazar

Abstract:

Table grapes (Vitis vinifera L.) are an important crop worldwide. Postharvest problems such as berry shattering, decay and stem dehydration are among the important factors that limit the marketing of table grapes. Edible coatings are an alternative for increasing the shelf-life of fruits, protecting them from the effects of humidity and oxygen and thus retarding their deterioration. This study aimed to compare different grape seed oil (GSO) applications (0.5 g L-1, 1 g L-1, 2 g L-1) and the effects of SO2-generating pads (SO2-1, SO2-2). Grapes treated with GSO or the generating pads were packaged in polyethylene trays and stored at 0 ± 1 °C and 85-95% humidity. The effects of the applications were investigated by quality and sensory evaluations at 15-day intervals. The SO2 applications were found to be the most effective treatments for minimizing weight loss and changes in TA, pH, color and appearance value. The grape seed oil applications were found to be a good alternative for grape preservation, improving weight loss, °Brix, TA, color values and sensory scores. Commercially, 'Alphonse Lavallée' clusters were stored for 75 days and 'Antep Karası' clusters for 60 days. The data obtained from GSO indicated a quality similar to SO2 for up to 40 days of storage.

Keywords: Postharvest, quality, sensory analyses, Vitis vinifera L.

728 Influence of Place Identity on Walkability: A Comparative Study between Two Mixed Used Streets Chaharbagh St. Isfahan, Iran and Dereboyu St. Lefkosa, North Cyprus

Authors: R. Rafiemanzelat

Abstract:

One of the most recent fields of investigation in urban issues focuses on walkability in urban spaces. Beyond its importance for pedestrian transportation, increasing walkability helps to reduce congestion and environmental impact. The subject also matters because of its social-life, experiential-quality and economic-sustainability value. This study focuses on the effects of walkability and place identity on each other in urban public spaces, streets in particular, as a major indicator of their success. The theoretical aspects examined for this purpose consist of two parts: the first evaluates the essential components of place identity in streets, and the second discusses the concept of walkability and the development theories derived from walkable spaces. Finally, the research investigates place identity and walkability and their determinants in two major streets in different cities: Chaharbagh Street in Isfahan, Iran, and Dereboyu Street in Lefkosa, North Cyprus. The study takes a qualitative approach using the research methods of walkability studies, combined with the collection of data on walking behavior and place identity through an observational field study. The results show a relationship between pedestrian-friendly spaces and identity through the variables obtained.

Keywords: Place identity, walkability, urban public space, streets, pedestrian-friendly.

727 Antioxidant Biosensor Using Microbe

Authors: Dyah Iswantini, Trivadila, Novik Nurhidayat, Waras Nurcholis

Abstract:

Antioxidant compounds are needed in the food, beverage, and pharmaceutical industries. For this purpose, an appropriate method is required to measure the antioxidant properties of various types of samples. The spectrophotometric method usually used has some weaknesses, including high cost, long sample preparation time, and low sensitivity. Among the alternative methods developed to overcome these weaknesses is an antioxidant biosensor based on the superoxide dismutase (SOD) enzyme. Therefore, this study was carried out to measure the SOD activity originating from Deinococcus radiodurans and to determine its kinetic properties. A carbon paste electrode modified with ferrocene and immobilized SOD exhibited anodic and cathodic current peaks at potentials of +400 and +300 mV, respectively, for both pure SOD and D. radiodurans SOD. This indicated that the current generated came from the catalytic dismutation of superoxide by SOD. The optimum conditions for SOD activity were pH 9 and a temperature of 27.5 °C for D. radiodurans SOD, and pH 11 and a temperature of 20 °C for pure SOD. The dismutation kinetics of superoxide catalyzed by SOD followed Lineweaver-Burk kinetics, with the apparent KM of D. radiodurans SOD smaller than that of pure SOD. The results showed that D. radiodurans SOD had higher enzyme-substrate affinity and specificity than pure SOD. It is concluded that D. radiodurans SOD has great potential as the biological recognition component of an antioxidant biosensor.
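
For reference, the Lineweaver-Burk treatment mentioned above linearizes the Michaelis-Menten rate law, so that the apparent KM and Vmax follow from the slope and intercept of a 1/v versus 1/[S] plot:

```latex
v = \frac{V_{\max}[S]}{K_M^{app} + [S]}
\;\;\Longrightarrow\;\;
\frac{1}{v} = \frac{K_M^{app}}{V_{\max}}\cdot\frac{1}{[S]} + \frac{1}{V_{\max}}
```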

Keywords: Antioxidant biosensor, Deinococcus radiodurans, enzyme kinetic, superoxide dismutase (SOD).

726 Quality Function Deployment Application in Sewer Pipeline Assessment

Authors: Khalid Kaddoura, Tarek Zayed

Abstract:

Infrastructure assets are essential in urban cities; their purpose is to serve public needs. As a result, their condition should always be monitored to avoid sudden malfunction. Sewer systems, one class of such assets, are an essential part of the underground infrastructure, as they convey sewage to designated areas. However, their condition deteriorates with ageing. Therefore, it is of great significance to assess the condition of pipelines to avoid sudden collapses. Current practices of sewer pipeline assessment rely on industrial protocols that consider distinct defects and grades to produce a limited average or peak score for the assessed assets. This research aims to enhance the evaluation by integrating the Quality Function Deployment (QFD) and Decision-Making Trial and Evaluation Laboratory (DEMATEL) methods in assessing the condition of sewer pipelines. The methodology studies the cause-and-effect relationships among the system's defects to deduce the relative influence weight of each defect. Subsequently, the overall grade is calculated by aggregating the WHATs and HOWs of the House of Quality (HOQ) using the computed relative weights. This study thus enhances the evaluation of the assets and yields informative rehabilitation and maintenance plans for decision makers.
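
The DEMATEL step referred to above follows a standard sequence: normalize the expert direct-relation matrix, compute the total-relation matrix, and read influence weights from the prominence scores. The sketch below shows that sequence on a hypothetical 3x3 defect matrix; the paper's actual defect list and expert scores are not given in the abstract.

```python
import numpy as np

def dematel_weights(direct):
    """Standard DEMATEL: normalize the direct-relation matrix, compute the
    total-relation matrix T = N (I - N)^-1, then derive weights from prominence (D + R)."""
    direct = np.asarray(direct, dtype=float)
    n = direct.shape[0]
    norm = direct / max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    total = norm @ np.linalg.inv(np.eye(n) - norm)   # total-relation matrix
    d = total.sum(axis=1)                            # influence dispatched
    r = total.sum(axis=0)                            # influence received
    prominence = d + r
    return prominence / prominence.sum(), d - r      # relative weights, net cause/effect

# Usage: three hypothetical defects (e.g. cracks, infiltration, root intrusion).
weights, net = dematel_weights([[0, 3, 2],
                                [1, 0, 3],
                                [2, 1, 0]])
print(weights, net)
```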

Keywords: Condition assessment, DEMATEL, QFD, sewer pipelines.

725 Thermal Analysis on Heat Transfer Enhancement and Fluid Flow for Al2O3 Water-Ethylene Glycol Nanofluid in Single PEMFC Mini Channel

Authors: Irnie Zakaria, W. A. N. W Mohamed, W. H. Azmi

Abstract:

Thermal enhancement of a single mini channel in a Proton Exchange Membrane Fuel Cell (PEMFC) cooling plate is numerically investigated. In this study, low concentrations of Al2O3 in water-ethylene glycol mixtures are used as the coolant in a single channel of a carbon graphite plate to mimic the mini channels in a PEMFC cooling plate. Steady, incompressible flow with constant heat flux is assumed in the 1 mm x 5 mm x 100 mm channel. Al2O3 nanoparticle concentrations of 0.1, 0.3 and 0.5 vol.% are dispersed in a 60:40 (water:ethylene glycol) mixture. The effect of different flow rates on the fluid flow and heat transfer enhancement is observed over a Reynolds number range of 20 to 140. The results show that the heat transfer coefficient is improved by 18.11%, 9.86% and 5.37% for 0.5, 0.3 and 0.1 vol.% Al2O3 in 60:40 (water:EG), respectively, compared to the 60:40 (water:EG) base fluid. Higher Al2O3 concentrations perform better in terms of thermal enhancement, but at the expense of the higher pumping power required due to the increased pressure drop. A maximum additional pumping power of 0.0012 W is required for 0.5 vol.% Al2O3 in 60:40 (water:EG) at a Reynolds number of 140.
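
The pumping-power penalty quoted above follows from the usual relation between volumetric flow rate and pressure drop; as a point of reference (the study's exact evaluation procedure is not detailed in the abstract):

```latex
\dot{W}_{\mathrm{pump}} = \dot{V}\,\Delta p = \frac{\dot{m}\,\Delta p}{\rho}
```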

Keywords: Heat transfer, mini channel, nanofluid, PEMFC.

724 Value Analysis Dashboard in Supply Chain Management: Real Case Study from Iran

Authors: Seyedehfatemeh Golrizgashti, Seyedali Dalil

Abstract:

The goal of this paper is to propose a supply chain value dashboard for home appliance manufacturing firms that creates more value for all stakeholders via the balanced scorecard approach. The balanced scorecard is an effective approach that managers have used to evaluate supply chain performance in many fields, but insufficient attention has been paid to all supply chain stakeholders, to improving value creation, and to quantitatively defining the correlation between value indicators and performance measures. In this research, the key stakeholders in the home appliance supply chain, the value indicators for creating more value for those stakeholders, and the most important metrics for evaluating supply chain value performance based on the balanced scorecard approach were selected via a literature review. The most important indicators were then identified based on experts' judgment, acquired through a survey focused on creating more value for stakeholders. Structural equation modelling was used to disclose the relations between the value indicators and the balanced scorecard metrics. The important result of this research is the identification of an effective value dashboard for creating more value for all stakeholders in the supply chain via the balanced scorecard approach, based on an empirical study covering ten home appliance manufacturing firms in Iran. Home appliance manufacturing firms can increase their stakeholders' satisfaction by using this value dashboard.

Keywords: Supply chain management, balanced scorecard, value, structural modeling, stakeholders.

723 Analysis of Secondary School Students’ Perceptions about Information Technologies through a Word Association Test

Authors: Fetah Eren, Ismail Sahin, Ismail Celik, Ahmet Oguz Akturk

Abstract:

The aim of this study is to discover secondary school students' perceptions of information technologies and the connections between concepts in their cognitive structures. A word association test consisting of six concepts related to information technologies was used to collect data from 244 secondary school students. Concept maps presenting the students' cognitive structures were drawn with the help of frequency data. The data were analyzed and interpreted according to the connections obtained from the concept maps. Of the given concepts, students were found to associate most with computer, Internet, and communication, and least with computer-assisted education and information technologies. These results show that the concepts Internet, communication, and computer are an important part of students' cognitive structures. In addition, students mostly gave computer, phone, game, Internet and Facebook as the key concepts. These answers show that students regard information technologies as a means of entertainment and a free-time activity, not as a means of education.

Keywords: Word association test, cognitive structure, information technology, secondary school.

722 Democratic Political Culture of the 5th and 6th Graders under the Authority of Dusit District Office, Bangkok

Authors: Vilasinee Jintalikhitdee, Phusit Phukamchanoad, Sakapas Saengchai

Abstract:

This research studies the level of democratic political culture and the factors that affect the democratic political culture of 5th and 6th graders under the authority of the Dusit District Office, Bangkok. Data were collected by distributing questionnaires to 300 respondents, using stratified sampling for probability sampling and purposive sampling for non-probability sampling, covering all of the schools under the authority of the Dusit District Office. The data were analyzed using descriptive statistics, including the arithmetic mean and standard deviation, and inferential statistics, namely the Independent Samples t-test and One-Way ANOVA (F-test). The researcher also collected data by interviewing the target groups and analyzed them using descriptive analysis. The results show that 5th and 6th graders under the authority of the Dusit District Office, Bangkok have been exposed to democratic political culture at a high level overall. Considering each item, the statement with the highest mean is "the constitutional democratic governmental system is suitable for Thailand", and the statement with the lowest mean is "corruption (cheating and fraud) is normal in Thai society". The factors that affect democratic political culture are grade level, mother's occupation, and attention to news and political movements.

Keywords: Democratic, Political Culture.

721 Comparison of Phylogenetic Trees of Multiple Protein Sequence Alignment Methods

Authors: Khaddouja Boujenfa, Nadia Essoussi, Mohamed Limam

Abstract:

Multiple sequence alignment is a fundamental part of many bioinformatics applications such as phylogenetic analysis. Many alignment methods have been proposed; each method gives a different result for the same data set and consequently generates a different phylogenetic tree. Hence, the chosen alignment method affects the resulting tree. However, in the literature there is no evaluation of multiple alignment methods based on the comparison of their phylogenetic trees. This work evaluates the following eight aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN, ProbCons and Align-m, based on the phylogenetic trees (test trees) they produce on a given data set. The Neighbor-Joining method is used to estimate the trees. Three criteria, namely the dNNI, the dRF and the Id_Tree, are established to test the ability of the different alignment methods to produce test trees closer to the reference tree (true tree). The results show that the method that produces the most accurate alignment gives the test tree nearest to the reference tree. MUSCLE outperforms all aligners with respect to the three criteria and for all datasets, performing particularly well when sequence identities are within 10-20%. It is followed by T-Coffee at lower sequence identity (<10%), Align-m at 20-30% identity, and ClustalX and ProbCons at 30-50% identity. It is also noticed that when sequence identities are higher (>30%), the tree scores of all methods become similar.
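
For reference, the dRF criterion mentioned above is normally computed as the Robinson-Foulds (symmetric bipartition) distance between two trees on the same taxa; a minimal sketch using the open-source DendroPy library (not necessarily the tooling the authors used) is:

```python
import dendropy
from dendropy.calculate import treecompare

# Both trees must share one taxon namespace for the comparison to be valid.
tns = dendropy.TaxonNamespace()
ref  = dendropy.Tree.get(data="((A,B),(C,D),E);", schema="newick", taxon_namespace=tns)
test = dendropy.Tree.get(data="((A,C),(B,D),E);", schema="newick", taxon_namespace=tns)
ref.encode_bipartitions()
test.encode_bipartitions()
print(treecompare.symmetric_difference(ref, test))  # unweighted Robinson-Foulds distance
```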

Keywords: Multiple alignment methods, phylogenetic trees, Neighbor-Joining method, Robinson-Foulds distance.
