Search results for: Scale space filter.
280 Statistical Modeling of Accelerated Pavement Failure Using Response Surface Methodology
Authors: Anshu Manik, Kasthurirangan Gopalakrishnan, Siddhartha K. Khaitan
Abstract:
Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths, sometimes accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors that affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of different parameters that could cause rutting in airport flexible pavements. A properly designed experiment can give better insight into the causes of rutting and into the interactions and synergisms among the system variables that influence it. Although laboratory experiments are traditionally conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study attempts to identify the critical system variables influencing airport flexible pavement rut depth from a statistical design-of-experiments (DoE) perspective, using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) contains too much noise to allow determination of a good model. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.
Keywords: Airport Pavement, Design of Experiments, Rutting, NAPTF.
279 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain early signals about events that are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet were developed; information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. In the first stage, we manually collected over 724 news articles, totalling more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the process of classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets is used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, were used. These modules helped us obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
Keywords: Lexicon of disasters, modelling, Petri nets, text annotation, social disasters.
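To illustrate the kind of state-space reasoning the abstract describes, the sketch below simulates the firing of a tiny place/transition Petri net and enumerates its reachability set. The net structure (two rooms and an exit) is purely hypothetical and is not taken from the paper.

```python
# Hypothetical evacuation net: tokens are people, places are rooms/exit.
# Each transition is (pre-conditions, post-conditions) on token counts.
places = ["room_a", "room_b", "exit"]
transitions = {
    "a_to_b": ({"room_a": 1}, {"room_b": 1}),
    "b_to_exit": ({"room_b": 1}, {"exit": 1}),
}

def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places and produce tokens in output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability(initial):
    """Exhaustive enumeration of all reachable markings."""
    seen, frontier = {tuple(initial[p] for p in places)}, [initial]
    while frontier:
        m = frontier.pop()
        for pre, post in transitions.values():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                key = tuple(m2[p] for p in places)
                if key not in seen:
                    seen.add(key)
                    frontier.append(m2)
    return seen

initial = {"room_a": 3, "room_b": 0, "exit": 0}
print(len(reachability(initial)), "reachable markings")  # 10 for 3 tokens
```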
278 A Case Study on Optimization of Contractor’s Financing through Allocation of Subcontractors
Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin
Abstract:
In many countries, the construction industry relies heavily on outsourcing models to execute projects and expand businesses to fit the diverse market. Such extensive integration of subcontractors has become an influential factor in the contractor's cash flow management. Accordingly, subcontractors' financial terms are pivotal to the well-being of the contractor's cash flow. The aim of this research is to study the contractor's cash flow with respect to the owner's and subcontractors' payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor's financing limits and optimize the profit values. The model is built using Microsoft Excel VBA coding, and a genetic algorithm is utilized as the optimization tool. Three objective functions are investigated: minimizing the highest negative overdraft value, minimizing the net present worth of the overdraft, and maximizing the project net profit. The model is validated on a full-scale project which includes both self-performed and subcontracted work packages. The results show potential in optimizing the contractor's negative cash flow values while assisting contractors in selecting suitable subcontractors to achieve the objective function.
Keywords: Cash flow optimization, payment plan, procurement management, subcontracting plan.
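As a minimal illustration of the cash-flow quantities being optimized, the sketch below computes a contractor's cumulative cash position from scheduled receipts and payments and extracts the highest negative overdraft and the discounted overdraft worth, two of the three objective functions named above. The paper's model is built in Excel VBA with a genetic algorithm; this standalone Python version with invented figures only shows the bookkeeping.

```python
# Hypothetical monthly receipts from the owner and payments to
# subcontractors/suppliers for a small project (currency units).
receipts = [0, 120, 150, 180, 200, 260]
payments = [100, 140, 130, 150, 160, 120]
discount_rate = 0.01  # per period, assumed

balance, overdraft_profile = 0.0, []
for r, p in zip(receipts, payments):
    balance += r - p              # cash position at the end of the period
    overdraft_profile.append(balance)

highest_negative_overdraft = min(overdraft_profile)

# Net present worth of the overdraft: discount each period's balance.
npw_overdraft = sum(b / (1 + discount_rate) ** (t + 1)
                    for t, b in enumerate(overdraft_profile))

print(highest_negative_overdraft, round(npw_overdraft, 2))
```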
277 Hybrid Advanced Oxidative Pretreatment of Complex Industrial Effluent for Biodegradability Enhancement
Authors: K. Paradkar, S. N. Mudliar, A. Sharma, A. B. Pandit, R. A. Pandey
Abstract:
The study explores a hybrid combination of Hydrodynamic Cavitation (HC) and subcritical Wet Air Oxidation (WAO) for pretreatment of complex industrial effluent, to selectively enhance biodegradability (without major COD destruction) and thereby facilitate enhanced downstream processing via anaerobic or aerobic biological treatment. Advanced oxidation techniques can be less efficient as standalone options, and a hybrid approach combining HC and WAO can produce a synergistic effect, since both options rest on a common free-radical mechanism. HC can provide the initial turbulence and generate hotspots that begin the free-radical attack, and the agitated mixture can then be subjected to a less intense WAO, since the initial heat (to reach the activation energy) is supplied by HC alone. A lab-scale venturi-based hydrodynamic cavitation and wet air oxidation reactor with biomethanated distillery wastewater (BMDWW) as a model effluent was examined to establish the proof of concept. The results indicated that, for a desirable biodegradability index (BOD:COD, BI) enhancement (up to 0.4), the standalone cavitation pretreatment condition was 5 bar and 88 min reaction time, with a COD reduction of 36% and a BI enhancement of up to 0.27 (initial BI: 0.17). The optimum standalone WAO condition was 150 °C, 6 bar and 30 minutes, with 31% COD reduction and a BI of 0.33. The optimum hybrid pretreatment (combined cavitation + WAO) worked out to be 23.18 min HC (at 5 bar) followed by 30 min WAO at 150 °C and 6 bar, at which around 50% of the COD was retained, yielding a BI of 0.55. FTIR and NMR analyses of the pretreated effluent indicated dissociation and/or reorientation of complex organic compounds in the untreated effluent into simpler organic compounds post-pretreatment.
Keywords: BI, hybrid, hydrodynamic cavitation, wet air oxidation.
276 A Discrete Element Method Centrifuge Model of Monopile under Cyclic Lateral Loads
Authors: Nuo Duan, Yi Pik Cheng
Abstract:
This paper presents data from a series of two-dimensional Discrete Element Method (DEM) simulations of a large-diameter rigid monopile subjected to cyclic loading under a high gravitational force. At present, monopile foundations are widely used to support tall and heavy wind turbines, which are also subjected to significant loads from wind and wave actions. A safe design must address issues such as rotations and changes in soil stiffness under these loading conditions. Design guidance on the issue is limited, as is the availability of laboratory and field test data. The interpretation of these results in sand, such as the relation between loading and displacement, relies mainly on empirical correlations to pile properties. Regarding numerical models, most available data come from the Finite Element Method (FEM); they are not comprehensive, and most FEM results are sensitive to input parameters. Micro-scale behaviour can change the mechanism of the soil-structure interaction. A DEM model was used in this paper to study behaviour under cyclic lateral loads. A non-dimensional framework is presented and applied to interpret the simulation results. The DEM data compare well with various sets of published experimental centrifuge model test data in terms of lateral deflection. The accumulated permanent pile lateral displacements induced by the cyclic lateral loads were found to depend on the characteristics of the applied cyclic load, such as the loading magnitudes and directions.
Keywords: Cyclic loading, DEM, numerical modelling, sands.
275 Study of Temperature Changes in Fars Province
Authors: A. Gandomkar, R. Dehghani
Abstract:
Climate change is a phenomenon whose existence, based on evidence reaching back a very long time, is now highly probable. The speed and nature of changes in climate parameters since the middle of the twentieth century have differed from before, and the trend has shifted to some extent compared to the past. Climate change is now regarded not only as one of the most common scientific topics but also as a socio-political one; it is not a new issue. It is a complicated, long-term atmospheric-oceanic phenomenon on a global scale. Changes in precipitation patterns, the rapid decrease and melting of snow-covered resources, increased evaporation, the occurrence of destructive floods, water shortage crises, and severe reductions in agricultural harvests are all signs of climate change. Public instruction is the most important means of coping with this phenomenon, its consequences and events, yet no significant and effective action appears to have been taken so far. The present article is part of a survey about climate change in Fars. The study area has an annual mean temperature of 14 °C and annual precipitation of 320 mm; meteorological data from 23 stations inside the basin with a common 37-year statistical period (1974-2010) were applied. Two statistical methods, the Mann-Kendall test and the coefficient of variation, were applied to study the trends of changes in the annual mean average temperature and the annual minimum mean temperature. Based on the time series for each parameter, the annual mean average temperature and the mean annual maximum temperature show a rising trend, and this trend is clearer for the mean annual maximum temperature.
Keywords: Climate change, coefficient of variation, Fars province, Mann-Kendall method.
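The Mann-Kendall test mentioned above is straightforward to compute; a minimal sketch follows, using an invented temperature series rather than the Fars station data. It returns the S statistic and the normal-approximation Z used to judge trend significance (tie corrections are omitted for brevity).

```python
import math

def mann_kendall(x):
    """Return the Mann-Kendall S statistic and its normal-approximation Z.

    S sums the signs of all pairwise differences; a positive S indicates
    a rising trend. Tie corrections are omitted for brevity.
    """
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical annual mean temperatures (deg C), not the Fars records.
series = [13.6, 13.8, 13.7, 14.0, 14.1, 14.3, 14.2, 14.5]
s, z = mann_kendall(series)
print(s, round(z, 2))  # |Z| > 1.96 suggests a significant trend at 5%
```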
274 Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm
Authors: M. Analoui, M. Fadavi Amiri
Abstract:
The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-nearest-neighbor (k-NN) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be effectively reduced.
Keywords: Feature reduction, genetic algorithm, pattern classification, nearest neighbor rule classifiers (k-NNR).
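A compact sketch of the idea, in which weights scale each normalized feature and a k-NN accuracy score serves as the fitness, is given below. It uses scikit-learn's KNeighborsClassifier and a deliberately simple mutation-only evolutionary loop in place of a full GA; the dataset and the evolutionary settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # normalize to [0, 1]

def fitness(weights):
    """k-NN cross-validated accuracy on the weight-scaled features."""
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X * weights, y, cv=5).mean()

# Mutation-only evolutionary loop standing in for a full GA.
pop = rng.uniform(0, 1, size=(20, X.shape[1]))
for generation in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the best half
    children = parents + rng.normal(0, 0.1, parents.shape)
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights:", np.round(best, 2))  # near-zero weights mark droppable features
```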
273 A State Aggregation Approach to Singularly Perturbed Markov Reward Processes
Authors: Dali Zhang, Baoqun Yin, Hongsheng Xi
Abstract:
In this paper, we propose a single-sample-path-based algorithm with state aggregation to optimize the average reward of singularly perturbed Markov reward processes (SPMRPs) with large-scale state spaces. It is assumed that such a reward process depends on a set of parameters. Unlike other kinds of Markov chains, SPMRPs have their own hierarchical structure, and based on this special structure our algorithm can alleviate the load of the performance optimization. Moreover, our method can be applied online because it evolves with the simulated sample path. Compared with the original algorithm applied to these problems for general MRPs, a new gradient formula for the average reward performance metric in SPMRPs is introduced, and it is proved in the Appendix. Based on these gradients, the schedule of the iterative algorithm, which relies on a single sample path, is presented. A special case in which the parameters only dominate the disturbance matrices is then analyzed, and a precise comparison is made between our algorithm and the older ones aimed at solving these problems for general Markov reward processes; when applied to SPMRPs, our method converges faster in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is presented and simulated, and, corresponding to this practical model, the physical meaning of SPMRPs in networks of queues is clarified.
Keywords: Singularly perturbed Markov processes, gradient of average reward, differential reward, state aggregation, perturbed closed network.
272 Destination Decision Model for Cruising Taxis Based on Embedding Model
Authors: Kazuki Kamada, Haruka Yamashita
Abstract:
In Japan, taxis are a popular means of transportation, and the taxi industry is a big business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, mainly three passenger-catching methods are applied. The first style is "cruising", in which the driver catches passengers while driving on the road. The second is "waiting", in which the driver waits for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which taxis are allocated based on contact from the taxi company. Above all, cruising taxi drivers need experience and intuition to find passengers, and it is difficult to decide the destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers, and it could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method of recommending a destination for cruising taxi drivers. As a machine learning technique, embedding models, which embed high-dimensional data into a low-dimensional space, are widely used in data analysis to represent the relationships of meaning between data points clearly. Taxi drivers have favorite courses based on their experiences, and the courses differ for each driver. We assume that the courses of cruising taxis carry meaning, such as courses for finding businessman passengers (going around the business area of the city or to main stations) and courses for finding traveler passengers (going around sightseeing places or big hotels), and we extract the meaning of their destinations. We analyze the cruising history data of taxis based on the embedding model and propose a recommendation system for passengers. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers through a real-world data analysis using the proposed method.
Keywords: Taxi industry, decision making, recommendation system, embedding model.
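One common way to embed sequences like cruising courses is to treat each visited zone as a token and each trip as a sentence, then train a word2vec-style model. The gensim-based sketch below follows that pattern; the zone labels and model settings are invented for illustration, and the paper does not name its embedding implementation.

```python
from gensim.models import Word2Vec

# Hypothetical cruising histories: each list is one trip, each token a city zone.
trips = [
    ["station", "office_a", "office_b", "station"],
    ["hotel", "sight_1", "sight_2", "hotel"],
    ["station", "office_b", "office_a"],
    ["sight_2", "hotel", "sight_1"],
]

# Train a small skip-gram model so zones visited in similar contexts
# (business zones vs. sightseeing zones) end up close together.
model = Word2Vec(sentences=trips, vector_size=16, window=2,
                 min_count=1, sg=1, epochs=200, seed=7)

# Zones most similar to "office_a" should be other business-course zones.
print(model.wv.most_similar("office_a", topn=2))
```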
271 The Effect of Development of Two-Phase Flow Regimes on the Stability of Gas Lift Systems
Authors: Khalid M. O. Elmabrok, M. L. Burby, G. G. Nasr
Abstract:
Flow instability during gas lift operation is caused by three major phenomena: density wave oscillation, casing heading pressure, and flow perturbation within the two-phase flow region. This paper focuses on the causes and effects of flow instability during gas lift operation and suggests ways to control it in order to maximise productivity during gas lift operations. A laboratory-scale two-phase flow system for studying the effects of flow perturbation was designed and built. The apparatus comprises a 2 m long, 66 mm ID transparent PVC pipe with an air injection point situated 0.1 m above the base of the pipe; this is the point where stabilised bubbles were clearly visible after injection. Air is injected into the water-filled transparent pipe at different flow rates and pressures. The behaviour of the different sizes of bubbles generated within the two-phase region was captured using a digital camera, and the images were analysed using an advanced image processing package. It was observed that the average maximum bubble size increased from 29.72 to 47 mm with increasing length along the vertical pipe column. An increase in air injection pressure from 0.5 to 3 bar increased the bubble size from 29.72 mm to 44.17 mm, which then decreased when the pressure reached 4 bar. It was observed that at a higher bubble velocity of 6.7 m/s, larger-diameter bubbles coalesce and burst due to high agitation and collision with each other. This collapse of the bubbles causes a pressure drop and reverse flow within the two-phase flow and is the main cause of the flow instability phenomenon.
Keywords: Gas lift instability, bubble forming, bubble collapsing, image processing.
270 Advancement of Oscillating Water Column Wave Energy Technologies through Integrated Applications and Alternative Systems
Authors: S. Doyle, G. A. Aggidis
Abstract:
Wave energy converter technologies continue to show good progress in worldwide research. One of the most researched technologies, the Oscillating Water Column (OWC), is arguably one of the most popular categories among converter technologies due to its robustness, simplicity and versatility. However, the versatility of the OWC is still largely untapped, with most deployments following similar trends with respect to applications and operating systems. As the competitiveness of the energy market continues to increase, so does the demand for wave energy technologies to be innovative. For existing wave energy technologies, this requires identifying areas in which to diversify for lower costs of energy with respect to applications and synergies or integrated systems. This paper provides a review of all OWC systems integrated into alternative applications, past and present. The aspects of and variations in their design, deployment and system operation are discussed. Particular focus is given to Multi-OWCs (M-OWCs) and their great potential to increase capture on a larger scale, especially in synergy applications. It is made clear that these steps need to be taken in order to make wave energy a competitive and viable option in the renewable energy mix, as progress to date shows that stand-alone single-function devices are not economical. Findings reveal that the trend of development is moving toward these integrated applications in order to reduce the Levelised Cost of Energy (LCOE), and it will ultimately continue in this direction in efforts to make wave energy a competitive option in the renewable energy mix.
Keywords: Ocean energy, wave energy, oscillating water column, renewable energy, review.
269 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golumb Rice Code
Authors: N. Muthukumaran, R. Ravi
Abstract:
A simulation-based VLSI (Very Large Scale Integration) implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression. The performance of lossless image compression, reducing the image size without losing image quality, is analysed and realized in a VLSI-based FELICS design. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is converted into pixels and then implemented in the VLSI domain. These parameters are used to achieve high processing speed and to minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. Colour difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture design with a regular pipelined data flow and four-stage parallelism. With two-level parallelism, consecutive pixels can be classified into even and odd samples, with an individual hardware engine dedicated to each. This method can be further enhanced by multilevel parallelism.
Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golumb Rice code, High Definition display, VLSI Implementation.
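For readers unfamiliar with the entropy coders named in the title, the sketch below gives a minimal Golomb-Rice coder (the standard spelling of "Golumb Rice"): a value is split by a power-of-two parameter into a unary quotient and a fixed-width remainder. This is a generic textbook version, not the paper's hardware pipeline.

```python
def golomb_rice_encode(n, k):
    """Encode a non-negative integer n with Rice parameter k.

    The quotient n >> k is written in unary (q ones then a zero),
    followed by the low k bits of n in plain binary.
    """
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def golomb_rice_decode(bits, k):
    """Inverse of the encoder above, for a single codeword."""
    q = bits.index("0")                      # unary part ends at first zero
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

for value in (0, 5, 19):
    code = golomb_rice_encode(value, k=2)
    assert golomb_rice_decode(code, k=2) == value
    print(value, "->", code)
```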
268 A Failure Criterion for Unsupported Boreholes in Poorly Cemented Granular Formations
Authors: Sam S. Hashemi
Abstract:
The breakage of bonding between sand particles and their dislodgment from the borehole wall are among the main factors resulting in borehole failure in poorly cemented granular formations. The grain debonding usually precedes the borehole failure, and it can be considered a sign that the onset of the borehole collapse is imminent. Detecting the bonding breakage point and introducing an appropriate failure criterion will play an important role in borehole stability analysis. To study the influence of different factors on the initiation of sand bonding breakage at the borehole wall, a series of laboratory tests was designed and conducted on poorly cemented sand samples. The total absorbed strain energy per volume of material up to the point of the observed particle debonding was computed. The results indicated that the particle bonding breakage point at the borehole wall was reached both before and after the peak strength of the thick-walled hollow cylinder specimens, depending on the stress path and cement content. Three different cement contents and two borehole sizes were investigated to study the influence of the bonding strength and scale on the particle dislodgment. Test results showed that the stress path has a significant influence on the onset of the sand bonding breakage. It was shown that, for various stress paths, there is a near-linear relationship between the absorbed energy and the normal effective mean stress.
Keywords: Borehole stability, experimental studies, total strain energy, poorly cemented sands, particle bonding breakage.
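The absorbed strain energy per unit volume reported above is the area under the stress-strain curve up to the debonding point; numerically it is a simple trapezoidal integration, as the sketch with made-up test data shows.

```python
import numpy as np

# Hypothetical stress-strain record from a thick-walled hollow cylinder
# test, truncated at the observed particle-debonding point.
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008])   # dimensionless
stress = np.array([0.0, 1.2, 2.1, 2.7, 3.0])             # MPa

# Energy per unit volume = integral of stress d(strain), MJ/m^3 here.
absorbed_energy = np.trapz(stress, strain)
print(round(absorbed_energy, 5), "MJ/m^3")
```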
267 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer is now a pressing issue in the field of medical science, and its spread is drastically affecting the health and well-being of the global village. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains irregularities. This approach locates the foreground of an extracted appearance of skin, and image partitioning models are presented to sort out disturbances in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and finally classification is performed between the trained and test data to evaluate a large set of images, which helps doctors make the right prediction. To improve on the existing system, we set our objectives with an analysis; the efficiency of the natural selection process and the enriched histogram are essential in that respect. The GA is applied with attention to its accuracy in order to reduce the false-positive rate. Conclusions: The objective of this task is to improve effectiveness, and the GA accomplishes this with precision in bringing down the false-positive rate. The paper brings together deep learning and medical image processing, which provides superior accuracy, and the proportional types of handling create reusability without errors.
Keywords: Computer-aided system, detection, image segmentation, morphology.
266 Applications of Support Vector Machines on Smart Phone Systems for Emotional Speech Recognition
Authors: Wernhuar Tarng, Yuan-Yuan Chen, Chien-Lung Li, Kun-Rong Hsie, Mingteh Chen
Abstract:
An emotional speech recognition system for applications on smart phones was proposed in this study, combined with 3G mobile communications and social networks, to provide users and their groups with more interaction and care. This study developed a mechanism using support vector machines (SVM) to recognize speech emotions such as happiness, anger, sadness and a normal state. The mechanism uses a hierarchical classifier to adjust the weights of acoustic features and divides various parameters into the categories of energy and frequency for training. In this study, 28 commonly used acoustic features, including pitch and volume, were proposed for training. In addition, a time-frequency parameter obtained by continuous wavelet transforms was used to identify the accent and intonation in a sentence during the recognition process. The Berlin Database of Emotional Speech was used, divided into male and female data sets for training. According to the experimental results, the accuracies of the male and female test sets were increased by 4.6% and 5.2%, respectively, after using the time-frequency parameter for classifying happy and angry emotions. For the classification of all emotions, the average accuracy, including male and female data, was 63.5% for the test set and 90.9% for the whole data set.
Keywords: Smart phones, emotional speech recognition, social networks, support vector machines, time-frequency parameter, Mel-scale frequency cepstral coefficients (MFCC).
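A minimal sketch of the acoustic-feature-plus-SVM pipeline the abstract describes is shown below, using librosa for MFCC extraction and scikit-learn for the classifier. The file list, labels, and feature choice (mean MFCCs only, rather than the paper's 28 features and wavelet parameter) are illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical labelled clips; in the paper, the Berlin Database of
# Emotional Speech plays this role.
files = ["happy_01.wav", "angry_01.wav", "sad_01.wav", "neutral_01.wav"]
labels = ["happy", "angry", "sad", "normal"]

def features(path):
    """Mean MFCC vector as a compact stand-in for the 28 acoustic features."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

X = np.array([features(f) for f in files])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.5)

clf = SVC(kernel="rbf", C=10.0)   # RBF-kernel SVM on the feature vectors
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```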
265 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique
Authors: Mohammad A. Khasawneh
Abstract:
Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and at high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices have been developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test can still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need for another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure.
The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.
Promising findings showed the possibility of using image analysis in lieu of friction and texture measurements, which are labor-intensive and variable in nature. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates provided solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with British Pendulum Number (BPN), Polish Value (PV) and Mean Texture Depth (MTD) values.
Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.
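As an illustration of the image-analysis idea, the sketch below estimates the exposed aggregate area fraction of a pavement-surface image by simple gray-level thresholding and correlates it with friction readings; the threshold, the array stand-ins for real images, and the BPN values are all invented.

```python
import numpy as np
from scipy.stats import pearsonr

def exposed_area_fraction(gray_image, threshold=128):
    """Fraction of pixels brighter than the threshold, taken here as
    exposed aggregate on a dark asphalt background."""
    return float(np.mean(gray_image > threshold))

# Stand-ins for grayscale surface images at successive polishing intervals.
rng = np.random.default_rng(1)
images = [rng.integers(0, 256, size=(64, 64)) + shift
          for shift in (0, 10, 20, 30)]
fractions = [exposed_area_fraction(np.clip(im, 0, 255)) for im in images]

bpn = [62, 58, 55, 51]   # invented British Pendulum Numbers
r, p = pearsonr(fractions, bpn)
print("area fractions:", np.round(fractions, 3), "r =", round(r, 2))
```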
264 Assessment and Uncertainty Analysis of ROSA/LSTF Test on Pressurized Water Reactor 1.9% Vessel Upper Head Small-Break Loss-of-Coolant Accident
Authors: Takeshi Takeda
Abstract:
An experiment utilizing the ROSA/LSTF (rig of safety assessment/large-scale test facility) simulated a 1.9% vessel upper head small-break loss-of-coolant accident with an accident management (AM) measure, under total failure of the high-pressure injection system of the emergency core cooling system, in a pressurized water reactor. As the AM measure, steam generator (SG) secondary-side depressurization was started by fully opening the relief valves in both SGs when the maximum core exit temperature rose to 623 K. A large increase took place in the cladding surface temperature of the simulated fuel rods on account of a late and slow response of the core exit thermocouples during core boil-off. The author analyzed the LSTF test by reference to the matrix of an integral effect test for the validation of a thermal-hydraulic system code. Problems remained in predicting the primary coolant distribution and the core exit temperature with the RELAP5/MOD3.3 code. The uncertainty analysis results of the RELAP5 code confirmed that the sample size used in the order statistics influences both the value of the peak cladding temperature obtained with a 95% probability at a 95% confidence level and the Spearman's rank correlation coefficient.
Keywords: LSTF, LOCA, uncertainty analysis, RELAP5.
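The 95%/95% tolerance statement above comes from non-parametric order statistics (Wilks' formula): with n random code runs, the sample maximum bounds the true 95th percentile with confidence 1 - 0.95^n. A few lines suffice to find the minimum first-order sample size; this is a standard result rather than anything specific to the paper.

```python
def wilks_first_order(coverage=0.95, confidence=0.95):
    """Smallest n such that the max of n runs is a one-sided
    coverage/confidence tolerance limit: 1 - coverage**n >= confidence."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_first_order())  # 59, the classic 95%/95% first-order sample size
```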
263 Coupling Heat and Mass Transfer for Hydrogen-Assisted Self-Ignition Behaviors of Propane-Air Mixtures in Catalytic Micro-Channels
Authors: Junjie Chen, Deguang Xu
Abstract:
Transient simulations of the hydrogen-assisted self-ignition of propane-air mixtures were carried out in platinum-coated micro-channels from ambient cold-start conditions, using a two-dimensional model with reduced-order reaction schemes, heat conduction in the solid walls, convection, and surface radiation heat transfer. The self-ignition behavior of the hydrogen-propane mixed fuel is analyzed and compared with the heated-feed case. Simulations indicate that hydrogen can successfully cause self-ignition of propane-air mixtures in catalytic micro-channels with a 0.2 mm gap size, eliminating the need for startup devices. The minimum hydrogen composition for propane self-ignition is found to be in the range of 0.8-2.8% (on a molar basis), and it increases with increasing wall thermal conductivity and with decreasing inlet velocity or propane composition. A higher propane-air ratio results in earlier ignition. The ignition characteristics of hydrogen-assisted propane qualitatively resemble the selective inlet feed preheating mode. The transient response of the mixed hydrogen-propane fuel reveals sequential ignition of propane followed by hydrogen. Front-end propane ignition is observed in all cases. Low wall thermal conductivities cause earlier ignition of the mixed hydrogen-propane fuel, subsequently resulting in low exit temperatures. The transient-state behavior of this micro-scale system is described, and the startup time and the minimization of hydrogen usage are discussed.
Keywords: Micro-combustion, Self-ignition, Hydrogen addition, Heat transfer, Catalytic combustion, Transient simulation.
262 Evaluating Efficiency of Nina Distribution Company Using Window Data Envelopment Analysis and Malmquist Index
Authors: Hossein Taherian Far, Ali Bazaee
Abstract:
Achieving continuous, sustained economic growth and the economic development that follows is a target for all countries that seek it. In this regard, the distribution industry plays an important role in the growth and development of any nation, so estimating the efficiency and productivity of this industry and identifying the factors influencing it is very necessary. The objective of the present study is to measure the efficiency and productivity of seven branches of Nina Distribution Company using window data envelopment analysis and the Malmquist productivity index from spring 2013 to summer 2015. In this study, fixed assets, payroll personnel, operating costs and the duration of collection of receivables were selected as inputs, and net sales, gross profit and percentage of coverage to customers were selected as outputs. The window data envelopment analysis was then carried out, and process efficiency was measured using the Malmquist index. The results indicate that the average technical efficiency in the window Data Envelopment Analysis (DEA) model follows a sustained fluctuating trend, while the average management efficiency in the window DEA model shows negative growth (a decline) of about 13%. The mean scale efficiency in all windows, except the second, which declined by 8%, shows growth of 18% compared to the first window. On the other hand, the mean change in total factor productivity in all branches of the industry shows average negative growth (a decrease) of 12%, which is the result of a negative change in technology.
Keywords: Nina Distribution Company branches, window data envelopment analysis, Malmquist productivity index.
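For reference, the Malmquist productivity index between periods t and t+1 is the geometric mean of two distance-function ratios; values below 1 indicate the kind of productivity decline reported above. This is the standard textbook definition, not a formula quoted from the paper.

```latex
% Standard output-oriented Malmquist productivity index between t and t+1
M(x^{t}, y^{t}, x^{t+1}, y^{t+1}) =
\left[
  \frac{D^{t}(x^{t+1}, y^{t+1})}{D^{t}(x^{t}, y^{t})}
  \cdot
  \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t+1}(x^{t}, y^{t})}
\right]^{1/2}
% It factors into efficiency change times technical change, which is how
% the decline above is attributed to a negative change in technology.
```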
261 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy
Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla
Abstract:
Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength and non-magnetic properties, due to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and they have been commonly used in many industrial applications. However, continuous exposure to the marine environment accelerates the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the alloy's elemental composition, this comes at a price: a complex manufacturing process and an increased risk of reduced ductility of the Al bronze alloy. In this research, ironmaking slag and waste plastic were used as the input source for surface modification of an Al bronze alloy. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out using the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. Results indicate that the uniform modified surface, which is the result of a selective diffusion process, enhanced the corrosion resistance by up to 12.67%. This approach has opened a new opportunity for various industrial utilisations at commercial scale by minimising dependency on natural resources and transforming waste sources into protective coatings in environmentally friendly and cost-effective ways.
Keywords: Aluminium bronze, waste-based surface modification, Tafel polarisation, corrosion resistance.
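The protection efficiency quoted above is conventionally computed from the corrosion current densities extracted from the Tafel plots; a common form of the relation, assumed here rather than quoted from the paper, is:

```latex
% Protection efficiency from Tafel-extrapolated corrosion current densities,
% where i_corr^0 is the bare alloy and i_corr the surface-modified one.
PE\,(\%) = \frac{i_{\mathrm{corr}}^{0} - i_{\mathrm{corr}}}{i_{\mathrm{corr}}^{0}} \times 100
```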
260 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization
Authors: Himanshu Shekhar Maharana, S. K. Dash
Abstract:
Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation amongst various units and computing the cost of generation and the cost of emission of global-warming gases like sulphur dioxide, nitrous oxide and carbon monoxide. In this dissertation, we emphasize ramp rate and constriction factor based particle swarm optimization (RRCPSO) for analyzing various performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both, through experimental simulated results. A 6-unit, 30-bus IEEE test case system was utilized for simulating the results, involving improved weight factors and advanced ramp rate limit constraints for optimizing the total cost of generation and emission. This method increases the tendency of particles to venture into the solution space and improves their convergence rates. Earlier works using dispersed PSO (DPSO) and constriction factor based PSO (CPSO) give rise to comparatively higher computational times and poorer optimal solutions than the present work. This paper uses a well-defined ramp rate and constriction factor based PSO to compute the various objectives, namely cost, emission, and the total objective, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, illustrating lower computational time and better optimal solutions.
Keywords: Economic load dispatch, constriction factor based particle swarm optimization, dispersed particle swarm optimization, weight improved particle swarm optimization, ramp rate and constriction factor based particle swarm optimization.
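The constriction factor in CPSO-type updates is Clerc's chi, derived from the two acceleration coefficients; the sketch below shows the velocity/position update with a ramp-rate-style clamp on how far a generation variable may move per iteration. The toy one-dimensional cost function and all parameter values are illustrative, not the paper's 6-unit system.

```python
import math
import random

c1 = c2 = 2.05
phi = c1 + c2                                   # must exceed 4 for Clerc's factor
chi = 2.0 / abs(2.0 - phi - math.sqrt(phi**2 - 4.0 * phi))  # ~0.729

def cost(p):
    """Toy quadratic fuel-cost curve for a single generator (illustrative)."""
    return 0.01 * p**2 + 2.0 * p + 10.0

ramp_limit = 5.0        # max change in output per iteration (MW), assumed
pmin, pmax = 10.0, 100.0

random.seed(3)
pos = [random.uniform(pmin, pmax) for _ in range(15)]
vel = [0.0] * 15
pbest = pos[:]
gbest = min(pos, key=cost)

for _ in range(100):
    for i in range(15):
        r1, r2 = random.random(), random.random()
        vel[i] = chi * (vel[i] + c1 * r1 * (pbest[i] - pos[i])
                               + c2 * r2 * (gbest - pos[i]))
        vel[i] = max(-ramp_limit, min(ramp_limit, vel[i]))   # ramp-rate clamp
        pos[i] = max(pmin, min(pmax, pos[i] + vel[i]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)

print("dispatch:", round(gbest, 2), "MW, cost:", round(cost(gbest), 2))
```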
259 Fluidised Bed Gasification of Multiple Agricultural Biomass Derived Briquettes
Authors: Rukayya Ibrahim Muazu, Aiduan Li Borrion, Julia A. Stegemann
Abstract:
Biomass briquette gasification is regarded as a promising route for efficient briquette use in the production of energy, fuels and other useful chemicals. However, previous research has focused on briquette gasification in fixed-bed gasifiers such as updraft and downdraft gasifiers. The fluidised bed gasifier has the potential to be effectively sized to medium or large scale. This study investigated the use of fuel briquettes produced from blends of rice husk and corn cob biomass in a bubbling fluidised bed gasifier. The study adopted a combination of numerical equations and the Aspen Plus simulation software to predict the product gas (syngas) composition based on briquette density and biomass composition (blend ratio of rice husks to corn cobs). The Aspen Plus model was based on an experimentally validated model from the literature. The results, based on a briquette size of 32 mm diameter and a relaxed density range of 500 to 650 kg/m3, indicated that the fluidisation air required in the gasifier increased with briquette density, and the fluidisation air showed to be the controlling factor compared with the actual air required for gasification of the biomass briquettes. The mass flow rate of CO2 in the predicted syngas composition increased with an increase in air flow in the gasifier, while CO decreased and H2 was almost constant. The ratio of H2 to CO for the various blends of rice husks and corn cobs did not change significantly at the designed process air, but a significant difference of 1.0 was observed between the 10/90 and 90/10% blends of rice husks and corn cobs.
Keywords: Briquettes, fluidised bed, gasification, Aspen Plus, syngas.
258 Does Material Choice Drive Sustainability of 3D Printing?
Authors: Jeremy Faludi, Zhongyin Hu, Shahd Alrashed, Christopher Braunholz, Suneesh Kaul, Leulekal Kassaye
Abstract:
Environmental impacts of six 3D printers using various materials were compared to determine whether material choice drives sustainability, or whether other factors such as machine type, machine size, or machine utilization dominate. Cradle-to-grave life-cycle assessments were performed, comparing a commercial-scale FDM machine printing in ABS plastic, a desktop FDM machine printing in ABS, a desktop FDM machine printing in PET and PLA plastics, a polyjet machine printing in its proprietary polymer, an SLA machine printing in its polymer, and an inkjet machine hacked to print in salt and dextrose. All scenarios were scored using the ReCiPe Endpoint H methodology to combine multiple impact categories, comparing environmental impacts per part made for several scenarios per machine. Results showed that most printers' ecological impacts were dominated by electricity use, not materials, and the change in electricity use due to different plastics was not significant compared to the variation from one machine to another. Variation in machine idle time determined impacts per part most strongly. However, material impacts were quite important for the inkjet printer hacked to print in salt: in its optimal scenario, it had as little as 1/38th the impacts per part of the worst-performing machine in the same scenario. If salt parts were infused with epoxy to make them more physically robust, then much of this advantage disappeared, and material impacts actually dominated or equaled electricity use. Future studies should also measure DMLS and SLS processes and materials.
Keywords: 3D printing, Additive Manufacturing, Sustainability, Life-cycle assessment, Design for Environment.
257 A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)
Authors: Rekha Kandwal, Kamal K. Bharadwaj
Abstract:
Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for the data mining activity is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to the original data. Traditional data mining pruning activities, such as support, do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, thereby making knowledge voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski & Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibits variable precision and supports an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the 'Unless C' part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules results in a considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
Keywords: Censored production rules, cumulative learning, data mining, machine learning.
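The 'If P Then D Unless C' semantics is easy to operationalize: when the censor is unknown or too costly to check, the rule behaves like an ordinary production rule; when the censor is known to hold, the conclusion flips. A small sketch of that behaviour (independent of the paper's Dempster-Shafer machinery) follows.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CensoredRule:
    """If premise Then decision Unless censor (Michalski & Winston form)."""
    premise: Callable[[dict], bool]
    decision: str
    censor: Callable[[dict], Optional[bool]]   # None = censor not checkable

    def apply(self, facts: dict) -> Optional[str]:
        if not self.premise(facts):
            return None                         # rule does not fire
        if self.censor(facts):                  # censor holds: polarity flips
            return "not " + self.decision
        return self.decision                    # censor false or unknown

rule = CensoredRule(
    premise=lambda f: f.get("bird", False),
    decision="can_fly",
    censor=lambda f: f.get("penguin"),          # None when resources are tight
)

print(rule.apply({"bird": True}))                   # can_fly (censor unknown)
print(rule.apply({"bird": True, "penguin": True}))  # not can_fly
```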
256 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J
Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa
Abstract:
A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of the nation's economy. Hence, ensuring the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is a leading open-source NoSQL native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model was developed using the Neo4j application, and the most critical and optimal nodes and paths in the network were obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that the Neo4j application can be a suitable tool for studying the important nodes and the critical paths of a major congested metropolitan area.
Keywords: Transportation network, critical path, connectivity reliability, network model, Neo4J application, optimal path, edge betweenness centrality index, node betweenness centrality index, Yen’s k-shortest paths.
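The two centrality computations are easy to prototype outside Neo4j; the sketch below reproduces them on a toy road graph with networkx, whose shortest_simple_paths implements Yen-style k-shortest-path enumeration. The five-node network is invented, and the Cypher/GDS calls used in the actual study are not shown.

```python
import networkx as nx
from itertools import islice

# Toy road network: nodes are interchanges, weights are travel times.
G = nx.Graph()
G.add_weighted_edges_from([
    ("LA", "Anaheim", 35), ("LA", "Burbank", 20), ("Anaheim", "Irvine", 25),
    ("Burbank", "Irvine", 70), ("Irvine", "SanDiego", 80),
    ("Anaheim", "SanDiego", 95),
])

# Influence of each node over shortest paths in the network.
node_bc = nx.betweenness_centrality(G, weight="weight")
print("most critical node:", max(node_bc, key=node_bc.get))

# k shortest loopless paths between two nodes (Yen-style enumeration).
k_paths = list(islice(nx.shortest_simple_paths(G, "LA", "SanDiego",
                                               weight="weight"), 3))
for path in k_paths:
    print(path)
```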
255 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats
Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi
Abstract:
Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs; however, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of conjugated linoleic acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. 48 male Wistar rats were randomly divided into two groups: normal-fat diet (n=8) and high-fat diet (HFD) (n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses are per kg body weight and were administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for the measurement of biochemical markers. SPSS version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%. The trend of weight gain in the CLA and CLA+LC groups was insignificantly decelerated. CLA+LC reduced triglyceride levels significantly, but only CLA had a significant influence on total cholesterol and an insignificant decreasing effect on FBS. Our results showed that an obesogenic diet over a relatively short time led to obesity and dyslipidemia, which can be modified to some extent by LC and CLA.
Keywords: Conjugated linoleic acid, high fat diet, L-carnitine, obesity.
254 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Schättler Ulrich, Musa Semujju
Abstract:
Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. A number of NWP models exist and are used at many operational weather prediction centers. This study considers two models, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model. It compares the models' ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to largely predict no rain, which explains its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. All the models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
Keywords: Comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events.
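The two verification scores used above are one-liners; a minimal sketch with invented forecast/observation pairs is given below. A negative mean error flags the systematic under-prediction the abstract reports.

```python
import numpy as np

def rmse(forecast, observed):
    """Root mean square error between forecast and observed rainfall."""
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def mean_error(forecast, observed):
    """Mean error (bias); negative values indicate under-prediction."""
    return float(np.mean(forecast - observed))

observed = np.array([0.0, 2.5, 10.0, 35.0, 5.0])    # invented daily totals, mm
cosmo = np.array([0.0, 0.0, 4.0, 12.0, 3.0])        # illustrative forecasts
wrf = np.array([0.5, 2.0, 7.0, 25.0, 6.0])

for name, fc in (("COSMO", cosmo), ("WRF", wrf)):
    print(name, "RMSE:", round(rmse(fc, observed), 2),
          "ME:", round(mean_error(fc, observed), 2))
```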
253 Forest Risk and Vulnerability Assessment: A Case Study from East Bokaro Coal Mining Area in India
Authors: Sujata Upgupta, Prasoon Kumar Singh
Abstract:
The expansion of large-scale coal mining into forest areas is a potential hazard for local biodiversity and wildlife. The objective of this study is to provide a picture of the threat that coal mining poses to the forests of the East Bokaro landscape. The forest areas at risk have been assessed, and the priority areas for conservation are presented. The forested areas at risk in the current scenario were assessed and compared with past conditions using a classification and buffer-based overlay approach. Forest vulnerability was assessed using an analytical framework based on systematic indicators and composite vulnerability index values. The results indicate that more than 4 km2 of forest has been lost from 1973 to 2016. Large patches of forest have been diverted for coal mining projects. Forests in the northern part of the coalfield, within a 1-3 km radius around the coal mines, are at immediate risk. The original contiguous forests have been converted into fragmented and degraded forest patches. Most of the collieries are located within or very close to the forests, thus threatening the biodiversity and hydrology of the surrounding regions. Based on the vulnerability values estimated, it was concluded that more than 90% of the forested grids in East Bokaro are highly vulnerable to mining. The forests in the sub-districts of Bermo and Chandrapura have been identified as the most vulnerable to coal mining activities. This case study would add to the capacity of forest managers and mine managers to address the risk and vulnerability of forests at a small landscape level in order to achieve sustainable development.
Keywords: Coal mining, forest, indicators, vulnerability.
252 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery
Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko
Abstract:
In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analyzed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure, and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realized via a two-way coupling between the blood flow modelled via LES and the deforming vessel. Information on the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, so the boundary condition of the current time step depended on previous solutions. The fluctuation of the velocity in the post-stenotic region was analyzed in the study; the axial velocity at normalized position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.
Keywords: Large Eddy Simulation, Fluid Structural Interaction, Constricted Artery, Computational Fluid Dynamics.
251 Optimizing the Components of Grid-Independent Microgrids for Rural Electrification Utilizing Solar Panel and Supercapacitor
Authors: Astiaj Khoramshahi, Hossein Ahmadi Danesh Ashtiani, Ahmad Khoshgard, Hamidreza Damghani, Leila Damghani
Abstract:
Rural electrification rates are generally low in Iran and in many parts of the world that lack sustainable renewable energy resources. Many homes rely on polluting solutions such as crude oil and diesel generators for lighting, heating, and charging electrical gadgets. Small-scale portable solar battery packs are accessible to the public; however, they have low capacity and are challenging to distribute in developing countries. To design battery-based microgrid power systems, the load profile is one of the key parameters, and the reliability of the system should also be taken into account. A conventional microgrid system can be either AC or DC coupled; both have advantages and disadvantages depending on the application, and either can be connected to the main grid or operate independently. This article proposes a tool for the optimal sizing of grid-independent microgrid systems via the respective analysis. For such an analysis, the type of power generation, the number of panels, the battery capacity, the microgrid size, and the group of available consumers should be considered. Therefore, the optimization of different design scenarios is based on the number of solar panels and supercapacitor storage sources and on ranges of the depth of discharge, in order to calculate the size and estimate the overall cost. Generally, it is observed that there is an inverse relationship between the depth-of-discharge spectrum and the solar microgrid costs.
Keywords: Storage, super-storage, grid-independent, economic factors, microgrid.
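A minimal sketch of the sizing trade-off described above: for a given daily load, the required storage capacity, and hence cost, shrinks as the allowed depth of discharge grows. The load figure, unit prices, and efficiency are assumed for illustration only.

```python
# Assumed system parameters (illustrative, not from the study).
daily_load_kwh = 6.0        # household daily consumption
autonomy_days = 2           # days the storage must cover without sun
roundtrip_eff = 0.90        # storage round-trip efficiency
cost_per_kwh_storage = 150  # $/kWh of installed storage, assumed

for dod in (0.3, 0.5, 0.7, 0.9):
    # Usable energy is capacity * DoD * efficiency, so invert for capacity.
    capacity = daily_load_kwh * autonomy_days / (dod * roundtrip_eff)
    print(f"DoD {dod:.0%}: {capacity:5.1f} kWh storage, "
          f"~${capacity * cost_per_kwh_storage:,.0f}")
```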