Search results for: Treatment methods
3571 Performance Prediction of a SANDIA 17-m Vertical Axis Wind Turbine Using Improved Double Multiple Streamtube
Authors: Abolfazl Hosseinkhani, Sepehr Sanaye
Abstract:
Different approaches have been used to predict the performance of vertical axis wind turbines (VAWTs), such as experimental, computational fluid dynamics (CFD), and analytical methods. Analytical methods, such as momentum models that use streamtubes, have low computational cost and sufficient accuracy. The double multiple streamtube (DMST) model is one of the most commonly used momentum models; it divides the rotor plane of the VAWT into upwind and downwind halves. However, results from the DMST method have shown some discrepancy compared with experimental results, because the Darrieus turbine is a complex and aerodynamically unsteady configuration. In this study, analytical- and experimental-based corrections, including dynamic stall, streamtube expansion, and finite blade length corrections, are used to improve the DMST method. Results indicate that using these corrections for the SANDIA 17-m VAWT improves the accuracy of the DMST predictions.
Keywords: Vertical axis wind turbine, analytical, double multiple streamtube, streamtube expansion model, dynamic stall model, finite blade length correction.
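As context for the momentum balance that DMST solves in each streamtube, here is a minimal sketch (not the authors' corrected model): a fixed-point iteration for the upwind induction factor of a single streamtube, using thin-airfoil lift and illustrative turbine parameters.

```python
# Minimal sketch of the per-streamtube momentum balance behind DMST
# (upwind half only, no dynamic stall/expansion/finite-blade corrections).
# Turbine geometry and the thin-airfoil lift model are illustrative only.
import numpy as np

B, c, R = 3, 0.6, 8.5        # blades, chord [m], radius [m] (assumed values)
tsr = 4.0                    # tip speed ratio
theta = np.radians(60.0)     # azimuth of this upwind streamtube

a = 0.1                      # initial guess for the induction factor
for _ in range(200):
    v = 1.0 - a                                # local velocity / free stream
    wn = v * np.sin(theta)                     # normal component at the blade
    wt = tsr + v * np.cos(theta)               # tangential component
    w2 = wn**2 + wt**2                         # squared relative velocity
    alpha = np.arctan2(wn, wt)                 # angle of attack
    cl, cd = 2 * np.pi * np.sin(alpha), 0.01   # thin-airfoil lift, toy drag
    cn = cl * np.cos(alpha) + cd * np.sin(alpha)
    ct = cl * np.sin(alpha) - cd * np.cos(alpha)
    # Blade-element loading equated to momentum theory: a(1 - a) = f
    f = (B * c / (8 * np.pi * R)) * w2 \
        * (cn * np.sin(theta) - ct * np.cos(theta)) / abs(np.sin(theta))
    a_new = 0.5 * (1.0 - np.sqrt(max(1.0 - 4.0 * f, 0.0)))
    if abs(a_new - a) < 1e-8:
        break
    a = 0.7 * a + 0.3 * a_new                  # under-relaxed update

print(f"upwind induction factor a = {a:.3f}")
```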
3570 Time Organization for Urban Mobility Decongestion: A Methodology for People’s Profile Identification
Authors: Yassamina Berkane, Leïla Kloul, Yoann Demoli
Abstract:
Quality of life, environmental impact, and the congestion of mobility means and infrastructures remain significant challenges for urban mobility. Solutions like car sharing, spatial redesign, eCommerce, and autonomous vehicles will likely affect the unit veh-km and the density of cars in urban traffic, and thus congestion; however, the impact of such solutions is not yet clear to researchers. Congestion arises from growing populations that must travel greater distances to arrive at similar locations (e.g., workplaces, schools) during the same time frame (e.g., rush hours). This paper first reviews the research and application cases of urban congestion methods in recent years. Rethinking the question of time, it then investigates people’s willingness and flexibility to adapt their arrival and departure times at workplaces. We use neural networks and supervised learning methods to predict people’s intentions from their responses to a questionnaire. We created and distributed the questionnaire to more than 50 companies in the Paris suburbs. The results show that our methodology can predict people’s intentions to reschedule their activities (work, study, commerce, etc.).
Keywords: Urban mobility, decongestion, machine learning, neural network.
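As an illustration of the supervised-learning step described above, here is a minimal sketch with hypothetical questionnaire data; the Likert-scale features, toy label, and network size are assumptions, not the paper's actual setup.

```python
# Minimal sketch: predict willingness to reschedule from questionnaire answers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.integers(1, 6, size=(500, 10))    # 10 Likert-scale answers per person
y = (X[:, 0] + X[:, 3] > 6).astype(int)   # toy label: willing to reschedule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```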
3569 Bioremediation of Sewage Sludge Contaminated with Fluorene Using a Lipopeptide Biosurfactant
Authors: X. Vecino, J. M. Cruz, A. Moldes
Abstract:
The disposal and treatment of sewage sludge is an expensive and environmentally complex problem. In this work, a lipopeptide biosurfactant extracted from corn steep liquor was used as an eco-friendly and cost-competitive alternative for the mobilization and bioremediation of fluorene in sewage sludge. Results demonstrated that this biosurfactant has the capability to mobilize fluorene into the aqueous phase, reducing the amount of fluorene in the sewage sludge from 484.4 mg/kg to 413.7 mg/kg and 196.0 mg/kg after 1 and 27 days, respectively. Furthermore, once the fluorene was extracted, the lipopeptide biosurfactant contained in the aqueous phase allowed the biodegradation of up to 40.5% of the initial concentration of this polycyclic aromatic hydrocarbon.
Keywords: Fluorene, lipopeptide biosurfactant, mobilization, sewage sludge.
3568 A Review on Climate Change and Sustainable Agriculture in Southeast Nigeria
Authors: Jane O. Munonye
Abstract:
Climate change has both negative and positive effects on agricultural production. For agriculture to be sustainable under adverse climate conditions, some natural measures are needed. The issue is to produce more food with the available natural resources while reducing the contribution of agriculture to climate change. The study reviewed climate change and sustainable agriculture in southeast Nigeria. Data for the study were from secondary sources. Ten scientific papers were consulted, and data for the review were collected from three. The objectives of the paper were as follows: to review the effect of climate change on one major arable crop in southeast Nigeria (yam; Dioscorea rotundata), the evidence of climate change impact, and methods for sustainable agricultural production under adverse weather conditions. Some climatic parameters, such as sunshine, relative humidity, and rainfall, have a negative relationship with yam production, significant at the 10% probability level. Crop production was predicted to decline by 25% per hectare by 2060, while for livestock the major effect is an increased incidence of diseases and pathogens. Methods for sustainable agriculture and the damage to natural resources caused by climate change were highlighted. Agriculture needs to be transformed as the climate changes to enable the sector to be sustainable. There should be a policy in place to facilitate the integration of sustainability into Nigerian agriculture.
Keywords: Agriculture, climate change, sustainability, yam.
3567 DCGA-Based Transmission Network Expansion Planning Considering Network Adequacy
Authors: H. Shayeghi, M. Mahdavi, H. Haddadian
Abstract:
Transmission network expansion planning (TNEP) is an important component of power system planning, whose task is to minimize the network construction and operational cost while satisfying demand growth and the imposed technical and economic conditions. Up to now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. However, in all of these methods, the adequacy rate of the lines has not been studied beyond the planning horizon, i.e., when the expanded network loses its adequacy and needs to be expanded again. In this paper, in order to take the condition of the transmission lines after expansion into account from the line-loading viewpoint, the adequacy of the transmission network is considered in solving the STNEP problem. To obtain the optimal network arrangement, a decimal codification genetic algorithm (DCGA) is used to minimize the network construction and operational cost. The effectiveness of the proposed idea is tested on Garver's six-bus network. Evaluation of the results reveals that the annual worth of network adequacy has a considerable effect on the network arrangement. In addition, the network obtained with the DCGA has a lower investment cost and a higher adequacy rate. Thus, the network satisfies the requirements of delivering electric power more safely and reliably to load centers.
Keywords: STNEP Problem, Network Adequacy, DCGA.
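To illustrate the decimal codification idea, here is a minimal GA sketch; the corridor costs, capacity model, and overload penalty are hypothetical placeholders, not the paper's formulation.

```python
# Minimal decimal-codification GA sketch: each gene is the integer number of
# new lines in a candidate corridor; fitness = construction cost + penalty.
import numpy as np

rng = np.random.default_rng(0)
N_CORRIDORS, POP, GENS, MAX_LINES = 6, 40, 200, 4
line_cost = rng.uniform(10, 40, N_CORRIDORS)      # cost per line, illustrative

def fitness(plan):
    capacity = plan.sum() * 100                    # toy aggregate capacity
    overload = max(0.0, 1500 - capacity)           # unmet-demand penalty
    return plan @ line_cost + 5.0 * overload

pop = rng.integers(0, MAX_LINES + 1, size=(POP, N_CORRIDORS))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:POP // 2]]   # truncation selection
    children = parents.copy()
    mutate = rng.random(children.shape) < 0.1      # decimal-gene mutation
    children[mutate] = rng.integers(0, MAX_LINES + 1, size=mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print(best, fitness(best))
```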
3566 Development of a Kinetic Model for the Photodegradation of 4-Chlorophenol Using a XeBr Excilamp
Authors: M. Gomez, M. D. Murcia, E. Gomez, J. L. Gomez, N. Christofi
Abstract:
Excilamps are new UV sources with great potential for application in wastewater treatment. In the present work, a XeBr excilamp emitting radiation at 283 nm has been used for the photodegradation of 4-chlorophenol within a range of concentrations from 50 to 500 mg L-1. Total removal of 4-chlorophenol was achieved for all concentrations assayed. The two main photoproduct intermediates formed during the photodegradation process, benzoquinone and hydroquinone, although not completely removed, remain at very low residual concentrations that are insignificant compared to the initial 4-chlorophenol concentrations and are non-toxic. In order to simulate the process and scale it up, a kinetic model has been developed and validated against the experimental data.
Keywords: 4-chlorophenol, excilamps, kinetic model, photodegradation.
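Since the abstract mentions a kinetic model developed from the experimental data, here is a minimal sketch of one common choice, a first-order decay fit; the data points and the first-order form are illustrative assumptions, not the paper's validated model.

```python
# Minimal sketch: fit C(t) = C0 * exp(-k t) to hypothetical concentrations.
import numpy as np
from scipy.optimize import curve_fit

t_min = np.array([0, 10, 20, 40, 60, 90])              # irradiation time [min]
conc = np.array([100.0, 71.0, 50.0, 25.5, 12.8, 4.6])  # mg/L, illustrative

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(first_order, t_min, conc, p0=(100.0, 0.03))
print(f"C0 = {c0:.1f} mg/L, k = {k:.4f} 1/min")
```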
3565 Heat Transfer Characteristics and Fluid Flow past Staggered Flat-Tube Bank Using CFD
Authors: Zeinab Sayed Abdel-Rehim
Abstract:
A computational fluid dynamics code (CFD, Fluent 6.2) for two-dimensional fluid flow is applied to predict the pressure drop and heat transfer characteristics of laminar and turbulent flow past a staggered flat-tube bank. The effect of the aspect ratio ((H/D)/(L/D)) on the pressure drop, temperature, and velocity contours for laminar and turbulent flow over a staggered flat-tube bank is studied. The theoretical results of the present models are compared with previously published experimental data of different authors, and satisfactory agreement is demonstrated. Also, the present study is compared with other analytical methods relating the Reynolds number to the Nusselt number. The results show that as the Reynolds number increases, the maximum velocity in the passage between the upper and lower tubes increases. The comparisons show fair agreement, especially in the turbulent flow region. The good agreement of the data of this work with these recommended analytical methods validates the current study.
Keywords: Aspect ratio ((H/D)/(L/D)), CFD, fluid flow, heat transfer, staggered arrangement, tube bank, turbulent flow.
3564 Coherence Analysis for Epilepsy Patients: An MEG Study
Authors: S. Ge, T. Wu, HY. Tang, X. Xiao, K. Iramina, W. Wu
Abstract:
It is crucial to quantitatively evaluate the treatment of epilepsy patients. This study was undertaken to test the hypothesis that, compared to healthy control subjects, epilepsy patients have abnormal resting-state connectivity. In this study, we used the imaginary part of coherency to measure resting-state connectivity. The analysis results show that, compared to healthy control subjects, epilepsy patients tend to have abnormal rhythmic brain connectivity over their epileptic focus.
Keywords: Coherence, connectivity, resting-state, epilepsy.
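For reference, the imaginary part of coherency is the imaginary part of the normalized cross-spectrum; a minimal sketch with synthetic stand-in signals (not MEG data) follows.

```python
# Minimal sketch: imaginary part of coherency between two channels,
# Im(Sxy / sqrt(Sxx * Syy)), using synthetic phase-lagged signals.
import numpy as np
from scipy.signal import csd, welch

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + rng.normal(size=t.size)

f, sxy = csd(x, y, fs=fs, nperseg=512)    # cross-spectral density (complex)
_, sxx = welch(x, fs=fs, nperseg=512)     # auto-spectra
_, syy = welch(y, fs=fs, nperseg=512)

imag_coh = np.imag(sxy / np.sqrt(sxx * syy))
idx = np.argmin(np.abs(f - 10))
print(f"|Im(coherency)| at 10 Hz: {abs(imag_coh[idx]):.2f}")
```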
3563 Effectual Reversible Watermarking Method for Hiding Patient Details in Brain Tumor Images
Authors: K. Amudha, C. Nelson Kennedy Babu, S. Balu
Abstract:
The security of medical images and their related data is a major research area to be concentrated on in today's era. Security in a medical image means that the physician may hide patient-related data in the medical image and transfer it safely to a defined location using reversible watermarking. Many reversible watermarking methods have been proposed over the past decade. This paper enhances the security level of brain tumor images in order to hide the patient's details, which have to be conferred with other physicians' suggestions. The details, or the information, are hidden in the non-ROI area of the image using a block cipher algorithm. The block cipher uses different keys, so it is difficult for an intruder to detect all the keys and to spot the details; this is the key advantage of the method. The ROI is the tumor area, and the non-ROI is the rest of the image outside the ROI. The non-ROI should not be spoiled in any case, and the details in the non-ROI should be extracted correctly. The reversible watermarking method proposed in this paper performs well compared to existing methods in extracting the original image and providing information security.
Keywords: Brain tumor images, block cipher, reversible watermarking, ROI.
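As a rough illustration of hiding enciphered bytes in non-ROI pixels, here is a minimal sketch; a toy XOR keystream stands in for the paper's block cipher, and plain LSB substitution stands in for the authors' reversible embedding.

```python
# Illustrative only: not the authors' scheme.
import numpy as np

def keystream(key: int, n: int) -> np.ndarray:
    """Toy cipher stand-in; a real scheme would use a proper block cipher."""
    rng = np.random.default_rng(key)
    return rng.integers(0, 256, size=n, dtype=np.uint8)

def embed(image: np.ndarray, roi_mask: np.ndarray, payload: bytes, key: int):
    """Hide an enciphered payload in the LSBs of non-ROI pixels.
    image: uint8 grayscale; roi_mask: bool array, True inside the tumor ROI."""
    data = np.frombuffer(payload, dtype=np.uint8) ^ keystream(key, len(payload))
    bits = np.unpackbits(data)
    stego = image.copy().ravel()
    slots = np.flatnonzero(~roi_mask.ravel())      # non-ROI pixel indices
    assert bits.size <= slots.size, "payload too large for the non-ROI area"
    stego[slots[:bits.size]] = (stego[slots[:bits.size]] & 0xFE) | bits
    return stego.reshape(image.shape)

def extract(stego: np.ndarray, roi_mask: np.ndarray, nbytes: int, key: int):
    slots = np.flatnonzero(~roi_mask.ravel())[:nbytes * 8]
    bits = (stego.ravel()[slots] & 1).astype(np.uint8)
    return (np.packbits(bits) ^ keystream(key, nbytes)).tobytes()
```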
3562 The Removal of Ni, Cu and Fe from a Mixed Metal System Using Sodium Hypophosphite as a Reducing Agent
Authors: Promise Sethembiso Ngema, Freeman Ntuli, Mohamed Belaid
Abstract:
The main objective of this study was to remove and recover Ni, Cu, and Fe from a mixed metal system using sodium hypophosphite as a reducing agent and nickel powder as seeding material. The metal systems studied consisted of Ni-Cu, Ni-Fe, and Ni-Cu-Fe solutions. A 5 L batch reactor was used to conduct experiments in which 100 mg/L of each respective metal was used. It was found that the metals were reduced to their elemental form, with removal efficiencies of over 80%. The removal efficiency decreased in the order Fe>Ni>Cu. The metal powder obtained contained 97-99% Ni and was almost spherical and porous. Size enlargement by aggregation was the dominant particulate process.
Keywords: Crystallization, electroless plating, heavy metal removal, wastewater treatment.
3561 Optimizing Resource Allocation and Indoor Location Using Bluetooth Low Energy
Authors: Néstor Álvarez-Díaz, Pino Caballero-Gil, Héctor Reboso-Morales, Francisco Martín-Fernández
Abstract:
The recent trend of the "Internet of Things" (IoT) has developed over the last few years, causing the emergence of innovative communication methods among multiple devices. The appearance of Bluetooth Low Energy (BLE) has given IoT a push in relation to smartphones. At this moment, a set of new applications related to several topics like entertainment and advertising has begun to be developed, but little has been done so far to take advantage of the potential that these technologies can offer in many business areas and in everyday tasks. In the present work, the application of BLE technology and smartphones is proposed for some business areas related to the optimization of resource allocation in huge facilities like airports. An indoor location system has been developed through triangulation methods with the use of BLE beacons. The described system can be used to locate all employees inside the building in such a way that any task can be automatically assigned to a group of employees. It should be noted that this system can not only be used to link needs with employees according to distance, but it also takes into account other factors like occupation level or category. In addition, it has been endowed with a security system to manage business- and personnel-sensitive data. The efficiency of communications is another essential characteristic that has been taken into account in this work.
Keywords: Bluetooth Low Energy, indoor location, resource assignment, smartphones.
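As an illustration of the triangulation step, here is a minimal sketch that converts RSSI readings to distances with a log-distance path-loss model and solves a linearized least-squares trilateration; the path-loss parameters and beacon layout are assumptions that must be calibrated per site.

```python
import numpy as np

TX_POWER = -59.0   # RSSI at 1 m, dBm (hypothetical calibration value)
PATH_LOSS_N = 2.0  # path-loss exponent (hypothetical)

def rssi_to_distance(rssi: float) -> float:
    """Log-distance path-loss model: rssi = TX_POWER - 10*n*log10(d)."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def trilaterate(beacons: np.ndarray, rssis: list[float]) -> np.ndarray:
    """Least-squares position from >= 3 beacon positions and RSSI readings."""
    d = np.array([rssi_to_distance(r) for r in rssis])
    # Linearize by subtracting the first circle equation from the others.
    A = 2 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
print(trilaterate(beacons, [-65.0, -70.0, -72.0]))
```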
3560 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption
Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif
Abstract:
Maintaining the factory-default battery endurance rate over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While delivering on customers' unlimited expectations, developers are barely aware of the efficient use of energy by the application itself. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect mobile application power consumption data, which were then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code, and method lines have a direct relationship with the power consumption of a mobile application.
Keywords: Battery endurance, software metrics, mobile application, power consumption.
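A minimal sketch of the kind of metric-to-power analysis described, with hypothetical measurements; Pearson correlation is one simple way to quantify the reported direct relationships.

```python
import numpy as np

# Hypothetical measurements: one value per app build.
metrics = {
    "cyclomatic_complexity": np.array([12, 25, 31, 44, 58]),
    "total_lines_of_code":   np.array([800, 1500, 2100, 3400, 5200]),
}
power_mw = np.array([110, 160, 175, 240, 310])  # mean power draw per build

for name, values in metrics.items():
    r = np.corrcoef(values, power_mw)[0, 1]
    print(f"{name}: Pearson r = {r:.2f}")
```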
3559 Large Strain Compression-Tension Behavior of AZ31B Rolled Sheet in the Rolling Direction
Authors: A. Yazdanmehr, H. Jahed
Abstract:
Made of the lightest commercially available industrial metal, magnesium (Mg) alloys are of interest for lightweighting. Expanding their application to different material processing methods requires Mg properties at large strains. Several room-temperature processes, such as shot and laser peening and hole cold expansion, need compressive large-strain data. Two methods have been proposed in the literature to obtain the stress-strain curve at high strains: 1) anti-buckling guides and 2) small cubic samples. In this paper, an anti-buckling fixture is used with the help of digital image correlation (DIC) to obtain the compression-tension (C-T) response of AZ31B-H24 rolled sheet at large strain values of up to 10.5%. The effect of the anti-buckling fixture on the stress-strain curves is evaluated experimentally by comparing the results with those of compression tests on cubic samples. For testing the cubic samples, a new fixture has been designed to increase the accuracy of DIC strain measurements. Results show a negligible effect of the anti-buckling fixture on the stress-strain curves, specifically at high strain values.
Keywords: Large strain, compression-tension, loading-unloading, Mg alloys.
3558 Actionable Rules: Issues and New Directions
Authors: Harleen Kaur
Abstract:
Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown, hidden, and interesting patterns from a huge amount of data stored in databases. Data mining is a stage of the KDD process that aims at selecting and applying a particular data mining algorithm to extract interesting and useful knowledge. It is highly expected that data mining methods will find interesting patterns in databases according to some measures. It is of vital importance to define good measures of interestingness that would allow the system to discover only the useful patterns. Measures of interestingness are divided into objective and subjective measures. Objective measures are those that depend only on the structure of a pattern and can be quantified using statistical methods. Subjective measures, by contrast, depend on the subjectivity and understanding of the user who examines the patterns. These subjective measures are further divided into actionable, unexpected, and novel. The key issue facing the data mining community is how to take actions on the basis of discovered knowledge. For a pattern to be actionable, the user's subjectivity is captured by providing his or her background knowledge about the domain. Here, we consider the actionability of the discovered knowledge as a measure of interestingness and raise important issues that need to be addressed to discover actionable knowledge.
Keywords: Data mining community, Knowledge Discovery in Databases (KDD), interestingness, subjective measures, actionability.
3557 A Testbed for the Experiments Performed in Missing Value Treatments
Authors: Dias de J. C. Lilian, Lobato M. F. Fábio, de Santana L. Ádamo
Abstract:
The occurrence of missing values in databases is a serious problem for data mining tasks, responsible for degrading data quality and the accuracy of analyses. In this context, the area has shown a lack of standardization in experiments for treating missing values, introducing difficulties in the evaluation process across different studies due to the absence of common parameters. This paper proposes a testbed intended to facilitate the implementation of experiments and provide unbiased parameters, using available datasets and suitable performance metrics, in order to optimize the evaluation of and comparison between state-of-the-art missing value treatments.
Keywords: Data imputation, data mining, missing values treatment, testbed.
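As an illustration of what such a testbed standardizes, here is a minimal sketch: artificially delete entries from a complete dataset, apply a baseline treatment, and score it with a common metric; the data and the mean-imputation baseline are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # hypothetical complete dataset
mask = rng.random(X.shape) < 0.1         # artificially delete 10% of entries
X_missing = np.where(mask, np.nan, X)

# Baseline treatment: column-mean imputation.
col_means = np.nanmean(X_missing, axis=0)
X_imputed = np.where(mask, col_means, X_missing)

# Common performance metric: RMSE on the artificially deleted entries only.
rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"mean-imputation RMSE: {rmse:.3f}")
```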
3556 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The question that motivates this writing is how many devote themselves to discovering something in the world of science, where much is discerned and revealed, but at the same time much remains unknown. The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming a specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable and kept constant during the run of the algorithm. According to the given key, the string is divided into groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is based on the Caesar algorithm, i.e., shifting by k characters. However, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b + 1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Using this polyalphabetic method, the ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character x is used as padding when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods from the point of view of execution time and storage space.
Keywords: Ciphering and deciphering, authentic algorithm, polyalphabetic cipher, random key, methods comparison.
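A minimal sketch of the scheme as described (lowercase letters only); the 'x' padding and the wrap rule follow our reading of the abstract.

```python
import random
import string

ALPHABET = string.ascii_lowercase

def encipher(text: str, a: int) -> tuple[str, int]:
    """Caesar-shift each k0-length substring by a shift that starts at k0,
    grows by 1 per substring, and wraps once it exceeds b + 1."""
    b = a + 3
    k0 = random.randint(a, b)          # random key between a and b = a + 3
    k, out = k0, []
    for i in range(0, len(text), k0):
        chunk = text[i:i + k0].ljust(k0, "x")   # pad a short tail with 'x'
        out.append("".join(ALPHABET[(ALPHABET.index(ch) + k) % 26]
                           for ch in chunk))
        k += 1
        if k > b + 1:                  # wrap back to the initial key value
            k = k0
    return "".join(out), k0

def decipher(cipher: str, k0: int, a: int) -> str:
    """Reverse the shifts; padding 'x' characters, if any, remain."""
    b = a + 3
    k, out = k0, []
    for i in range(0, len(cipher), k0):
        out.append("".join(ALPHABET[(ALPHABET.index(ch) - k) % 26]
                           for ch in cipher[i:i + k0]))
        k += 1
        if k > b + 1:
            k = k0
    return "".join(out)

secret, k0 = encipher("attackatdawn", a=2)   # lowercase letters only
print(secret, "->", decipher(secret, k0, a=2))
```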
3555 Influence of Boron Doping and Thermal Treatment on Internal Friction of Monocrystalline Si1-xGex (x≤0.02) Alloys
Authors: I. Kurashvili, G. Darsavelidze, G. Bokuchava, A. Sichinava, I. Tabatadze
Abstract:
The impact of boron doping on the internal friction (IF) and shear modulus temperature spectra of Si1-xGex (x≤0.02) monocrystals has been investigated by testing the characteristics of reverse torsional pendulum oscillations. At room temperature, the microhardness and indentation modulus of the same specimens have been measured with a dynamic ultra-microhardness tester. It is shown that boron doping causes two kinds of effects: at low boron concentration (~10^15 cm^-3), significant strengthening is revealed, while at high boron concentration (~10^19 cm^-3), the strengthening effect and the activation characteristics of relaxation-origin IF processes are reduced.
Keywords: Dislocation, internal friction, microhardness, relaxation.
3554 Real-Time Detection of Mycobacterium Tuberculosis Concentration by CNTFET Biosensor
Authors: Hsiao-Wei Wang, Jung-Tang Huang, Chun-Chiang Lin
Abstract:
Aptamers are useful tools in microorganism research, diagnosis, and treatment. Aptamers are molecules formed from oligonucleic acids that bind specific target molecules, and they are not decomposed by alcohol. Aptamers used to detect Mycobacterium tuberculosis (MTB) have been proven to have specific affinity for the outer membrane proteins of MTB. This article presents a biosensor chip set with aptamers for the early detection of MTB with high specificity and sensitivity, even at very low concentrations. Meanwhile, we have already made a modified facial mask module with a hydrophobic internal coating for effectively collecting M. tuberculosis.
Keywords: Aptamers, CNTFET, Mycobacterium tuberculosis, early detection.
3553 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings
Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank
Abstract:
Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
Keywords: Data mining, protein secondary structure prediction, parallelization.
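For reference, the Q3 score quoted above is simply per-residue three-state accuracy; a minimal sketch with hypothetical sequences:

```python
# Q3 accuracy: fraction of residues whose predicted state (H/E/C) matches
# the observed state.
def q3_accuracy(predicted: str, observed: str) -> float:
    assert len(predicted) == len(observed)
    matches = sum(p == o for p, o in zip(predicted, observed))
    return matches / len(observed)

print(q3_accuracy("HHHEECCCHH", "HHHEECCCCH"))  # 0.9
```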
3552 A Medical Images Based Retrieval System Using Soft Computing Techniques
Authors: Pardeep Singh, Sanjay Sharma
Abstract:
Content-Based Image Retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and varying characteristics. Many questions with respect to speed, semantic descriptors, or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. In several articles, content-based access to medical images has been proposed to support clinical decision making and ease the management of clinical data, and scenarios for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques. New research directions are being defined that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment.
Keywords: CBIR, GA, rough sets, CBMIR.
3551 Changes in the Research of Crisis
Authors: M. Mikusova
Abstract:
Owing to the interdisciplinary nature of crises, the position of researchers in this field is rather difficult. Very often, traditional research methods cannot be applied there. This article focuses on the changes in crisis research. It describes the substance of the individual changes and emphasizes the shift in research approaches to the crisis.
Keywords: Crisis, change, research.
3550 Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design
Authors: K. A. Sumitra Devi, N. P. Banashree, Annamma Abraham
Abstract:
Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to subdivide a multi-million-transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system using a neural network, with selection of global features by means of clustering methods, for partitioning a circuit. For the evolutionary time-series model, we made use of genetic, memetic, and neuro-memetic techniques. Our work focused on the use of the clustering methods K-means and EM. A comparative study is provided of all techniques for solving the problem of circuit partitioning pertaining to VLSI design. The performance of all approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results shows that the neuro-memetic model achieves greater performance than the other models in recognizing sub-circuits with a minimum amount of interconnections between them.
Keywords: VLSI, circuit partitioning, memetic algorithm, genetic algorithm.
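As a minimal illustration of the clustering side of the comparison, here is a K-means sketch on hypothetical gate-feature vectors; the embedding and cluster count are assumptions, not the paper's setup.

```python
# K-means as a first cut at grouping gates into partitions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Each row: a gate embedded by, e.g., coordinates of its connected nets.
gate_features = rng.normal(size=(200, 8))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(gate_features)
for part in range(4):
    size = np.sum(kmeans.labels_ == part)
    print(f"partition {part}: {size} gates")
```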
3549 O-Functionalized CNT Mediated CO Hydro-Deoxygenation and Chain Growth
Authors: K. Mondal, S. Talapatra, M. Terrones, S. Pokhrel, C. Frizzel, B. Sumpter, V. Meunier, A. L. Elias
Abstract:
Worldwide energy independence relies on the ability to leverage locally available resources for fuel production. Recently, syngas produced through the gasification of carbonaceous materials has provided a gateway to a host of processes for the production of various chemicals, including transportation fuels. The basis of the production of gasoline- and diesel-like fuels is the Fischer-Tropsch synthesis (FTS) process: a catalyzed chemical reaction that converts a mixture of carbon monoxide (CO) and hydrogen (H2) into long chain hydrocarbons. Until now, it has been argued that only transition metal catalysts (usually Co or Fe) are active toward CO hydrogenation and subsequent chain growth in the presence of hydrogen. In this paper, we demonstrate that carbon nanotube (CNT) surfaces are also capable of hydro-deoxygenating CO and producing long chain hydrocarbons similar to those obtained through FTS, but with orders of magnitude higher conversion efficiencies than the present state-of-the-art FTS catalysts. We have used advanced experimental tools such as XPS and microscopy techniques to characterize CNTs and identify C-O functional groups as the active sites for the enhanced catalytic activity. Furthermore, we have conducted quantum Density Functional Theory (DFT) calculations to confirm that C-O groups (inherent on CNT surfaces) could indeed be catalytically active toward the reduction of CO with H2 and capable of sustaining chain growth. The DFT calculations have shown that the kinetically and thermodynamically feasible routes for CO insertion and hydro-deoxygenation are different from those on transition metal catalysts. Experiments on a continuous-flow tubular reactor with various nearly metal-free CNTs have been carried out, and the products have been analyzed. CNTs functionalized by various methods were evaluated under different conditions. Reactor tests revealed that hydrogen pre-treatment reduced the activity of the catalysts to negligible levels. Without the pretreatment, the activity for CO conversion was found to be 7 µmol CO/g CNT/s. The O-functionalized samples showed activities greater than 85 µmol CO/g CNT/s with nearly 100% conversion. Analyses show that CO hydro-deoxygenation occurred at the C-O/O-H functional groups. It was found that while the products were similar to FT products, differences in selectivities were observed, which, in turn, were a result of a different catalytic mechanism. These findings open a new paradigm for CNT-based hydrogenation catalysts and constitute a defining point for obtaining clean, earth-abundant, alternative fuels through the use of an efficient and renewable catalyst.
Keywords: CNT, CO hydro-deoxygenation, DFT, liquid fuels, XPS, XTL.
3548 Health Monitoring of Power Transformers by Dissolved Gas Analysis Using Regression Method and a Study of the Effect of Filtration on Oil
Authors: Anjali Chatterjee, Nirmal Kumar Roy
Abstract:
Economically, transformers constitute one of the largest investments in a power system. For this reason, transformer condition assessment and management is a high-priority task. If a transformer fails, it would have a significant negative impact on revenue and service reliability. Monitoring the state of health of power transformers has traditionally been carried out using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and is a universal practice today, which started somewhere in the 1960s. Failure can occur in a transformer for different reasons. Some failures can be limited or prevented by maintenance. Oil filtration is one of the methods to remove the dissolved gases and prevent the deterioration of the oil. In this paper, we analyze the DGA data by regression methods and predict future gas concentrations in the oil. We present a comparative study of different traditional regression methods and the errors generated by their predictions. With the help of these data, we can deduce the health of the transformer by identifying the type of fault that has occurred or will occur in the future. Additionally, the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.
Keywords: Power transformers, Dissolved Gas Analysis, regression method, filtration, oil.
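A minimal sketch of the regression-based forecasting described, with hypothetical DGA readings for a single gas; a real analysis would compare several regression models, as the paper does.

```python
# Fit a linear trend to periodic DGA readings and extrapolate.
import numpy as np

months = np.array([0, 3, 6, 9, 12, 15])          # sampling schedule
h2_ppm = np.array([12, 15, 21, 24, 31, 36])      # hypothetical H2 readings

slope, intercept = np.polyfit(months, h2_ppm, deg=1)
forecast_month = 24
print(f"predicted H2 at month {forecast_month}: "
      f"{slope * forecast_month + intercept:.1f} ppm")
```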
3547 Gassy Ozone Effect on Quality Parameters of Flakes Made from Biologically Activated Whole Wheat Grains
Authors: Tatjana Rakcejeva, Jelena Zagorska, Elina Zvezdina
Abstract:
The aim of the current research was to investigate the gassy ozone effect on quality parameters of flakes made from whole biologically activated wheat grains. The research was accomplished on wheat grains variety
Keywords: Gassy ozone, flakes, biologically activated grains, quality parameters, treatment.
3546 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation
Authors: P. Luangpaiboon, S. Boonhao
Abstract:
This study proposes a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem in the noisy environment of a grease position process in the electronics industry. The proposed model attempts to maximize dual process responses: the mean of parts between failures on the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator from the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods. Its advantages are also discussed. Numerical results demonstrate that the hybridization is superior to the conventional method: the mean of parts between failures on the left and right lines improves by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.
Keywords: Grease position process, multi-response surfaces, modified simplex method, hunting search method, desirability function approach.
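A minimal sketch of the conventional simplex-search side of the study: maximizing the geometric-mean desirability of two responses with Nelder-Mead; the response surfaces and desirability limits are hypothetical placeholders, not the paper's process model.

```python
import numpy as np
from scipy.optimize import minimize

def responses(x):
    """Hypothetical mean-parts-between-failures of left/right processes."""
    left = 100 - (x[0] - 2.0) ** 2 - (x[1] - 1.0) ** 2
    right = 95 - (x[0] - 2.5) ** 2 - 2 * (x[1] - 1.2) ** 2
    return left, right

def desirability(y, lo, hi):
    """Larger-the-better desirability, clipped to [0, 1]."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def overall(x):
    left, right = responses(x)
    d = desirability(left, 60, 100) * desirability(right, 60, 95)
    return -np.sqrt(d)   # negate so that minimize() maximizes desirability

result = minimize(overall, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x, -result.fun)
```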
3545 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches
Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez
Abstract:
Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results of the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or if numerical schemes, the FEM, or computational mechanics are employed.
Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.
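To make the PEM step concrete, here is a minimal sketch of Rosenblueth's two-point estimate for the moments of an implicit LSF, followed by a normal-fit reliability index; the toy g function and its input moments are assumptions, and the paper additionally fits a well-known distribution and applies FORM.

```python
# Rosenblueth's two-point estimate: evaluate g at all 2^n combinations of
# mean +/- one standard deviation (equal weights for uncorrelated inputs).
import itertools
import numpy as np
from scipy.stats import norm

def g(x):
    return 3.0 * x[0] - x[1]          # resistance minus load effect (toy)

means = np.array([10.0, 20.0])        # hypothetical variable means
stds = np.array([1.5, 3.0])           # hypothetical standard deviations

vals = [g(means + np.array(s) * stds)
        for s in itertools.product((-1.0, 1.0), repeat=len(means))]
mu_g = np.mean(vals)                  # first moment of g
sigma_g = np.std(vals)                # second moment of g

beta = mu_g / sigma_g                 # reliability index under a normal fit
print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.2e}")
```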
3544 Wear Regimes of Al-Cu-Mg Matrix Composites
Authors: R. N. Rao, S. L. Tulasi Devi
Abstract:
Tribological behavior and wear regimes of as-cast and heat-treated Al-Cu-Mg matrix composites containing SiC particles were studied using a pin-on-disc wear testing apparatus against an EN32 steel counterface, giving emphasis to the wear rate as a function of applied pressure (0.2, 0.6, 1.0, and 1.4 MPa) at different sliding distances (1000, 2000, 3000, 4000, and 5000 meters) and at a fixed sliding speed of 3.35 m/s. The results showed that the composite exhibited a lower wear rate than the matrix alloy, and the wear rate of the composites is noted to be invariant to the sliding distance and is reduced by heat treatment. Wear regimes such as low, mild, and severe wear were observed as per Archard's wear calculations. It is very interesting to note that the mild wear is almost constant in all the wear regimes.
Keywords: Aluminum, matrix, regimes, wear.
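For reference, the Archard wear law underlying such calculations relates the worn volume V to the normal load W, the sliding distance L, the hardness H of the softer surface, and a dimensionless wear coefficient K:

```latex
% Archard's wear law
V = K \, \frac{W L}{H}
```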
3543 Generating State-Based Testing Models for Object-Oriented Framework Interface Classes
Authors: Jehad Al Dallal, Paul Sorenson
Abstract:
An application framework provides a reusable design and implementation for a family of software systems. Application developers extend the framework to build their particular applications using hooks. Hooks are the places identified to show how to use and customize the framework. Hooks define the Framework Interface Classes (FICs) and the specifications of their methods. As part of the development life cycle, it is required to test the implementations of the FICs. Building a testing model to express the behavior of a class is an essential step in the generation of class-based test cases. The testing model has to be consistent with the specifications provided for the hooks. State-based models consisting of states and transitions are testing models well suited to object-oriented software. Typically, hand-construction of a state-based model of a class's behavior is expensive, error-prone, and may result in a model that is inconsistent with the specifications of the class methods, which misleads verification results. In this paper, a technique is introduced to automatically synthesize a state-based testing model for FICs using the specifications provided for the hooks. A tool that supports the proposed technique is introduced.
Keywords: Framework interface classes, hooks, state-based testing, testing model.
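As a minimal illustration of a state-based testing model, here is a sketch in which an FIC's behavior is a transition table and test sequences are derived to cover every transition; the Connection class and its methods are hypothetical, not from the paper.

```python
from collections import deque

# (state, method) -> next state, for an imaginary Connection FIC.
transitions = {
    ("Closed", "open"): "Open",
    ("Open", "send"): "Open",
    ("Open", "close"): "Closed",
}

def test_sequences(start: str) -> list[list[str]]:
    """Breadth-first walk covering every transition at least once."""
    covered, sequences = set(), []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for (s, method), nxt in transitions.items():
            if s == state and (s, method) not in covered:
                covered.add((s, method))
                sequences.append(path + [method])
                queue.append((nxt, path + [method]))
    return sequences

print(test_sequences("Closed"))  # e.g. [['open'], ['open', 'send'], ...]
```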
3542 Basic Research on Applying Temporary Work Engineering at the Design Phase
Authors: Jin Woong Lee, Kyuman Cho, Taehoon Kim
Abstract:
The application of constructability is increasingly required not only in the construction phase but throughout the whole project. In particular, the proper application of construction experience and knowledge during the design phase enables the minimization of inefficiencies such as design changes and improves constructability during the construction phase. In order to apply knowledge effectively, engineering technology efforts should be implemented as the design progresses. Among many engineering technologies, engineering for temporary works, including facilities, equipment, and other related construction methods, is important for improving constructability. Therefore, as basic research, this study investigates the applicability of temporary work engineering during the design phase in the building construction industry. As a result, the application of temporary work engineering is found to have a significant impact on construction cost reduction and constructability improvement. In contrast to the existing design-bid-build method, the turn-key and CM (construction management) procurement methods currently being implemented in Korea are expected to have a significant impact on the direction of temporary work engineering. To introduce temporary work engineering, expert/professional organization training is required first, and the lack of client awareness should be addressed as a priority. The results of this study are expected to be useful as reference material for the development of more effective temporary work engineering tasks and work processes in the future.
Keywords: Temporary work engineering, design phase, constructability, building construction.