Search results for: Wavelet feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1738

58 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurentiu M. Ionescu, Agnieszka Misztal

Abstract:

In this paper, we propose a computer-aided solution based on Genetic Algorithms to reduce the effort of drafting two reports required in the manufacture of a product launch, the FMEA analysis and the Control Plan, and to improve the knowledge available to development teams for future projects. The solution allows the design team to enter the data required for FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used to find solutions to multi-criteria optimization problems; in our case, the reduction of production cost is considered alongside the three specific FMEA risk factors. The analysis tool generates final reports for all FMEA processes, and the data obtained in the FMEA reports are automatically integrated with other entered parameters in the Control Plan. The solution is implemented as an application running on an intranet across two servers: one containing the analysis and plan generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture bus chassis. Its advantages are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, as it is implemented with Open Source tools.
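
To make the optimization step concrete, the following is a minimal sketch of a Genetic Algorithm that trades off RPN (Severity x Occurrence x Detection) against the cost of corrective actions. It is an illustration only: the failure modes, candidate actions, and weights are hypothetical, and the paper's actual engine and fitness function may differ.

```python
# Minimal GA sketch: pick one corrective action per failure mode so that a
# weighted sum of RPN (= Severity x Occurrence x Detection) and cost is minimized.
import random

# Hypothetical data: for each failure mode, candidate actions with (S, O, D, cost).
ACTIONS = [
    [(8, 6, 7, 0), (8, 3, 4, 1200), (8, 2, 2, 2500)],   # failure mode 1
    [(6, 5, 5, 0), (6, 2, 3, 800)],                     # failure mode 2
    [(9, 4, 6, 0), (9, 2, 5, 1500), (9, 1, 2, 3000)],   # failure mode 3
]
W_RPN, W_COST = 1.0, 0.05   # relative weights of the two criteria (assumed)

def fitness(chrom):
    rpn, cost = 0, 0
    for fm, g in zip(ACTIONS, chrom):
        s, o, d, c = fm[g]
        rpn += s * o * d
        cost += c
    return W_RPN * rpn + W_COST * cost          # lower is better

def evolve(pop_size=30, gens=100, pmut=0.2):
    pop = [[random.randrange(len(fm)) for fm in ACTIONS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(ACTIONS))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < pmut:          # random mutation of one gene
                i = random.randrange(len(ACTIONS))
                child[i] = random.randrange(len(ACTIONS[i]))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("best action per failure mode:", best, "score:", fitness(best))
```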

Keywords: Automotive industry, control plan, FMEA.

57 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds

Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi

Abstract:

The main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These endocrine disrupting compounds are responsible for learning disabilities, brain development problems, deformations of the body, cancer, reproductive abnormalities in females, and decreased sperm count in human males. Although the discharge of these chemical compounds into the environment cannot be stopped, their amount can be curbed through proper evaluation and detection techniques. The available techniques for determining these endocrine disrupting compounds mainly include high performance liquid chromatography (HPLC), mass spectrometry (MS), and gas chromatography-mass spectrometry (GC-MS). These techniques are accurate and reliable but have certain limitations, such as the need for skilled personnel, time consumption, interference, and the requirement of pretreatment steps. Moreover, these techniques are laboratory bound, and samples are required in large amounts for analysis. In view of these facts, new methods for the detection of endocrine disrupting compounds should be devised that promise high specificity, ultra-sensitivity, cost effectiveness, efficiency, and ease of operation. Nowadays, electrochemical sensors/biosensors modified with nanomaterials are gaining considerable attention among researchers. The bioelement present in such a system makes the developed sensor selective towards the analyte of interest. Nanomaterials provide a large surface area, high electron communication capability, enhanced catalytic activity, and possibilities for chemical modification. In most cases, the nanomaterial also serves as an electron mediator or electrocatalyst for some analytes.

Keywords: Sensors, endocrine disruptors, nanoparticles, electrochemical, microscopy.

56 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion

Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein

Abstract:

Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and at higher severity the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity, and scaliness is one of the PASI parameters that needs to be quantified. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. Dermatologists usually assess severity through their tactile sense, so direct contact between doctor and patient is required, and the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of psoriasis lesions and provide the PASI scaliness score. A psoriasis lesion is modelled as a rough surface, created by superimposing a smooth average (curve) surface with a triangular waveform. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by subtraction of the average surface from the rough surface to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. In the roughness validation, only 6 models could not be accepted (percentage error greater than 10%); these errors occur due to the quality of the scanned images. The algorithm was also validated by measuring the roughness of abrasive papers on a flat surface. The Pearson's correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm still needs to be improved by surface filtering, especially to overcome problems with noisy data.
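
The pipeline described above (polynomial fit of the average surface, subtraction, then the average roughness equation Ra = mean absolute deviation) can be sketched directly. This is a minimal reconstruction under an assumed fit order and a synthetic height map, not the authors' exact code.

```python
# Sketch of the roughness pipeline: estimate the smooth average surface with a
# 2nd-order polynomial fit, subtract it, and compute Ra as the mean absolute
# deviation of the residual (elevation) surface.
import numpy as np

def average_roughness(height_map, order=2):
    h, w = height_map.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, z = x.ravel(), y.ravel(), height_map.ravel()
    # Design matrix for a 2D polynomial of the given order.
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    elevation = z - A @ coeffs          # deviations from the average surface
    return np.mean(np.abs(elevation))   # Ra, the average roughness

# Synthetic rough surface: smooth bump plus a triangular waveform, as in the paper.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
smooth = 0.001 * (xx - w / 2) ** 2
triangle = 0.5 * np.abs((xx % 8) - 4)   # sawtooth-like scale texture
print("Ra =", average_roughness(smooth + triangle))
```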

Keywords: psoriasis, roughness algorithm, polynomial surface fitting.

55 Operational Analysis of Urban Intelligent Transportation System and Strategies for Future Development - Taking Calling Service of Taxi in Wuhan as an Example

Authors: Wang Xu, Yao Yangyang, Lin Ying, Wang Zhenzhen

Abstract:

Intelligent Transportation Systems integrate various modern advanced technologies into the ground transportation system, and they will be the goal of urban transport systems in the future because of their comprehensive effects. However, they also bring some problems, such as project performance assessment, fairness to benefiting groups, and fund management, which are directly related to their operation and implementation. Wuhan has difficulties in organizing transportation because of its natural features (rivers and lakes), so the taxi calling service plays an important role in its transportation. This paper studies the taxi calling service in Wuhan based on quantitative and qualitative analysis, examining its operations management systematically, including the business model, finance, usage, and user evaluation. As for the business model, the government leads the operation at the initial stage and a third party dominates the operation at the mature stage, which not only eases the pressure on the third party and benefits the spread of the calling service at the initial stage, but also alleviates the financial pressure on the government and improves the efficiency of the operation at the mature stage. As for finance, the analysis shows that the service imposes a heavy financial burden for equipment, but that this will be alleviated in the future as the service spreads. As for usage, data comparison shows that the service brings some benefits for taxi drivers, and that the temporal and spatial distributions of usage have certain features. As for user evaluation, the paper analyzes the user groups and their reasons for choosing the service. Finally, based on the analysis above, the paper puts forward the service's potentials, limitations, and future development strategies.

Keywords: Assessment, Calling service of taxi, Operations management, Strategies, User groups.

54 Water Security in Rural Areas through Solar Energy in Baja California Sur, Mexico

Authors: Luis F. Beltrán-Morales, Dalia Bali Cohen, Enrique Troyo-Diéguez, Gerzaín Avilés Polanco, Victor Sevilla Unda

Abstract:

This study aims to assess the potential of solar energy technology for improving access to water, and hence the livelihood strategies of rural communities, in Baja California Sur, Mexico. It focuses on livestock ranches and photovoltaic (PV) water-pump technology as well as other water extraction methods. The methodologies used are the Sustainable Livelihoods and Appropriate Technology approaches. A household survey was applied in June 2006 to 32 ranches in the municipality, of which 22 used PV pumps, and semi-structured interviews were conducted. Findings indicate that solar pumps have in fact helped people improve their quality of life by allowing them to pursue a different livelihood strategy, and that improved access to water - not necessarily as more water, but as less effort to extract and collect it - does not automatically imply overexploitation of the resource; consumption is based on basic needs as well as on storage and pumping capacity. Justification for such systems lies in the avoidance of logistical problems associated with fossil fuels: PV pumps proved to be the most beneficial when substituting gasoline or diesel equipment, but of dubious advantage if intended to replace wind or gravity systems. The main obstacles to the dissemination of solar water pumping technology are high investment and repair costs, so it is not suitable for all cases even when insolation rates and water availability are adequate. In cases where affordability is not an obstacle, it has become an important asset that contributes - by means of reduced expenses, less effort, and saved time - to the improvement of livestock, the main livelihood provider for these ranches.

Keywords: Solar Pumps, Water Security, Livestock Ranches, Sustainable Livelihoods.

53 Exploration of an Environmentally Friendly Form of City Development Combined with a River: An Example of a Four-Dimensional Analysis Based on the Expansion of the City of Jinan across the Yellow River

Authors: Zhaocheng Shang

Abstract:

In order to study the topic of cities crossing rivers, a Four-Dimensional Analysis Method consisting of a timeline, an X-axis, a Y-axis, and a Z-axis is proposed. Policies, plans, and their implications are summarized and researched along the timeline. The X-axis is the direction parallel to the river; the research area was chosen because of its important connective function. It is proposed that a denser surface water network be built because of the ecological orientation of the research area, and analysis of the groundwater confirms that this proposal is feasible. After the blue water network is settled, the green landscape network surrounded by it can be planned. The direction transverse to the river (the Y-axis) should run through the transportation axis so that the urban texture can stretch in an ecological way; it is therefore suggested that the work of the planning bureau and the river bureau be coordinated. The Z-axis research concerns the section view of the river, especially the Yellow River's special feature of being a perched river. Subject to flood control safety demands, river parks could be constructed on the embankment buffer zone, and many kinds of ornamental trees could be used to build the buffer zone. A city crossing a river is a typical case of using landscaping to build a symbiotic relationship between urban landscape architecture and the environment. The local environment should be respected in the process of city expansion, and the planning order of "Benefit - Flood Control Safety" should be replaced by "Flood Control Safety - Landscape Architecture - People - Benefit".

Keywords: Blue-Green landscape network, city crossing river, four-dimensional analysis method, planning order.

52 Physicochemical Properties of Microemulsions and Their Uses in Enhanced Oil Recovery

Authors: T. Kumar, Achinta Bera, Ajay Mandal

Abstract:

The use of microemulsions in enhanced oil recovery has become more attractive in recent years because of their high extraction efficiency. Experimental investigations have been made into the characterization of microemulsions of an oil-brine-surfactant/cosurfactant system for use in enhanced oil recovery (EOR). Sodium dodecyl sulfate, propan-1-ol, and heptane were selected as the surfactant, cosurfactant, and oil, respectively, for preparation of the microemulsion. The effects of salinity on the relative phase volumes and solubilization parameters have also been studied. As salinity changes from low to high values, a phase transition takes place from Winsor I to Winsor II via Winsor III. A suitable microemulsion composition was selected based on its stability and its ability to reduce interfacial tension. A series of flooding experiments was performed with the selected microemulsion in a core flooding apparatus using a uniform sand pack. The core holder was tightly packed with uniform sand (60-100 mesh) and saturated with brines of different salinities. It was flooded with brine at 25 psig, and the absolute permeability was calculated from the flow rate through the sand pack. The sand pack was then flooded with crude oil at 800 psig to irreducible water saturation; the initial water saturation was determined by mass balance. Waterflooding was conducted with the core holder placed horizontally at a constant injection pressure of 200 psig. When the water-cut reached above 95%, around 0.5 pore volume (PV) of the above microemulsion slug was injected, followed by chase water. The experiments were repeated using different compositions of the microemulsion slug, and the additional recoveries were calculated by material balance. Encouraging results were observed, with additional recovery of more than 20% of the original oil in place above that of conventional water flooding.
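
The recovery bookkeeping reduces to simple material-balance arithmetic, sketched below with hypothetical volumes; the experimental values from the paper are not reproduced here.

```python
# Worked material-balance sketch (hypothetical numbers): recovery factors are
# cumulative oil produced divided by the original oil in place (OOIP).
pore_volume_ml = 100.0
swi = 0.25                      # irreducible water saturation (assumed)
ooip_ml = pore_volume_ml * (1 - swi)          # original oil in place = 75 ml

oil_after_waterflood_ml = 35.0  # produced by conventional water flooding (assumed)
oil_after_slug_ml = 16.0        # produced by 0.5 PV microemulsion slug + chase water

rf_water = oil_after_waterflood_ml / ooip_ml
rf_extra = oil_after_slug_ml / ooip_ml
print(f"waterflood recovery: {rf_water:.1%} OOIP")
print(f"additional recovery by microemulsion slug: {rf_extra:.1%} OOIP")  # > 20%
```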

Keywords: Microemulsion Flooding, Enhanced Oil Recovery, Phase Behavior, Optimal Salinity.

51 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, with web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) are presented, and the results and performance metrics discussed.
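
The modelling step can be illustrated with a rough Python stand-in for the boosted-tree experiment (the paper itself uses R and Azure Machine Learning): hourly load is predicted from calendar and weather features plus a lagged load value, on synthetic data.

```python
# Rough stand-in for the boosted-tree experiment, using scikit-learn. Hourly load
# is predicted from the hour of day, temperature, and a 24-hour load lag; all
# data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 24 * 365
hour = np.arange(n) % 24
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / (24 * 365)) + rng.normal(0, 2, n)
load = 50 + 20 * np.sin(2 * np.pi * hour / 24) + 0.8 * temp + rng.normal(0, 3, n)

# Feature engineering: calendar, weather, and a 24-hour lag of the load itself.
X = np.column_stack([hour[24:], temp[24:], load[:-24]])
y = load[24:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```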

Keywords: Time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning.

50 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc

Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez

Abstract:

The evolution of mobile broadband technologies has allowed user download rates to increase in line with current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time, and productivity. Some failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data and for VoIP (Voice over IP) on devices that support it. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, in which two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the various transmission modes vary depending on the BLER (Block Error Rate), throughput, and SNR (Signal-to-Noise Ratio) parameters. For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work at different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz), is, together with Operator 2, reassigning traffic to a lower band, AWS (1700 MHz); even so, the difference in signal quality relative to the data connections established by Operator 2, and the difference found in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
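
The kind of per-operator KPI comparison performed in the study can be sketched as a simple aggregation; the column names and sample values below are hypothetical stand-ins for QualiPoc exports.

```python
# Hedged sketch of per-operator/per-band KPI aggregation over drive-test samples.
import pandas as pd

samples = pd.DataFrame({
    "operator":  ["Op1", "Op1", "Op2", "Op2", "Op1", "Op2"],
    "band_mhz":  [2600, 1700, 1700, 1700, 1700, 1700],
    "sinr_db":   [18.2, 9.5, 12.1, 14.3, 8.7, 13.0],
    "bler_pct":  [1.2, 6.5, 3.1, 2.2, 7.8, 2.9],
    "tput_mbps": [42.0, 15.3, 22.8, 27.1, 13.9, 24.5],
})

kpis = samples.groupby(["operator", "band_mhz"]).agg(
    mean_sinr=("sinr_db", "mean"),
    mean_bler=("bler_pct", "mean"),
    mean_tput=("tput_mbps", "mean"),
)
print(kpis)   # quality differences between operators/bands become visible here
```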

Keywords: BLER, LTE, Network, Qualipoc, SNR.

49 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for the wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information; digital documents are also easy to copy and distribute, and therefore face many threats. With the large flood of information and the development of digital formats, this is a big security and privacy issue, and it has become necessary to find appropriate protection for significant, accurate, and sensitive information. Nowadays, protection systems can be classified more specifically as information hiding, information encryption, or a combination of hiding and encryption to increase information security. The strength of the science of information hiding comes from the non-existence of standard algorithms for hiding secret messages, and from the randomness of hiding methods, such as combining several media (covers) with different methods to pass a secret message; in addition, there are no formal methods to be followed to discover hidden data. For these reasons, the task of this research is difficult. In this paper, a new system of information hiding is presented. The proposed system aims to hide information (a data file) within any execution file (EXE) and to detect the hidden file; we present the implementation of a steganography system which embeds information in an execution file. (EXE) files have been investigated, and the system tries to solve the problem of the size of the cover file while making the result undetectable by anti-virus software. The system includes two main functions: first, the hiding of the information in a Portable Executable file (EXE), through four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); and second, the extraction of the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system has achieved its main goals: the size of the cover file and the size of the information are made independent of each other, and the resulting file does not cause any conflict with anti-virus software.
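
A much-simplified illustration of the hide/extract cycle is sketched below: the payload is encrypted (here with Fernet, an AES-based scheme from the `cryptography` package, standing in for the paper's AES step) and appended to the EXE as an overlay after a marker. This is not the authors' embedding method, which is statistical and anti-virus aware; the file names are hypothetical.

```python
# Minimal overlay-style illustration (not the authors' exact method): the payload
# is AES-encrypted with Fernet and appended to the EXE after a marker, then
# recovered and decrypted.
from cryptography.fernet import Fernet

MARKER = b"--HIDDEN--"

def hide(cover_exe, secret_file, stego_exe, key):
    token = Fernet(key).encrypt(open(secret_file, "rb").read())  # encrypt payload
    with open(cover_exe, "rb") as f:
        cover = f.read()
    with open(stego_exe, "wb") as f:
        f.write(cover + MARKER + token)   # EXE still runs; payload is an overlay

def extract(stego_exe, out_file, key):
    data = open(stego_exe, "rb").read()
    token = data.split(MARKER, 1)[1]      # everything after the marker
    open(out_file, "wb").write(Fernet(key).decrypt(token))

key = Fernet.generate_key()
# hide("cover.exe", "secret.txt", "stego.exe", key)
# extract("stego.exe", "recovered.txt", key)
```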

Keywords: Cryptography, Steganography, Portable Executable File.

48 Text Mining Technique for Data Mining Application

Authors: M. Govindarajan

Abstract:

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is among the most useful for classification problems; with this technique, a tree is constructed to model the classification process. There are two basic steps in the technique: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that uses rulesets, cross-validation, and boosting on top of the original C5.0 in order to reduce the error ratio. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy on new cases; by sampling or by using a separate test file, the classifier can instead be evaluated on cases that were not used to build it, which gives a reliable estimate when both sets are large. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset here has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous data sets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
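
C5.0/See5 itself is proprietary, but the evaluation ideas above (hold-out testing, f-fold cross-validation, and x-trial boosting) can be sketched with scikit-learn stand-ins on a synthetic data set sized like the hypothyroid split.

```python
# Evaluation ideas from the abstract, with scikit-learn stand-ins for C5.0:
# a decision tree, 10-fold cross-validation, and 10-classifier boosting.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# 3772 cases = 2772 training + 1000 test, matching the hypothyroid split.
X, y = make_classification(n_samples=3772, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1000, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("hold-out error:", 1 - tree.score(X_te, y_te))

# f-fold cross-validation: error estimated on cases withheld from training.
cv_err = 1 - cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print("10-fold CV error:", cv_err.mean())

# Boosting with 10 trees, analogous to See5's "Boost" option with x trials.
boost = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X_tr, y_tr)
print("boosted error:", 1 - boost.score(X_te, y_te))
```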

Keywords: C5.0, Error Ratio, text mining, training data, test data.

47 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index

Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad

Abstract:

A road safety performance index is a composite index which combines various indicators of road safety into a single number. The development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on the facility as well as the behavior of road users. The secondary objectives include finding the critical inputs to the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators: protective systems (seat belt, helmet, etc.), the road (road width, signalized intersections, number of lanes, speed limit), the number of pedestrians, and the number of vehicles. Data on these four safety performance indicators were collected using an observation survey on a 20 km section of the National Highway N-125 near Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for the extraction, weighting, and linear aggregation of indicators to obtain a single value: an individual index score is calculated for each road section by multiplying the weights by the standardized values of each safety performance indicator. The Simple Average technique, in contrast, uses equal weights in the linear aggregation of indicators. The road sections are ranked according to their RSPI scores under both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.
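
Both aggregation schemes can be sketched in a few lines: standardize the indicators, then weight them either by the loadings of the first principal component or equally. The indicator values below are hypothetical road sections, not the N-125 survey data.

```python
# Sketch of the two index-construction methods compared above: PCA-derived
# weights versus equal weighting (simple average) over standardized indicators.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = road sections; cols = protective system, road, pedestrians, vehicles
indicators = np.array([
    [0.6, 0.7, 120, 900],
    [0.3, 0.5, 300, 1500],
    [0.8, 0.9,  80, 700],
    [0.4, 0.4, 260, 1300],
])
Z = StandardScaler().fit_transform(indicators)

pca = PCA(n_components=1).fit(Z)
w = np.abs(pca.components_[0])
w /= w.sum()                       # PCA weights, normalized to sum to 1
rspi_pca = Z @ w                   # PCA-based index score per section
rspi_avg = Z.mean(axis=1)          # equal-weighting (simple average) score

print("PCA weights:", w.round(3))
print("ranking (PCA):", np.argsort(-rspi_pca))
print("ranking (avg):", np.argsort(-rspi_avg))
```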

Keywords: Aggregation, index score, indicators, principal component analysis, weighting.

46 Dynamic Behavior of the Nanostructure of Load-bearing Biological Materials

Authors: M. Qwamizadeh, K. Zhou, Z. Zhang, YW. Zhang

Abstract:

Typical load-bearing biological materials such as bone, mineralized tendon, and shell are biocomposites made from both organic (collagen) and inorganic (biomineral) materials. This remarkable class of materials, with intrinsic hierarchically designed internal structures, shows mechanical properties far superior to those of the weak components from which they are formed. Extensive investigations of the failure of biological materials have concentrated on static loading conditions. However, most damage and failure mechanisms in load-bearing biological materials occur when their structures are exposed to dynamic loading. The main question to be answered here is: what is the relation between the layout and architecture of load-bearing biological materials and their dynamic behavior? In this work, a staggered model was developed based on the structure of natural materials at the nanoscale, and Finite Element Analysis (FEA) was used to study the dynamic behavior of the structure of load-bearing biological materials, in order to answer why the staggered arrangement has been selected by nature for the nanocomposite structure of most biological materials. The results showed that staggered structures attenuate stress waves more efficiently than layered structures. Furthermore, such a staggered architecture effectively utilizes the capacity of the biostructure to resist both normal and shear loads. The geometrical parameters of the model, such as the thickness and aspect ratio of the mineral inclusions, were selected from the typical range of experimentally observed feature sizes and layout dimensions of biological materials such as bone and mineralized tendon, and the numerical results were validated against existing theoretical solutions. The findings of the present work emphasize the significant effect of dynamic behavior on the natural evolution of load-bearing biological materials and can help scientists design bioinspired materials in the laboratory.

Keywords: Load-bearing biological materials, nanostructure, staggered structure, stress wave decay.

45 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have been the major security threat, so it is important to impede them. The hindrance of such intrusions entirely relies on their detection, which is the primary concern of any security tool like an intrusion detection system (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance, which can be improved by increasing the accurate detection rate and reducing false positives. The existing intrusion detection techniques have the limitation of using the raw dataset for classification: the classifier may get confused due to redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform the raw features into a principal feature space and select the features based on their sensitivity, with eigenvalues used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification purposes, Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used, due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) Cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms the existing approaches and has the capability to minimize the number of features and maximize the detection rates.
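
A minimal version of the proposed pipeline is sketched below, with PCA feeding an SVM; the LDA/LBP transforms and the greedy/backward-elimination/PSO feature search are omitted, and synthetic data stand in for KDD'99 (which has 41 features).

```python
# Minimal PCA -> SVM intrusion detection pipeline on synthetic KDD'99-like data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=41,  # KDD'99 has 41 features
                           n_informative=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Standardize, project raw features into a principal feature space, classify.
ids = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
ids.fit(X_tr, y_tr)
print("detection accuracy:", ids.score(X_te, y_te))
```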

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP).

44 Named Entity Recognition using Support Vector Machine: A Language Independent Approach

Authors: Asif Ekbal, Sivaji Bandyopadhyay

Abstract:

Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems, and others. This paper reports on the development of a NER system for Bengali and Hindi using a Support Vector Machine (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name, and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus; these lexical patterns have been used as features of the SVM in order to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation has demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali, and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. The results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) was also performed to compare the performance of the proposed NER system with that of an existing HMM-based system for both languages.
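
A toy version of the token-classification setup is sketched below: each word is classified from its own form plus a +/-1 context window using a linear SVM. The corpus, features, and label set are illustrative only; the paper uses far richer features and the IJCNLP-08 tagset.

```python
# Toy SVM-based NER: classify each token from its form and a +/-1 context window.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

sents = [[("Rabindranath", "PER"), ("visited", "O"), ("Kolkata", "LOC")],
         [("Infosys", "ORG"), ("opened", "O"), ("in", "O"), ("Delhi", "LOC")]]

def feats(sent, i):
    w = sent[i][0]
    return {"w": w.lower(), "cap": w[0].isupper(), "suf3": w[-3:].lower(),
            "prev": sent[i - 1][0].lower() if i > 0 else "<s>",
            "next": sent[i + 1][0].lower() if i < len(sent) - 1 else "</s>"}

X = [feats(s, i) for s in sents for i in range(len(s))]
y = [tag for s in sents for _, tag in s]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)
test = [("Tagore", "?"), ("lives", "?"), ("in", "?"), ("Mumbai", "?")]
print(clf.predict(vec.transform([feats(test, i) for i in range(len(test))])))
```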

Keywords: Named Entity (NE), Named Entity Recognition (NER), Support Vector Machine (SVM), Bengali, Hindi.

43 Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

Authors: K. Anitha Sheela, J. Tarun Kumar

Abstract:

HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. The HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release 99 DSCH. Until now, the HSDPA system has used turbo coding, a coding technique that closely approaches the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity at the price of increased encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user gains in terms of receiver complexity and bit-error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate the codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, with only a small increase in Eb/No, which is not possible with turbo coding; the same BER was also achieved using fewer iterations, so the latency and receiver complexity have decreased with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and more robust data services: in other words, while realistic data rates are only a few Mbps, the actual quality and number of users served will improve significantly.
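
The iterative nature of LDPC decoding can be illustrated on a toy parity-check matrix. The paper uses soft-decision belief propagation; for brevity, the sketch below shows the simpler hard-decision bit-flipping variant, which shares the same stopping rule (all parity checks satisfied).

```python
# Toy illustration of iterative LDPC decoding: hard-decision bit flipping on a
# small parity-check matrix H (the paper uses soft-decision belief propagation).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # toy sparse parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():          # all parity checks satisfied
            return c, True
        fails = H.T @ syndrome          # per-bit count of unsatisfied checks
        c[np.argmax(fails)] ^= 1        # flip the most-suspect bit
    return c, False

codeword = np.zeros(6, dtype=int)       # the all-zero codeword is always valid
received = codeword.copy()
received[2] ^= 1                        # inject a single bit error
decoded, ok = bit_flip_decode(received, H)
print("decoded:", decoded, "success:", ok)
```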

Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.

42 Palestine Smart Tourism Augmented Reality Mobile Application

Authors: Murad Al-Rajab, Sherin Hazboun, Azhar Al-Hamamreh, Nirmeen Odeh, Siham Halaseh

Abstract:

Tourism is considered an important sector for most countries, and maintaining good tourism attractions can promote national economic development. The State of Palestine is historically considered a country rich in archaeological sites. In the city of Bethlehem, for example, the Church of the Nativity is the most important tourist site, but it does not benefit from enough technological development to attract tourists. In this paper, we propose a smart mobile application named "Pal-STAR" (Palestine Smart Tourist Augmented Reality) as an innovative solution which targets tourists and assists them during a visit inside the Church of the Nativity. The application uses augmented reality and features a virtual tourist guide showing views of the church while providing historical information in a smart, easy, effective, and user-friendly way. The proposed application is compatible with multiple mobile platforms. The findings show that this application will improve the practice of the tourism sector in the Holy Land, increase the number of tourists visiting the Church of the Nativity, and facilitate access to historical data that has been difficult to obtain through traditional tourism guidance. The value that tourism adds to a country cannot be denied, and the more technological advances are incorporated into this sector, the better the country's tourism sector can be served. Palestine's economy is heavily dependent on tourism in many of its main cities, despite several limitations, and technological development is needed to enable this sector to flourish. The proposed mobile application would have a positive impact on the development of the tourism sector by creating an augmented reality environment for tourists inside the church, helping them to navigate and learn about holy places in a non-traditional way, with a virtual tourist guide.

Keywords: Smartphones, tourism, tourists guide, augmented reality, Palestine.

41 Two-Level Identification of HVAC Consumers for Demand Response Potential Estimation Based on Setpoint Change

Authors: M. Naserian, M. Jooshaki, M. Fotuhi-Firuzabad, M. Hossein Mohammadi Sanjani, A. Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investment required to install control equipment can be a limiting factor in the development of such demand response programs; thus, the selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which owing to the heat capacity of buildings feature relatively high flexibility, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investment required for control systems enabling direct load control demand response programs, this paper presents a solution to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investment needed to implement direct load control programs for residential HVAC systems and for estimating the demand response potential in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of these consumers is estimated based on equivalent desired temperature setpoint changes.
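
The first stage (clustering on load-temperature correlation) can be sketched as follows, with synthetic consumers standing in for real smart-meter data.

```python
# Sketch of the first stage: summarize each consumer by the correlation between
# its hourly load and the ambient temperature, then group consumers with K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
hours = 24 * 30
temp = 25 + 8 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 1, hours)

# 40 synthetic consumers: half temperature-driven (likely AC owners), half flat.
ac = 0.15 * temp + rng.normal(0, 0.3, (20, hours))
flat = 1.5 + rng.normal(0, 0.3, (20, hours))
loads = np.vstack([ac, flat])

corr = np.array([np.corrcoef(load, temp)[0, 1] for load in loads])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(corr.reshape(-1, 1))
for k in range(2):
    print(f"cluster {k}: {np.sum(labels == k)} consumers, "
          f"mean corr {corr[labels == k].mean():.2f}")
```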

Keywords: Data-driven analysis, demand response, direct load control, HVAC system.

40 An Intelligent Combined Method Based on Power Spectral Density, Decision Trees and Fuzzy Logic for Hydraulic Pumps Fault Diagnosis

Authors: Kaveh Mollazade, Hojat Ahmadi, Mahmoud Omid, Reza Alimardani

Abstract:

Recently, machine condition monitoring and fault diagnosis as part of a maintenance system have attracted global attention due to the potential advantages to be gained from reduced maintenance costs, improved productivity, and increased machine availability. The aim of this work is to investigate the effectiveness of a new fault diagnosis method based on the power spectral density (PSD) of vibration signals in combination with decision trees and a fuzzy inference system (FIS). To this end, a series of studies was conducted on an external gear hydraulic pump. After a test under normal conditions, a number of machine defect conditions were introduced at three working levels of pump speed (1000, 1500, and 2000 rpm): (i) journal bearing with inner face wear (BIFW), (ii) gear with tooth face wear (GTFW), and (iii) journal bearing with inner face wear plus gear with tooth face wear (B&GW). Features of the PSD values of the vibration signal were extracted using descriptive statistical parameters. The J48 algorithm was used as a feature selection procedure to select pertinent features from the data set, and its output was employed to produce the crisp if-then rules and membership function sets. The structure of the FIS classifier was then defined based on the crisp sets. The proposed PSD-J48-FIS model was evaluated using the data sets obtained from the vibration signals of the pump. Results showed that the total classification accuracy for the 1000, 1500, and 2000 rpm conditions was 96.42%, 100%, and 96.42%, respectively. The results indicate that the combined PSD-J48-FIS model has potential for the fault diagnosis of hydraulic pumps.
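
A condensed version of the feature pipeline is sketched below: Welch PSD of each vibration signal, descriptive statistics of the PSD as features, and a scikit-learn decision tree standing in for J48 (the fuzzy inference stage is omitted). The signals are synthetic.

```python
# Condensed feature pipeline: Welch PSD per signal, descriptive-statistic
# features, and a decision tree classifier (a stand-in for J48).
import numpy as np
from scipy.signal import welch
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
fs = 1000  # sampling rate in Hz (assumed)

def make_signal(fault_freq):
    t = np.arange(0, 1, 1 / fs)
    return np.sin(2 * np.pi * fault_freq * t) + 0.5 * rng.normal(size=t.size)

def psd_features(x):
    f, pxx = welch(x, fs=fs, nperseg=256)
    return [pxx.mean(), pxx.std(), pxx.max(), f[np.argmax(pxx)]]

# Two synthetic conditions: "normal" (50 Hz) and "bearing wear" (120 Hz).
X = [psd_features(make_signal(50)) for _ in range(30)] + \
    [psd_features(make_signal(120)) for _ in range(30)]
y = [0] * 30 + [1] * 30

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```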

Keywords: Power Spectral Density, Machine Condition Monitoring, Hydraulic Pump, Fuzzy Logic.

39 Forensic Medical Capacities of Research of Saliva Stains on Physical Evidence after Washing

Authors: Saule Mussabekova

Abstract:

Recent advances in genetics have sharply increased the capacity to form reliable evidence when conducting forensic examinations; traces of biological origin are thus important sources of information about a crime. Currently, sexual offenses have increased around the world, including those in which the criminals use various detergents to remove traces of their crime. A feature of modern synthetic detergents is the presence of biological additives - enzymes - which purposefully destroy stains of biological origin. To study the nature and extent of the impact of modern washing powders on saliva stains on physical evidence, specially prepared test specimens of different types of tissue to which saliva had been applied were examined. Materials and Methods: Washing machines from well-known manufacturers of household appliances, with different production characteristics, and advertised brands of washing powder were used for test washing. Over 3,500 experimental samples were tested, and after washing, the traces of saliva were identified using modern forensic research methods. Results: The influence of different washing programs, types of washing machines, and washing powders on the detection and identification of saliva stains on physical evidence was tested and characterized. The results of experimental and practical expert studies showed that in most cases it is not possible to draw conclusions identifying saliva traces on physical evidence after washing; this is a consequence of the effect of the biological additives and other factors on the saliva traces during washing. Conclusions: On the basis of the results of the study, the feasibility of examining saliva stains on physical evidence after washing is established. The use of modern molecular genetic methods makes it possible to partially solve the problems arising in the study of laundered evidence, and the additional study of physical evidence after washing facilitates the detection and investigation of sexual offenses against women and children.

Keywords: Saliva research, modern synthetic detergents, laundry detergents, forensic medicine.

38 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real-time, non-invasive Brain Computer Interfaces play a significant and growing role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed. Particular focus is placed on exploring the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated, workflow-based protocol is proposed for designing an EEG-based real-time Brain Computer Interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset, a personal computer, and a set of external stimulators. Primary signal analysis and processing of the real-time acquired EEG is to be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab, followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in the EEG under emotional stimulation. The extracted hybrid feature set is then used to classify emotional states using artificial intelligence tools such as Artificial Neural Networks. The final system would result in an inexpensive, portable, and more intuitive real-time Brain Computer Interface for controlling prosthetic devices by translating different brain states into operative control signals.
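
The proposed feature/classification stage might look like the following sketch: band-power features from each EEG epoch (via a Welch PSD) feeding a small neural network. The epochs are synthetic stand-ins for Emotiv recordings, and the 128 Hz sampling rate and band limits are conventional assumptions.

```python
# Sketch of the feature/classifier stage: band-power features per EEG epoch
# (via Welch PSD) feeding a small neural network. Data are synthetic.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128  # assumed sampling rate (Hz); Emotiv headsets sample around 128 Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    f, pxx = welch(epoch, fs=FS, nperseg=FS)
    return [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS.values()]

rng = np.random.default_rng(3)
t = np.arange(0, 2, 1 / FS)

def epoch(dominant_hz):
    return np.sin(2 * np.pi * dominant_hz * t) + rng.normal(0, 0.5, t.size)

# Two synthetic "emotional states": alpha-dominant (10 Hz) vs beta-dominant (20 Hz).
X = [band_powers(epoch(10)) for _ in range(40)] + \
    [band_powers(epoch(20)) for _ in range(40)]
y = [0] * 40 + [1] * 40

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", ann.score(X, y))
```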

Keywords: Brain Computer Interface (BCI), Electroencephalogram (EEG), EEGLab, BCILab, Emotiv, Emotions, Interval features, Spectral features, Artificial Neural Network, Control applications.

37 “Post-Industrial” Journalism as a Creative Industry

Authors: Lynette Sheridan Burns, Benjamin J. Matthews

Abstract:

The context of post-industrial journalism is one in which the material circumstances of mechanical publication have been displaced by digital technologies, increasing the distance between the orthodoxy of the newsroom and the culture of journalistic writing. Content is, with growing frequency, created for delivery via the internet, publication on web-based ‘platforms’ and consumption on screen media. In this environment, the question is not ‘who is a journalist?’ but ‘what is journalism?’ today. The changes bring into sharp relief new distinctions between journalistic work and journalistic labor, providing a key insight into the current transition between the industrial journalism of the 20th century and the post-industrial journalism of the present. In the 20th century, the work of journalists and journalistic labor went hand-in-hand, as most journalists were employees of news organizations, whilst in the 21st century evidence of a decoupling of ‘acts of journalism’ (work) and journalistic employment (labor) is beginning to appear. This decoupling of the work and labor that underpin journalism practice is far-reaching in its implications, not least for institutional structures. Under these conditions we are witnessing the emergence of expanded ‘entrepreneurial’ journalism, based on smaller, more independent and agile - if less stable - enterprise constructs that are a feature of the creative industries. Entrepreneurial journalism is realized in a range of organizational forms, from social enterprises through profit-driven start-ups to hybrids of the two. In all instances, however, the primary motif of the organization is an ideological definition of journalism. An example is the Scoop Foundation for Public Interest Journalism in New Zealand, which owns and operates Scoop Publishing Limited, a not-for-profit company and social enterprise that publishes an independent news site claiming over 500,000 monthly users. Our paper demonstrates that this journalistic work meets the ideological definition of journalism, conducted within the creative industries using an innovative organizational structure that offers a new, viable post-industrial future for journalism.

Keywords: Creative industries, digital communication, journalism, post-industrial.

36 Energy Saving Suction Hood

Authors: I.Daut, N. Gomesh, M. Irwanto, Y. M. Irwan

Abstract:

Public awareness of green energy is on the rise, as can be seen from the many products being manufactured or marketed as energy-saving devices, mainly to save consumers from spending more on utility bills. Such schemes are popular nowadays, and many home appliances are turned into energy-saving gadgets that attract the attention of consumers. Knowing the public's demands and purchasing patterns for home appliances, the idea of an "energy saving suction hood (ESSH)" is proposed. The ESSH can be used in many places that require smoke ventilation, or even to reduce the room temperature as many conventional suction hoods (CSH) do, but this device works automatically through sensors that detect the smoke/temperature and automatically spin the exhaust fan. As the fan turns, its mechanical rotation drives an AC generator coupled to it, which charges a battery. The innovations of this product are, first, that it does not rely on the utility supply, as it is also hooked up to a solar panel which charges the battery; second, that it generates energy as the exhaust fan mechanically rotates; and third, that an energy loop-back feature is introduced, which supplies the ventilator fan. Another major innovation is the interfacing of this device with an in-house generator, whose stator and rotor are properly designed to reduce losses. A comparison between the ESSH and the CSH shows that the ESSH saves 172.8 kWh/year of the utility supply used by the CSH. This amount of energy saves RM 3.14 from the monthly utility bill, a total of RM 37.67 per year. In fact, this product can generate 175 W of power from the generator (75 W) and the solar panel (100 W), which can be used either to supply other household appliances or to loop back to supply the fan motor. The innovation of this system is essential for the future production of other equipment using the loop-back power method, turning most equipment into standalone systems.
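
A quick consistency check of the quoted figures; the electricity tariff below is implied by the numbers, not stated in the paper.

```python
# Quick consistency check of the figures quoted above. The implied electricity
# tariff is derived, not stated in the paper.
annual_saving_kwh = 172.8            # utility energy the CSH would have used
monthly_saving_rm = 3.14
annual_saving_rm = 37.67

print("12 x monthly saving =", round(12 * monthly_saving_rm, 2))   # ~37.68 RM/yr
print("implied tariff =", round(annual_saving_rm / annual_saving_kwh, 3), "RM/kWh")
print("generation capacity =", 75 + 100, "W (generator + solar panel)")
```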

Keywords: Energy saving suction hood (ESSH), conventional suction hoods (CSH), energy, and power.

35 Production Process for Diesel Fuel Components Polyoxymethylene Dimethyl Ethers from Methanol and Formaldehyde Solution

Authors: Xiangjun Li, Huaiyuan Tian, Wujie Zhang, Dianhua Liu

Abstract:

Polyoxymethylene dimethyl ethers (PODEn) as a clean diesel additive can improve the combustion efficiency and quality of diesel fuel and alleviate the problem of atmospheric pollution. Among the synthetic routes, PODE production from methanol and formaldehyde is regarded as the most economical and promising. However, the methanol used for synthesizing PODE produces water, which causes the loss of the catalyst's active centers and the hydrolysis of PODEn in the production process. A macroporous, strongly acidic cation exchange resin catalyst was prepared, which has comparative advantages over other common solid acid catalysts in terms of stability and catalytic efficiency for synthesizing PODE. Catalytic reactions were carried out at 353 K, 1 MPa, and 3 mL·gcat-1·h-1 in a fixed bed reactor. Methanol conversion and PODE3-6 selectivity reached 49.91% and 23.43%, respectively. Catalyst lifetime evaluation showed that the resin catalyst retained its catalytic activity for 20 days without significant change, and that the catalytic activity of a completely deactivated resin catalyst can essentially be returned to its previous level by simple acid regeneration. The acid exchange capacities of the original and deactivated catalysts were 2.5191 and 0.0979 mmol·g-1, respectively, while the regenerated catalyst reached 2.0430 mmol·g-1, indicating that the main reason for resin catalyst deactivation is that the Brønsted acid sites of the original resin catalyst were temporarily replaced by non-hydrogen-ion cations. A separation process consisting of extraction and distillation of the PODE3-6 product was designed for the separation of water and unreacted formaldehyde from the reactive mixture and for the purification of PODE3-6, respectively. The concentration of PODE3-6 in the final product can reach up to 97%. These results indicate that the scale-up production of PODE3-6 from methanol and formaldehyde solution is feasible.

Keywords: Inactivation, polyoxymethylene dimethyl ethers, separation process, sulfonic cation exchange resin.

34 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies are becoming continuously more diversified over the years. The increasing use of robots for various applications such as assembly, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur while machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator intended to find optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme while controlling the position of its joints separately. Each joint is actuated through a servomotor for which the transfer function has been computed in order to tune the corresponding controller. The output results show the evolution of the cutting forces with and without a deformable robot structure, together with the tracking errors of the end-effector; illustrations of the resulting machined surfaces are also presented. The consideration of link flexibility has highlighted an increase in the magnitude of the cutting forces. This proof of concept will aim to enrich the database of results in robotic machining for potential improvements in production.
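
The inverse kinematics step for a 3-DOF planar arm has a standard geometric solution, sketched below with assumed link lengths; this is generic textbook IK, not the DyStaMill/EasyDyn implementation.

```python
# Standard geometric inverse kinematics for a 3-DOF planar arm: convert a desired
# end-effector pose (x, y, phi) into joint commands. Link lengths are assumed.
import numpy as np

L1, L2, L3 = 0.5, 0.4, 0.1   # link lengths in meters (hypothetical)

def ik_planar_3dof(x, y, phi, elbow=+1):
    # Wrist position: step back from the tool tip along the tool orientation.
    xw, yw = x - L3 * np.cos(phi), y - L3 * np.sin(phi)
    c2 = (xw**2 + yw**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = elbow * np.arccos(c2)           # elbow-up or elbow-down branch
    q1 = np.arctan2(yw, xw) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    q3 = phi - q1 - q2                   # remaining rotation from the last joint
    return q1, q2, q3

q = ik_planar_3dof(0.6, 0.3, np.deg2rad(30))
print("joint angles (rad):", [round(v, 4) for v in q])
```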

Keywords: Control, machining, multibody, robotic, simulation.

33 Financing - Scheduling Optimization for Construction Projects by using Genetic Algorithms

Authors: Hesham Abdel-Khalek, Sherif M. Hafez, Abdel-Hamid M. el-Lakany, Yasser Abuel-Magd

Abstract:

Investment in a constructed facility represents a cost in the short term that returns benefits only over the long-term use of the facility. Thus, the costs occur earlier than the benefits, and the owners of facilities must obtain the capital resources to finance the costs of construction. A project cannot proceed without adequate financing, and the cost of providing adequate financing can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern to the other organizations involved in a project, such as the general contractor and material suppliers: unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance: if numerous projects are considered and financed together, then the net cash flow requirements constitute the corporate financing problem for capital investment. Whether project finance is performed at the project or at the corporate level does not alter the basic financing problem. In this paper, we first consider facility financing from the owner's perspective, with due consideration for its interaction with the other organizations involved in a project. Later, we discuss the problems of construction financing, which are crucial to the profitability and solvency of construction contractors. The objective of this paper is to present the steps used to determine the best combination for minimum project financing. The proposed model considers financing, schedule, and maximum net area, and is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA). This model is intended to determine more steps (maximum net area) for any project with a subproject. An illustrative example demonstrates the features of this technique, and model verification and testing are taken into consideration.
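
The financing cost that such a model trades off against the schedule arises from the cash-flow gap between outlays and payments; a minimal sketch with hypothetical figures follows.

```python
# Minimal cash-flow sketch of the financing problem described above: costs come
# before payments, the shortfall is financed by an overdraft, and interest
# accrues on the running negative balance. All figures are hypothetical.
periods_cost    = [100, 150, 120, 80]    # contractor outlay per period
periods_payment = [0, 90, 140, 260]      # owner payments per period
rate = 0.02                              # overdraft interest per period (assumed)

balance, financing_cost = 0.0, 0.0
for cost, pay in zip(periods_cost, periods_payment):
    balance += pay - cost
    if balance < 0:                      # a negative balance must be financed
        interest = -balance * rate
        financing_cost += interest
        balance -= interest
    print(f"balance: {balance:8.2f}")
print("total financing cost:", round(financing_cost, 2))
```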

Keywords: Project Management, Large-scale Construction Projects, Cash flow, Interest, Investment, Loan, Optimization, Scheduling, Financing, Genetic Algorithms.

32 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera, and a dedicated set of software algorithms encompassing interferometry, tomography, and photogrammetry. The MIMO radar sensor proposed in this work provides extremely high sensitivity to displacements, making the system able to react to deformations as small as tens of microns on time scales spanning from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped onto the azimuth-range directions with notable resolution in both dimensions and an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit senses the observed scene with remarkably short refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach provides the user with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to deliver reliable displacement and vibration measurements.
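
The repeat-pass step rests on the standard interferometric relation between phase difference and line-of-sight displacement, d = -λΔφ/(4π) for a two-way radar path. The snippet below is a minimal sketch of that conversion only; the sign convention and the omission of the atmospheric correction stage are simplifying assumptions, not the authors' full pipeline.

    import numpy as np

    def los_displacement(phase_t0, phase_t1, wavelength):
        """Convert the interferometric phase difference between two
        acquisitions into line-of-sight displacement (same units as
        wavelength). The difference is wrapped to (-pi, pi]."""
        dphi = np.angle(np.exp(1j * (phase_t1 - phase_t0)))
        # Two-way path: a displacement d shifts the phase by -4*pi*d/lambda.
        return -wavelength / (4.0 * np.pi) * dphi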

Keywords: Interferometry, MIMO RADAR, SAR, tomography.

31 Production and Purification of Monosaccharides by Hydrolysis of Sugar Cane Bagasse in an Ionic Liquid Medium

Authors: T. R. Bandara, H. Jaelani, G. J. Griffin

Abstract:

The conversion of lignocellulosic waste materials, such as sugar cane bagasse, to biofuels such as ethanol has attracted significant interest as a potential route toward fully renewable transport fuel supplies. However, the refractory nature of the cellulosic structure of lignocellulosic materials has impeded progress in developing an economic process whereby the cellulose component may be effectively broken down to glucose monosaccharides and then purified for downstream fermentation. Ionic liquid (IL) treatment of lignocellulosic biomass has been shown to disrupt the crystalline structure of cellulose, potentially enabling the cellulose to be more readily hydrolysed to monosaccharides. Furthermore, conventional hydrolysis of lignocellulosic materials yields byproducts that inhibit efficient fermentation of the monosaccharides; however, selective extraction of monosaccharides from an aqueous/IL phase into an organic phase using a combination of boronic acids and quaternary amines has shown promise as a purification process. Hydrolysis of sugar cane bagasse immersed in an aqueous solution with IL (1-ethyl-3-methylimidazolium acetate) was conducted at different pH values and at temperatures below 100 °C. It was found that the use of a high concentration of hydrochloric acid to acidify the solution inhibited the hydrolysis of bagasse. At high pH (i.e. basic conditions, using a sodium hydroxide catalyst), yields of total reducing sugars (TRS) were reduced owing to the rapid degradation of the sugars formed. For purification trials, a supported liquid membrane (SLM) apparatus was constructed, whereby the sugars in a synthetic aqueous IL solution containing xylose and glucose were transported across a membrane impregnated with phenyl boronic acid/Aliquat 336 into an aqueous phase. The transport rate of xylose was generally higher than that of glucose, indicating that an SLM scheme may be useful not only for purifying sugars from undesirable toxic compounds but also for fractionating sugars to improve fermentation efficiency.

Keywords: Biomass, bagasse, hydrolysis, monosaccharide, supported liquid membrane, purification.

30 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: S. Behnam Malekzadeh, I. Kerr, T. Kaempffer, T. Harper, A. Watson

Abstract:

The Site C hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential for monitoring the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and BPs at new locations quickly and accurately. To develop the CBR model, a database was built from 64 pressure sensors already installed on the key bedding planes BP25, BP28, and BP31 on the Right Bank, including BP elevations and coordinates. The 13 most recent cases (20%) were selected to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous and recent cases for predicting the depth of significant BPs. The average difference between actual and predicted BP elevations for these BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of the actual BP elevations, and 100% of predictions for new cases were within ±99 cm. Eventually, the actual results will be fed back into the database to improve BPEP so that it performs as a learning machine, predicting more accurate BP elevations for future sensor installations.
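
The core CBR step, predicting a BP elevation at a new location from the most similar installed sensors, can be sketched as a nearest-neighbour estimate; the choice of k and the inverse-distance weighting below are assumptions for illustration, since the abstract does not specify how similar cases are aggregated.

    import numpy as np

    def predict_bp_elevation(new_xy, case_xy, case_elev, k=3):
        """Estimate a bedding-plane elevation at new_xy from the k most
        similar past cases, where similarity is horizontal distance.
        case_xy: (n, 2) array of case coordinates; case_elev: (n,) array."""
        d = np.linalg.norm(case_xy - np.asarray(new_xy), axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-9)  # inverse-distance weights
        return float(np.sum(w * case_elev[nearest]) / np.sum(w))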

Keywords: Case-Based Reasoning, CBR, geological feature, geology, piezometer, pressure sensor, core logging, dam construction.

29 Lead-Free Inorganic Cesium Tin-Germanium Triiodide Perovskites for Photovoltaic Application

Authors: Seyedeh Mozhgan Seyed-Talebi, Javad Beheshtian

Abstract:

The toxicity of lead associated with the lifecycle of perovskite solar cells (PSCs) is a serious concern that may prove to be a major hurdle on the path toward their commercialization. Currently proposed lead-free PSCs incorporating low-toxicity cations such as Ag(I), Bi(III), Sb(III), Ti(IV), Ge(II), and Sn(II) are still plagued by the critical issues of low efficiency and poor stability, mainly because of their poor chemical stability. In the present research, the use of all-inorganic CsSnGeI3-based materials offers the advantages of enhanced resistance of the device to degradation, reduced cell cost, and minimized carrier recombination. The presence of the inorganic halide perovskite improves the photovoltaic parameters of PSCs via improved surface coverage and stability. The inverted structure of the simulated devices, modeled with the one-dimensional solar cell capacitance simulator (SCAPS) version 3.3.08, consists of TCO/HTL/Perovskite/ETL/Au contact layers. PEDOT:PSS, PCBM, and CsSnGeI3 were used as the hole transporting layer (HTL), electron transporting layer (ETL), and perovskite absorber layer, respectively, in the inverted structure for the first time. Holes are injected from the highly stable and air-tolerant CsSn0.5Ge0.5I3 perovskite composition into the HTL, and electrons from the perovskite into the ETL. Simulation results revealed a strong dependence of power conversion efficiency (PCE) on the thickness and defect density of the perovskite layer. The effect of an increase in operating temperature from 300 K to 400 K on the performance of CsSnGeI3-based perovskite devices is also investigated. Comparison between the simulated CsSnGeI3-based PSCs and similar experimentally tested devices with spiro-OMeTAD as the HTL showed that carrier extraction at the interfaces of the perovskite absorber depends on the energy level mismatches between the perovskite and the HTL/ETL. We believe that the optimization results reported here represent a critical avenue for fabricating stable, low-cost, efficient, and eco-friendly all-inorganic Cs-Sn-Ge based lead-free perovskite devices.
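
For reference, the reported power conversion efficiency follows from the simulated J-V parameters via the standard relation PCE = Voc·Jsc·FF/Pin; the helper below only restates that textbook formula and is not taken from the paper itself.

    def pce(v_oc, j_sc, ff, p_in=100.0):
        """Power conversion efficiency in percent.
        v_oc in V, j_sc in mA/cm^2, ff as a fraction,
        p_in in mW/cm^2 (100 = AM1.5G standard illumination)."""
        return 100.0 * v_oc * j_sc * ff / p_in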

Keywords: Hole transporting layer, lead-free, perovskite solar cell, SCAPS-1D, Sn-Ge based material.
