Search results for: Complementary common gate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1506

306 Finite Element Study on Corono-Radicular Restored Premolars

Authors: Sandu L., Topală F., Porojan S.

Abstract:

Restoration of endodontically treated teeth is a common problem in dentistry, related to the fractures occurring in such teeth and to the concentration of forces. Little information has been available regarding the effect of variations in basic preparation guidelines on stress distribution. To date, there is still no agreement in the literature about which material or technique can optimally restore endodontically treated teeth. The aim of the present study was to evaluate the influence of core height and restoration materials on corono-radicular restored upper first premolars. The first step of the study was to create 3D models in order to analyze the teeth, dowel and core restorations, and overlying full ceramic crowns. The FEM model was obtained by importing the solid model into ANSYS finite element analysis software. An occlusal load of 100 N was applied, and the stresses occurring in the restorations and tooth structures were calculated. Numerical simulations provide a biomechanical explanation for stress distribution in prosthetically restored teeth. Within the limitations of the present study, it was found that core height has no important influence on the stress generated in corono-radicular restored premolars. It can be concluded that the cervical regions of the teeth and restorations were subjected to the highest stress concentrations.

Keywords: 3D models, finite element analysis, dowel and core restoration, full ceramic crown, premolars, structural simulations.

305 Shear Behaviour of RC Deep Beams with Openings Strengthened with Carbon Fiber Reinforced Polymer

Authors: Mannal Tariq

Abstract:

The construction industry is progressing at a high pace, and the global trend is shifting towards high-rise buildings. Deep beams, which have a small span-to-depth ratio, are one of the most common elements in modern construction and are mostly used as transfer girders. This experimental study consists of 16 reinforced concrete (RC) deep beams. These beams were divided into two groups, A and B. Groups A and B consist of eight beams each, having 381 mm (15 in) and 457 mm (18 in) depth respectively. Each group was further subdivided into four subgroups, each consisting of two identical beams. Each subgroup comprised a solid/control beam (without opening) and beams with an opening above the neutral axis (NA), at the NA, and below the NA. Except for the control beams, all beams with openings were strengthened with carbon fibre reinforced polymer (CFRP) vertical strips. These eight subgroups differ from each other based on depth and location of openings. For testing, all beams were loaded with two symmetrical point loads. All beams were designed based on the strut and tie model concept. The outcome of the experimental investigation highlights the differences in the shear behaviour of deep beams based on depth and on the location of the circular openings. The 457 mm (18 in) deep beams with openings above the NA showed the highest strength and the 381 mm (15 in) deep beams with openings below the NA showed the least strength. CFRP sheets played a vital role in increasing the shear capacity of the beams.

Keywords: CFRP, deep beams, openings in deep beams, strut and tie model, shear behaviour.

304 Effect of Rollers Differential Speed and Paddy Moisture Content on Performance of Rubber Roll Husker

Authors: S. Firouzi, M.R. Alizadeh, S. Minaei

Abstract:

A study was carried out at the Rice Research Institute of Iran (RRII) to investigate the effect of the rollers differential peripheral speed of a commercial rubber roll husker and of paddy moisture content on the husking index and percentage of broken rice. The experiment was conducted at six levels of rollers differential speed (1.5, 2.2, 2.9, 3.6, 4.3 and 5 m/s) and three levels of paddy moisture content (8-9, 10-11 and 12-13% w.b.). Two common paddy varieties, namely Binam and Khazar, were selected for this study. Results revealed that rollers differential speed and moisture content significantly (P<0.01) affected the percentage of broken brown rice and the paddy husking index. Average broken kernel percentage increased from 13 to 14.61% while the husking index decreased from 71.64 to 61.81% as paddy moisture content increased from 8-9 to 12-13%. It was observed that the amount of broken rice decreased from 18.83 to 9.97% when rollers differential speed varied from 1.5 to 5 m/s, while the husking index initially increased and then started to decrease. The mean value of the husking index for the Khazar variety (64.71%) was significantly lower than that for the Binam variety (69.2%). It was concluded that a rollers differential speed of 2.9 m/s and a moisture content of 8-9% was the most appropriate combination for paddy husking of the Binam and Khazar varieties in a rubber roll husker.

Keywords: husking index, moisture content, paddy, rubber roll husker.

303 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy, called Enhanced Latest Access Largest Weight (ELALW), is proposed. This strategy is an enhanced version of the Latest Access Largest Weight strategy. However, replication should be used wisely because the storage capacity of each Grid site is limited. Thus, it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection picks the best replica location from among the many replicas based on response time, which can be determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue and the distance between nodes. Simulation results utilizing OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage and storage resource usage.
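As an illustration of the response-time scoring described above, the following Python sketch ranks candidate replica sites by an estimated response time combining transfer time, storage latency, queue wait and node distance. The field names, weights and per-hop delay are illustrative assumptions, not the paper's exact ELALW formulation.

```python
# Hypothetical sketch of response-time-based replica selection, in the spirit
# of ELALW; the attribute names and the per-hop delay are illustrative only.
from dataclasses import dataclass

@dataclass
class ReplicaSite:
    name: str
    file_size_mb: float        # size of the requested replica
    bandwidth_mbps: float      # available bandwidth to the requesting site
    storage_latency_s: float   # storage access latency
    queued_requests: int       # replica requests waiting in the storage queue
    avg_service_time_s: float  # mean time to serve one queued request
    hops: int                  # network distance to the requesting node
    per_hop_delay_s: float = 0.01

def estimated_response_time(site: ReplicaSite) -> float:
    """Estimate total response time for fetching the replica from this site."""
    transfer = site.file_size_mb * 8.0 / site.bandwidth_mbps   # seconds
    queue_wait = site.queued_requests * site.avg_service_time_s
    distance = site.hops * site.per_hop_delay_s
    return transfer + site.storage_latency_s + queue_wait + distance

def select_best_replica(sites: list[ReplicaSite]) -> ReplicaSite:
    """Pick the replica location with the smallest estimated response time."""
    return min(sites, key=estimated_response_time)

if __name__ == "__main__":
    candidates = [
        ReplicaSite("siteA", 500, 100, 0.02, 3, 0.5, 4),
        ReplicaSite("siteB", 500, 40, 0.01, 0, 0.5, 2),
    ]
    print(select_best_replica(candidates).name)
```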

Keywords: Data grid, data replication, simulation, replica selection, replica placement.

302 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (D) workspace. However, this method is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space comes from the degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed via the clearance function of simulation software, where the minimum distances between surfaces of bodies are simultaneously measured. A double-turret system is considered in this study. The 4-D configuration space of the double-turret system is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud. The disparity between the two forms steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was realized as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. Then, the accuracy of the configuration space was proved using the obtained paths on the simulation model of the double-turret system.
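A compact sketch of the point-cloud variant of the idea: a 2-D slice of a configuration space is built from pairwise clearance checks between two rotating point clouds and then searched with A*. The geometry, clearance threshold and grid resolution are placeholders, not the authors' 4-D double-turret simulation setup.

```python
# Illustrative sketch: build a 2-D configuration-space slice for two rotating
# bodies from point clouds, then plan with A*. Geometry and thresholds are
# invented for demonstration; the real study uses a 4-D double-turret model.
import heapq
import numpy as np

def rotate(points: np.ndarray, angle: float) -> np.ndarray:
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

def in_collision(body_a, body_b, angle_a, angle_b, clearance=0.05) -> bool:
    """Collision if the minimum point-to-point distance drops below clearance."""
    pa = rotate(body_a, angle_a)
    pb = rotate(body_b, angle_b) + np.array([1.0, 0.0])   # second body offset
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return d.min() < clearance

def build_cspace(body_a, body_b, n=72):
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    grid = np.zeros((n, n), dtype=bool)   # True = blocked configuration
    for i, ta in enumerate(angles):
        for j, tb in enumerate(angles):
            grid[i, j] = in_collision(body_a, body_b, ta, tb)
    return grid

def astar(grid, start, goal):
    """A* on the toroidal angle grid with a wrap-around Manhattan heuristic."""
    n = grid.shape[0]
    h = lambda p: sum(min(abs(a - b), n - abs(a - b)) for a, b in zip(p, goal))
    open_set = [(h(start), 0, start, None)]
    came, g = {}, {start: 0}
    while open_set:
        _, cost, node, parent = heapq.heappop(open_set)
        if node in came:
            continue
        came[node] = parent
        if node == goal:
            path = [node]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = ((node[0] + di) % n, (node[1] + dj) % n)
            if grid[nb] or cost + 1 >= g.get(nb, float("inf")):
                continue
            g[nb] = cost + 1
            heapq.heappush(open_set, (g[nb] + h(nb), g[nb], nb, node))
    return None
```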

Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.

301 Investigation of Seismic T-Resisting Frame with Shear and Flexural Yield of Horizontal Plate Girders

Authors: Helia Barzegar Sedigh, Farzaneh Hamedi, Payam Ashtari

Abstract:

There are some limitations in common structural systems, such as providing appropriate lateral stiffness, adequate ductility, and architectural openings at the same time. Consequently, the concept of the T-Resisting Frame (TRF) has been introduced to overcome these deficiencies. The configuration of the TRF in this study is a Vertical Plate Girder (VPG) placed within the span, and two Horizontal Plate Girders (HPGs) connect the VPG to side columns at each story level by the use of rigid connections. System performance is improved by utilizing rigid connections at the base joints of the side columns. Shear yield of the HPGs causes energy dissipation in the TRF; therefore, high plastic deformation in the webs of the HPGs and VPG affects the ductility of the system. Moreover, in order to prevent shear buckling in the webs of the TRF's members, appropriate criteria for the placement of web stiffeners are applied. In this paper, an experimental study is conducted by applying cyclic loading, and finite element models and numerical studies such as the pushover method are used to assess the shear and flexural yielding of the HPGs. As a result, the seismic parameters indicate adequate lateral stiffness and a high ductility factor of 6.73, and shear yielding of the HPGs was achieved, demonstrating the better performance of the TRF.

Keywords: Experimental study, finite element model, flexural and shear yielding, T-resisting frame.

300 Automatic Detection of Breast Tumors in Sonoelastographic Images Using DWT

Authors: A. Sindhuja, V. Sadasivam

Abstract:

Breast cancer is the most common malignancy in women and the second leading cause of death for women all over the world. The earlier the cancer is detected, the better the treatment. The diagnosis and treatment of the cancer rely on segmentation of sonoelastographic images. Texture features have not previously been considered for sonoelastographic segmentation. Sonoelastographic images of 15 patients containing both benign and malignant tumors are considered for experimentation. The images are enhanced to remove noise in order to improve contrast and emphasize the tumor boundary. Each image is then decomposed into sub-bands using single-level Daubechies wavelets varying from one to six coefficients. Grey Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features are extracted from each sub-band and then selected by ranking using the Sequential Floating Forward Selection (SFFS) technique. The resultant images undergo K-Means clustering followed by a few post-processing steps to remove false spots. The tumor boundary is detected from the segmented image. It is proposed that LBP features from the vertical coefficients of the Daubechies wavelet with two coefficients are best suited for segmentation of sonoelastographic breast images among the wavelet members using one to six coefficients for decomposition. The results are also validated with the help of an expert radiologist. The proposed work can be used in the further diagnostic process to decide whether the segmented tumor is benign or malignant.
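A skeleton of the described pipeline is sketched below in Python, assuming recent versions of PyWavelets, scikit-image and scikit-learn: single-level Daubechies decomposition, block-wise GLCM and LBP features from the vertical sub-band, and K-Means clustering. Block size, wavelet name and feature choices are illustrative, not the paper's exact parameters (SFFS feature selection and post-processing are omitted).

```python
# Sketch of the described pipeline on a grayscale sonoelastographic image:
# single-level Daubechies DWT, GLCM and LBP texture features per block,
# then K-Means clustering. Parameters are illustrative, not the paper's.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.cluster import KMeans

def block_features(band: np.ndarray, block: int = 16):
    band = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-9))
    h, w = band.shape[0] // block, band.shape[1] // block
    feats = []
    for i in range(h):
        for j in range(w):
            patch = band[i * block:(i + 1) * block, j * block:(j + 1) * block]
            glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            feats.append(np.concatenate((
                [graycoprops(glcm, "contrast")[0, 0],
                 graycoprops(glcm, "homogeneity")[0, 0],
                 graycoprops(glcm, "energy")[0, 0]],
                hist)))
    return np.array(feats), (h, w)

def segment(image: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    # Single-level decomposition; the paper reports the vertical (cV) sub-band
    # of the two-coefficient Daubechies wavelet works best ("db2" assumed here).
    _, (_, cV, _) = pywt.dwt2(image.astype(float), "db2")
    feats, shape = block_features(cV)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels.reshape(shape)
```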

Keywords: Breast Cancer, Segmentation, Sonoelastography, Tumor Detection.

299 Mobile Robot Control by Von Neumann Computer

Authors: E. V. Larkin, T. A. Akimenko, A. V. Bogomolov, A. N. Privalov

Abstract:

The digital control system of a mobile robot (MR) is considered. It is shown that sequential interpretation of control algorithm operators, unfolding in physical time, leads to time delays between inputting data from sensors and outputting data to actuators. Another destabilizing control factor is the presence of backlash in the joints of an actuator with an executive unit. A complex model of the control system, which takes into account the dynamics of the MR, the dynamics of the digital controller and backlash in the actuators, is worked out. The digital controller model is divided into two parts: the first part describes the control law embedded in the controller in the form of a control program that realizes a polling procedure when organizing transactions with sensors and actuators. The second part of the model describes the time delays that occur in the Von Neumann-type controller when processing data. To estimate time intervals, the algorithm is represented in the form of an ergodic semi-Markov process. For an ergodic semi-Markov process of common form, a method is proposed for estimating the wandering time from one arbitrary state to another arbitrary state. An example shows how the backlash and time delays affect the quality characteristics of the MR control system functioning.
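As a minimal numerical illustration of the wandering-time idea, the sketch below computes the expected first-passage time between states of an ergodic semi-Markov chain by solving the standard linear system; the transition matrix and mean sojourn times are invented, and the paper's own estimation method may differ.

```python
# Sketch: expected first-passage ("wandering") time from every state to a
# target state j in a semi-Markov process, via the standard linear system
#   t_i = m_i + sum_{k != j} p_ik * t_k,
# where m_i is the mean sojourn time in state i. Data below are illustrative.
import numpy as np

def wandering_times(P: np.ndarray, mean_sojourn: np.ndarray, target: int) -> np.ndarray:
    """Return the expected time to first reach `target` from each state."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    Q = P[np.ix_(idx, idx)]                 # transitions among non-target states
    m = mean_sojourn[idx]
    t_sub = np.linalg.solve(np.eye(n - 1) - Q, m)
    t = np.zeros(n)
    t[idx] = t_sub                          # t[target] = 0 by convention
    return t

if __name__ == "__main__":
    P = np.array([[0.0, 0.7, 0.3],
                  [0.4, 0.0, 0.6],
                  [0.5, 0.5, 0.0]])
    sojourn = np.array([2.0, 1.0, 3.0])     # mean time spent in each state, e.g. ms
    print(wandering_times(P, sojourn, target=2))
```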

Keywords: Mobile robot, backlash, control algorithm, Von Neumann controller, semi-Markov process, time delay.

298 E-Commerce Adoption and Implementation in Automobile Industry: A Case Study

Authors: Amitrajit Sarkar

Abstract:

The use of Electronic Commerce (EC) technologies enables Small and Medium Enterprises (SMEs) to improve their efficiency and competitive position. Much of the literature proposes an extensive set of benefits for organizations that choose to adopt and implement E-Commerce systems. Factors of Business-to-Business (B2B) E-Commerce adoption and implementation have been extensively investigated. Despite the enormous attention given to encouraging SMEs to adopt and implement E-Commerce, little research has been carried out in identifying the factors of Business-to-Consumer E-Commerce adoption and implementation for SMEs. To conduct the study, the Tornatsky and Fleischer model was adopted and tested in four SMEs located in Christchurch, New Zealand. This paper explores the factors that impact the decision and method of adoption and implementation of E-Commerce systems in the automobile industry. The automobile industry was chosen because the product it deals with, i.e. cars, is not a common commodity to be sold online; despite this fact, E-Commerce penetration in the automobile industry is high. The factors that promote adoption and implementation of E-Commerce technologies are discussed, together with the barriers. This study will help SME owners to effectively handle the adoption and implementation process and will also improve the chance of successful E-Commerce implementation. The implications of the findings for managers, consultants, and government organizations engaged in promoting E-Commerce adoption and implementation in small businesses, and directions for future research, are discussed.

Keywords: E-Commerce in SMEs, E-Commerce in automobile industry, B2C E-Commerce, E-Commerce adoption and Implementation, E-Commerce Website Implementation, E-Commerce Models.

297 DTC-SVM Scheme for Induction Motors Fed with a Three-level Inverter

Authors: Ehsan Hassankhan, Davood A. Khaburi

Abstract:

Direct Torque Control (DTC) is a control technique used in AC drive systems to obtain high-performance torque control. The conventional DTC drive contains a pair of hysteresis comparators and consequently suffers from high torque ripple and variable switching frequency. The most common solution to these problems is to use space vector modulation, in which the applied voltage vector depends on the reference torque and flux. In this paper, the space vector modulation technique (SVPWM) is applied to two-level inverter control in the proposed DTC-based induction motor drive system, thereby dramatically reducing the torque ripple. A controller based on space vector modulation is then designed and applied to the control of an induction motor (IM) with a three-level inverter. This type of inverter has several advantages over the standard two-level VSI, such as a greater number of levels in the output voltage waveforms, lower dV/dt, less harmonic distortion in the voltage and current waveforms, and lower switching frequencies. This paper proposes a general SVPWM algorithm for three-level inverters based on the standard two-level SVPWM. The proposed scheme is described clearly and simulation results are reported to demonstrate its effectiveness. The entire control scheme is implemented in Matlab/Simulink.
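For reference, the conventional two-level SVPWM dwell-time calculation that the proposed three-level algorithm builds on is sketched below; the values are illustrative and the sketch omits the switching-sequence generation and the three-level extension described in the paper.

```python
# Sketch of the standard two-level SVPWM sector and dwell-time calculation
# underlying the proposed three-level scheme; numbers are illustrative only.
import math

def svpwm_dwell_times(v_ref: float, theta: float, v_dc: float, t_s: float):
    """Return (sector, T1, T2, T0) for a reference vector of magnitude v_ref
    at electrical angle theta (rad), DC-link voltage v_dc, switching period t_s."""
    theta = theta % (2 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1          # sectors 1..6
    alpha = theta - (sector - 1) * math.pi / 3        # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                   # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - alpha)      # dwell on the first active vector
    t2 = t_s * m * math.sin(alpha)                    # dwell on the second active vector
    t0 = t_s - t1 - t2                                # remaining zero-vector time
    return sector, t1, t2, t0

if __name__ == "__main__":
    print(svpwm_dwell_times(v_ref=300.0, theta=math.radians(50), v_dc=600.0, t_s=1e-4))
```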

Keywords: Direct torque control, space vector pulse width modulation (SVPWM), neutral point clamped (NPC), two-level inverter.

296 Using Ferry Access Points to Improve the Performance of Message Ferrying in Delay-Tolerant Networks

Authors: Farzana Yasmeen, Md. Nurul Huda, Md. Enamul Haque, Michihiro Aoki, Shigeki Yamada

Abstract:

Delay-Tolerant Networks (DTNs) are sparse, wireless networks where disconnections are common due to host mobility and low node density. The Message Ferrying (MF) scheme is a mobility-assisted paradigm to improve connectivity in DTN-like networks. A ferry or message ferry is a special node in the network which has a pre-determined route in the deployed area and relays messages between mobile hosts (MHs) which are intermittently connected. Increased contact opportunities among mobile hosts and the ferry improve the performance of the network, both in terms of message delivery ratio and average end-to-end delay. However, due to the inherent mobility of mobile hosts and the pre-determined periodicity of the message ferry, mobile hosts may often miss contact opportunities with a ferry. In this paper, we propose the combination of stationary ferry access points (FAPs) with MF routing to increase contact opportunities between mobile hosts and the MF and consequently improve the performance of the DTN. We also propose several placement models for deploying FAPs on MF routes. We evaluate the performance of the FAP placement models through comprehensive simulation. Our findings show that FAPs do improve the performance of MF-assisted DTNs and that symmetric placement of FAPs outperforms other placement strategies.

Keywords: Service infrastructure, delay-tolerant network, message ferry routing, placement models.

295 A Comparative Analysis of Asymmetric Encryption Schemes on Android Messaging Service

Authors: Mabrouka Algherinai, Fatma Karkouri

Abstract:

Today, the Short Message Service (SMS) is an important means of communication. SMS is not only used in informal environments for communication and transactions, but is also used in formal environments such as institutions, organizations, companies, and the business world as a tool for communication and transactions. Therefore, there is a need to secure the information transmitted through this medium to ensure security of information both in transit and at rest. Encryption has been identified as a means to provide security to SMS messages in transit and at rest, and several past studies have proposed and developed encryption algorithms for SMS and information security. This research aims at comparing the performance of common asymmetric encryption algorithms on SMS security. The research employs three algorithms, namely RSA, McEliece, and RABIN. Several experiments were performed on SMS messages of various sizes on an Android mobile device. The experimental results show that each of the three techniques has different key generation, encryption, and decryption times. The efficiency of an algorithm is determined by the time that it takes for encryption, decryption, and key generation. The best algorithm can be chosen based on the least time required for encryption. The obtained results show the least encryption time when McEliece with size 4096 is used. RABIN with size 4096 takes the most time for encryption and is therefore the least effective algorithm when considering encryption. The research also shows that McEliece with size 2048 has the least time for key generation and hence is the best algorithm as regards key generation. The results also show that RSA with size 1024 is the most preferable algorithm in terms of decryption, as it gives the least time for decryption.
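The kind of timing harness behind such a comparison can be sketched as follows, assuming the Python `cryptography` package; only RSA is shown, since McEliece and Rabin are not provided by that library, and the study itself ran on Android rather than desktop Python.

```python
# Sketch of a timing harness comparing key generation, encryption and
# decryption for an SMS-sized payload. Only RSA is shown (McEliece and Rabin
# are not in the `cryptography` package); absolute times are platform-dependent.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def time_rsa(key_size: int, sms: bytes):
    t0 = time.perf_counter()
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    t_keygen = time.perf_counter() - t0

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    t0 = time.perf_counter()
    ciphertext = private_key.public_key().encrypt(sms, oaep)
    t_encrypt = time.perf_counter() - t0

    t0 = time.perf_counter()
    plaintext = private_key.decrypt(ciphertext, oaep)
    t_decrypt = time.perf_counter() - t0

    assert plaintext == sms
    return t_keygen, t_encrypt, t_decrypt

if __name__ == "__main__":
    message = b"Meet at 10am"            # a short SMS-sized payload
    for bits in (1024, 2048, 4096):
        print(bits, time_rsa(bits, message))
```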

Keywords: SMS, RSA, McEliece, RABIN.

294 Twitter Sentiment Analysis during the Lockdown in New Zealand

Authors: Smah Doeban Almotiri

Abstract:

One of the most common fields of natural language processing (NLP) is sentiment analysis. The feeling conveyed in a text can be successfully mined for various events using sentiment analysis. Twitter is viewed as a reliable data source for sentiment analytics studies, since people used social media to receive and exchange different types of data on a broad scale during the COVID-19 epidemic. The processing of such data may aid in making critical decisions on how to keep the situation under control. The aim of this research is to look at how sentiment differed in a single geographic region during the lockdown at two different times. A total of 1162 tweets related to the COVID-19 pandemic lockdown were analyzed using the keyword hashtags (lockdown, COVID-19); the first sample of tweets was from March 23, 2020, until April 23, 2020, and the second sample, from the following year, was from March 1, 2021, until April 4, 2021. Natural language processing, a form of artificial intelligence, was used in this research to calculate the sentiment value of all of the tweets using the AFINN lexicon sentiment analysis method. The findings revealed that the sentiment at both times during the region's lockdown was positive in the samples of this study, which are specific to the geographical area of New Zealand. This research suggests applying machine learning sentiment methods such as Crystal Feel and extending the sample size by using more tweets over a longer period of time.
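AFINN-style lexicon scoring reduces to summing per-word valence scores and thresholding the total, as in the minimal Python sketch below; the handful of lexicon entries shown are only an illustrative excerpt, since the full AFINN list contains thousands of scored words.

```python
# Minimal AFINN-style lexicon scoring used to label each tweet as positive,
# negative or neutral. The lexicon entries below are an illustrative excerpt.
import re

AFINN_EXCERPT = {
    "good": 3, "great": 3, "safe": 1, "hope": 2, "love": 3,
    "bad": -3, "sick": -2, "fear": -2, "death": -2, "lockdown": 0,
}

def afinn_score(tweet: str, lexicon: dict = AFINN_EXCERPT) -> int:
    words = re.findall(r"[a-z']+", tweet.lower())
    return sum(lexicon.get(w, 0) for w in words)

def label(score: int) -> str:
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

if __name__ == "__main__":
    for t in ["Hope everyone stays safe during lockdown",
              "Sick of this lockdown, living in fear"]:
        s = afinn_score(t)
        print(label(s), s, t)
```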

Keywords: Sentiment analysis, Twitter analysis, lockdown, COVID-19, AFINN, NodeJS.

293 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem; most of them replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values. The distance between two known values is the Mahalanobis distance; when, on the other hand, one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve prevention of chronic diseases such as diabetes and cancer. In order for Wikaya's recommendation system to work, distances between users need to be measured. Since there are missing values in the collected data, there is a need to develop a distance function between incomplete user profiles. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between different objects, when some of them contain missing values, we integrated it within the framework of the k-nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
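One plausible reading of this per-coordinate scheme is sketched below: a one-dimensional Mahalanobis term when both values are known, and an expected squared distance over the coordinate's observed distribution when a value is missing, plugged into a kNN vote. This is an illustration of the idea under those assumptions, not the authors' exact formulation.

```python
# Sketch of a missing-value-aware distance in the spirit described above:
# Mahalanobis-style term when both coordinates are known, expected distance
# over the coordinate's distribution when one is missing. Illustrative only.
import numpy as np

def fit_column_stats(X: np.ndarray):
    """Mean and variance of each coordinate, ignoring NaNs."""
    return np.nanmean(X, axis=0), np.nanvar(X, axis=0) + 1e-9

def distance(x: np.ndarray, y: np.ndarray, mu: np.ndarray, var: np.ndarray) -> float:
    total = 0.0
    for xi, yi, m, v in zip(x, y, mu, var):
        if not np.isnan(xi) and not np.isnan(yi):
            total += (xi - yi) ** 2 / v                    # both known
        elif np.isnan(xi) and np.isnan(yi):
            total += 2.0                                   # both missing: 2*var/var
        else:
            known = yi if np.isnan(xi) else xi
            total += ((known - m) ** 2 + v) / v            # expected squared distance
    return np.sqrt(total)

def knn_predict(X_train, y_train, x_query, k=3):
    mu, var = fit_column_stats(X_train)
    d = np.array([distance(x_query, row, mu, var) for row in X_train])
    votes = y_train[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

if __name__ == "__main__":
    X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 6.0], [5.2, 6.1]])
    y = np.array([0, 0, 1, 1])
    print(knn_predict(X, y, np.array([np.nan, 5.9])))      # expected: class 1
```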

Keywords: Missing values, distance metric, Bhattacharyya distance.

292 Artificial Intelligence: A Comprehensive and Systematic Literature Review of Applications and Comparative Technologies

Authors: Z. M. Najmi

Abstract:

Over the years, the question around Artificial Intelligence has always been one with many answers. Whether by means of use in business and industry or complicated algorithmic programming, management of these technologies has always been the core focus. More recently, technologies have been questioned in industry and society alike as to whether they have improved human-centred design, assisted choices and objectives, and had a hand in systematic processes across the board. With these questions, the answer may lie within AI technologies and the steps needed to remove common human error. Elements such as Machine Learning, Deep Learning, Recommender Systems and Natural Language Processing will all be features to consider moving forward. Our previous interventions with AI applications have resulted in increased productivity, but have also raised concerns for the continuation of traditional human-centred occupations. Emerging technologies such as Augmented Reality and Virtual Reality have all played a part in this during AI's prominent rise. As mentioned, AI has been constantly under the microscope; the benefits and drawbacks may seem endless, but AI is something we must take notice of and adapt into our everyday lives. The aim of this paper is to give an overview of AI and its related technologies. A comprehensive review has been written as a timeline of the developing events and key points in the history of Artificial Intelligence. This research is gathered entirely from secondary sources and academic statements of knowledge, compiled to produce an understanding of the timeline of AI.

Keywords: Artificial Intelligence, Deep Learning, Augmented Reality, Reinforcement Learning, Machine Learning, Supervised Learning.

291 A Structural Constitutive Model for Viscoelastic Rheological Behavior of Human Saphenous Vein Using Experimental Assays

Authors: Rassoli Aisa, Abrishami Movahhed Arezu, Faturaee Nasser, Seddighi Amir Saeed, Shafigh Mohammad

Abstract:

Cardiovascular diseases are one of the most common causes of mortality in developed countries. Coronary artery abnormalities and carotid artery stenosis, also known as silent death, are among these diseases. One of the treatment methods for these diseases is to create a deviatory pathway that conducts blood to the heart through bypass surgery. The saphenous vein is usually used in this surgery to create the deviatory pathway. Unfortunately, a re-operation becomes necessary after some years because the mismatch between the mechanical properties of the graft tissue and/or applied prostheses and those of the host tissue is ignored. The objective of the present study is to clarify the viscoelastic behavior of human saphenous tissue. Stress relaxation tests in the circumferential and longitudinal directions were performed on this vein by applying 20% and 50% strains. Considering the stress relaxation curves obtained from the tests and the coefficients of the standard solid model, it was demonstrated that the saphenous vein has a non-linear viscoelastic behavior. Thereafter, fitting with Fung's quasilinear viscoelastic (QLV) model was performed based on the stress relaxation time curves. Finally, the coefficients of Fung's QLV model, which models the behavior of saphenous tissue very well, are presented.
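As a toy illustration of the standard-solid fitting step mentioned above, the sketch below fits a Zener-type relaxation function to a stress-relaxation curve by least squares; the data are synthetic stand-ins for the measured saphenous-vein curves, and the full Fung QLV fit would replace this reduced relaxation function.

```python
# Sketch: least-squares fit of a standard-linear-solid (Zener) relaxation
#   G(t) = G_inf + G_1 * exp(-t / tau)
# to a stress-relaxation curve. Data below are synthetic, for illustration.
import numpy as np
from scipy.optimize import curve_fit

def sls_relaxation(t, g_inf, g1, tau):
    return g_inf + g1 * np.exp(-t / tau)

# Synthetic relaxation data (kPa) over 100 s, standing in for measured curves.
t = np.linspace(0, 100, 200)
stress = sls_relaxation(t, 40.0, 25.0, 12.0) + np.random.normal(0, 0.5, t.size)

params, _ = curve_fit(sls_relaxation, t, stress, p0=(30.0, 20.0, 10.0))
g_inf, g1, tau = params
print(f"G_inf={g_inf:.1f} kPa, G1={g1:.1f} kPa, tau={tau:.1f} s")
```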

Keywords: Fung’s quasilinear viscoelastic (QLV) model, strain rate, stress relaxation test, uniaxial tensile test, viscoelastic behavior.

290 Sensitivity and Reliability Analysis of Masonry Infilled Frames

Authors: Avadhoot Bhosale, Robin Davis P., Pradip Sarkar

Abstract:

The seismic performance of buildings with irregular distribution of mass, stiffness and strength along the height may be significantly different from that of regular buildings with masonry infill. Masonry-infilled reinforced concrete (RC) frames are very common structural forms used for multi-storey building construction. These structures have been found to perform better in past earthquakes owing to the additional strength, stiffness and energy dissipation provided by the infill walls. The seismic performance of a building depends on the variation of its material, structural and geometrical properties, and the sensitivity of these properties affects the seismic response of the building. The main objective of the sensitivity analysis is to find the most sensitive parameter that affects the response of the building. This paper presents a sensitivity analysis, performed with pushover analysis, that considers the 5% and 95% probability values of the random variables describing the infill characteristics, in order to obtain a reasonable range of results representing a wide number of situations that can be met in practice. The results show that the strength-related variation of concrete and masonry, with the exception of the tensile strength of the concrete, has a significant effect on the structural performance, and that this effect increases as the damage condition of the concrete progresses. The seismic risk assessments of the selected frames are expressed in terms of a reliability index.

Keywords: Fragility curve, sensitivity analysis, reliability index, RC frames.

289 Emergency Generator Sizing and Motor Starting Analysis

Authors: Mukesh Kumar Kirar, Ganga Agnihotri

Abstract:

This paper investigates the preliminary sizing of a generator set for designing the electrical system at the early phase of a project, as well as the dynamic behavior of the generator unit and induction motors during start-up of induction motor drives fed from an emergency generator unit. The information in this paper simplifies generator set selection and eliminates common errors in selection. It covers load estimation, the step loading capacity test, and transient analysis for the emergency generator set. The dynamic behavior of the generator unit (power, power factor, and voltage) during direct-on-line start-up of induction motor drives fed from a stand-alone gen-set is also discussed. To ensure that plant generators operate safely and consistently, power system studies are required at the planning and conceptual design stage of the project. The most widely recognized and studied effect of motor starting is the voltage dip experienced throughout an industrial power system as the direct result of starting large motors online. Generator step loading capability and the transient voltage dip during starting of the largest motor are verified with the help of the Electrical Transient Analyzer Program (ETAP).

Keywords: Sizing, induction motor starting, load estimation, Transient Analyzer Program (ETAP).

288 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases that affects humankind, and it exposes millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling by implementing the exponential compactness and separation index for determining the number of rules in the fuzzy clustering approach. We first proposed a Type-I fuzzy system that had an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision; thus, the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy has the ability to diagnose hepatitis with an average accuracy of 93.94%. The classification accuracy obtained is the highest reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system has a better performance in comparison to Type-I and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

287 Power Ultrasound Application on Convective Drying of Banana (Musa paradisiaca), Mango (Mangifera indica L.) and Guava (Psidium guajava L.)

Authors: Erika K. Méndez, Carlos E. Orrego, Diana L. Manrique, Juan D. Gonzalez, Doménica Vallejo

Abstract:

High moisture content in fruits generates post-harvest problems such as mechanical, biochemical, microbial and physical losses. Dehydration, which is based on the reduction of the water activity of the fruit, is a common option for overcoming such losses. However, regular hot air drying can negatively affect the quality properties of the fruit due to the long residence time at high temperature. Power ultrasound (US) application during convective drying has been used as a novel method able to enhance the drying rate and, consequently, to decrease drying time. In the present study, a new approach was tested to evaluate the effect of US on the drying time, the final antioxidant activity (AA) and the total polyphenol content (TPC) of banana slices (BS), mango slices (MS) and guava slices (GS). The drying kinetics were also studied with nine different models, from which water effective diffusivities (Deff) (with or without shrinkage corrections) were calculated. Compared with the corresponding control tests, US-assisted drying of fruit slices showed reductions in drying time between 16.23 and 30.19%, 11.34 and 32.73%, and 19.25 and 47.51% for the MS, BS and GS respectively. Considering shrinkage effects, calculated Deff values ranged from 1.67*10-10 to 3.18*10-10 m2/s, 3.96*10-10 to 5.57*10-10 m2/s and 4.61*10-10 to 8.16*10-10 m2/s for the BS, MS and GS samples respectively. Reductions of TPC and AA (as DPPH) compared with the original content in fresh fruit were observed in all drying assays.
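As a simple illustration of how an effective diffusivity can be estimated from a drying curve, the sketch below fits the slope of ln(MR) versus time using the first term of Fick's solution for an infinite slab; the drying data and slice half-thickness are illustrative, not the measured fruit curves.

```python
# Sketch: estimating effective moisture diffusivity Deff from the slope of
# ln(MR) versus time, using the first term of Fick's solution for an infinite
# slab of half-thickness L:  ln(MR) = ln(8/pi^2) - (pi^2 * Deff / (4*L^2)) * t.
# The drying data below are illustrative, not the measured fruit curves.
import numpy as np

def effective_diffusivity(t_s: np.ndarray, moisture_ratio: np.ndarray,
                          half_thickness_m: float) -> float:
    slope, _ = np.polyfit(t_s, np.log(moisture_ratio), 1)   # slope of ln(MR) vs t
    return -slope * 4.0 * half_thickness_m ** 2 / np.pi ** 2

if __name__ == "__main__":
    t = np.array([0, 600, 1200, 1800, 2400, 3000], dtype=float)   # s
    mr = np.array([1.0, 0.72, 0.52, 0.38, 0.27, 0.20])            # dimensionless
    print(f"Deff = {effective_diffusivity(t, mr, half_thickness_m=2.5e-3):.2e} m^2/s")
```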

Keywords: Banana, drying, effective diffusivity, guava, mango, ultrasound.

286 Selection of Best Band Combination for Soil Salinity Studies Using ETM+ Satellite Images (A Case Study: Nyshaboor Region, Iran)

Authors: Sanaeinejad, S. H., A. Astaraei, P. Mirhoseini Mousavi, M. Ghaemi

Abstract:

One of the main environmental problems affecting extensive areas in the world is soil salinity. Traditional data collection methods are neither sufficient for considering this important environmental problem nor accurate enough for soil studies. Remote sensing data could overcome most of these problems. Although satellite images are commonly used for these studies, there is still a need to find the best calibration between the data and the real situation in each specified area. The Neyshaboor area, in the north-east of Iran, was selected as the field study area for this research. Landsat satellite images of this area were used in order to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area to collect soil samples, and finally 273 locations were reselected for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images taken of the study area in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that the reflective bands 7, 3, 4 and 1 are the best band composition for preparing the color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.

Keywords: Soil salinity, Remote sensing, Image processing, ETM+, Nyshaboor

285 Standalone Docking Station with Combined Charging Methods for Agricultural Mobile Robots

Authors: Leonor Varandas, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

One of the biggest concerns in the field of agriculture is the energy efficiency of the robots that will perform agricultural activities and their charging methods. In this paper, two different charging methods for agricultural standalone docking stations are presented that take into account various factors such as the field size and its irregularities, the nature of the work the robot will perform, and the deadlines that have to be respected, among others. Their features also depend on the orchard, the season, and the battery type together with its technical specifications and cost. The first charging base method focuses on wireless charging, presenting more benefits for small fields. The second charging base method relies on battery replacement, being more suitable for large fields and thus avoiding robot stops for recharging. Among the many existing methods to charge a battery, CC-CV was considered the most appropriate for both its simplicity and its effectiveness. The choice of battery for agricultural purposes is of utmost importance. While the most commonly used battery is the Li-ion battery, this study also discusses the use of a new type of graphene-based battery with 45% more capacity than the Li-ion one. A Battery Management System (BMS) is applied for battery balancing. All these approaches combined showed to be a promising way to improve a lot of technical agricultural work, not just in terms of planting and harvesting but also regarding every technique to prevent harmful events like plagues and weeds or even to reduce crop time and cost.

Keywords: Agricultural mobile robot, charging base methods, battery replacement method, wireless charging method.

284 Authenticity of Lipid and Soluble Sugar Profiles of Various Oat Cultivars (Avena sativa)

Authors: Marijana M. Ačanski, Kristian A. Pastor, Djura N. Vujić

Abstract:

The identification of lipid and soluble sugar components was performed in flour samples of different cultivars belonging to the common oat species (Avena sativa L.): spring oat, winter oat and hulless oat. Fatty acids were extracted from the flour samples with n-hexane and derivatized into volatile methyl esters using TMSH (trimethylsulfonium hydroxide in methanol). Soluble sugars were then extracted from the defatted and dried samples of oat flour with 96% ethanol and further derivatized into the corresponding TMS-oximes using hydroxylamine hydrochloride solution and BSTFA (N,O-bis-(trimethylsilyl)-trifluoroacetamide). The hexane and ethanol extracts of each oat cultivar were analyzed using a GC-MS system. The lipid and simple sugar compositions are very similar in all samples of the investigated cultivars. A chemometric tool was applied to the numeric values of the automatically integrated surface areas of the detected lipid and simple sugar components in their corresponding derivatized forms. Hierarchical cluster analysis shows a very high similarity between the investigated flour samples of oat cultivars according to the fatty acid content (0.9955). Moderate similarity was observed according to the content of soluble sugars (0.50). These preliminary results support the idea of establishing methods for oat flour authentication and provide the means for distinguishing oat flour samples, regardless of the variety, from flour samples made of other cereal species, just by lipid and simple sugar profile analysis.
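The chemometric step can be illustrated with the following sketch, which clusters normalized peak-area profiles hierarchically with SciPy; the peak-area table and the choice of correlation distance with average linkage are assumptions for illustration, not the authors' exact settings.

```python
# Sketch: hierarchical clustering of normalised GC-MS peak areas to compare
# cultivar profiles, in the spirit of the similarity values reported above.
# The peak-area table below is invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

cultivars = ["spring oat", "winter oat", "hulless oat"]
# Rows: cultivars; columns: integrated areas of detected fatty-acid methyl esters.
areas = np.array([
    [12.1, 40.2, 35.6, 8.3, 3.8],
    [12.4, 39.8, 36.1, 8.1, 3.6],
    [11.7, 41.0, 34.9, 8.6, 3.8],
])
areas = areas / areas.sum(axis=1, keepdims=True)      # normalise each profile

dist = pdist(areas, metric="correlation")             # 1 - Pearson correlation
print("pairwise similarities:", np.round(1.0 - dist, 4))

Z = linkage(dist, method="average")                   # UPGMA clustering
dendrogram(Z, labels=cultivars, no_plot=True)         # set no_plot=False to draw
```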

Keywords: Authentication, chemometrics, GC-MS, lipid and soluble sugar composition, oat cultivars.

283 BeamGA Median: A Hybrid Heuristic Search Approach

Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte

Abstract:

The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but a different order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied in order to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before for solving the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and enhance the time convergence of the genetic approach. We were able to handle permutations with a large number of genes within an acceptable time and with the same or better accuracy as compared to existing algorithms.

Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.

282 Development of a Cost Effective Two Wheel Tractor Mounted Mobile Maize Sheller for Small Farmers in Bangladesh

Authors: M. Israil Hossain, T. P. Tiwari, Ashrafuzzaman Gulandaz, Nusrat Jahan

Abstract:

The two-wheel tractor (power tiller) is a common tillage tool in Bangladesh agriculture, providing easy access to fragmented land at a price affordable to small farmers. A traditional maize sheller needs to be carried from place to place by hooking it to the two-wheel tractor (2WT) and set up again for each shelling operation, which makes the preparation for maize shelling time-consuming. The mobile maize sheller eliminates this transportation problem and can start shelling instantly at any place, as it is attached to the 2WT. It is an axial-flow type sheller with a counterclockwise rotating cylinder, in which grain is separated by the frictional force between the spike teeth and the concave. The maize sheller is attached with nuts and bolts in front of the engine base of the 2WT. The operating power of the sheller comes from the flywheel of the tractor engine through a V-belt pulley arrangement. The average shelling capacity of the mobile sheller is 2.0 t/hr, with 2.2% broken kernels and a shelling efficiency of 97%. The average maize shelling cost is Tk. 0.22/kg, whereas the traditional custom hire rate is Tk. 1.0/kg (1 US$ = Tk. 78.0). The service provider of the 2WT can transport the mobile maize sheller over long distances from the operator's seat. Manufacturers have started fabrication of the mobile maize sheller. This mobile maize sheller is also suitable for other countries where the 2WT is available for farming operations.

Keywords: Cost effective, mobile maize sheller, maize shelling capacity, small farmers, two-wheel tractor.

281 AJcFgraph - AspectJ Control Flow Graph Builder for Aspect-Oriented Software

Authors: Reza Meimandi Parizi, Abdul Azim Abdul Ghani

Abstract:

The ever-growing usage of the aspect-oriented development methodology in the field of software engineering requires tool support for both research environments and industry. So far, tool support for many activities in aspect-oriented software development has been proposed to automate and facilitate their development. For instance, AJaTS provides a transformation system to support aspect-oriented development and refactoring. In particular, it is well established that the abstract interpretation of programs, in any paradigm, pursued in static analysis is best served by a high-level program representation such as the Control Flow Graph (CFG). This is because such analyses can more easily locate common programmatic idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be more closely maintained. However, although current research defines, to some extent, good concepts and foundations for control flow analysis of aspect-oriented programs, it does not provide a concrete tool that can solely construct the CFG of these programs. Furthermore, most of these works focus on addressing other issues regarding Aspect-Oriented Software Development (AOSD), such as testing or data flow analysis, rather than the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control flow graph construction tool called AJcFgraph Builder. The given tool can be applied in many software engineering tasks in the context of AOSD, such as software testing, software metrics, and so forth.

Keywords: Aspect-Oriented Software Development, AspectJ, Control Flow Graph, Data Flow Analysis

280 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, Service Providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPDs. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify congestion hot spots with the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach shows that, for any network handled by the Bayesian approach, it identifies exactly the same result with minimal effort. We further extend the result to a more generic one: it holds for any network topology, even when the network contains loops. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some consideration of hot spots in the network.
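The abstract does not spell out the exact PNP formula, so the sketch below only illustrates one plausible topology-only proxy for node popularity: shortest-path betweenness centrality over the physical topology, with the highest-ranked nodes flagged as candidate hot spots. The edge list and the choice of betweenness are assumptions, not the paper's definition.

```python
# Topology-only "node popularity" proxy: rank nodes by how many shortest
# paths pass through them and flag the top nodes as candidate hot spots.
# This is an illustrative stand-in for PNP, whose exact formula is not
# given in the abstract; no per-node CPDs are used.
import networkx as nx

def candidate_hot_spots(edges, top_k=2):
    """Rank nodes by shortest-path betweenness centrality."""
    g = nx.Graph(edges)
    popularity = nx.betweenness_centrality(g)
    ranked = sorted(popularity.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    topology = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "D"), ("D", "F")]
    print(candidate_hot_spots(topology))
```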

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

279 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These can be a whole dictionary or a specific collection of named items. In many cases machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations: a cooccurrence graph. Indeed, each cooccurrence highlights some degree of semantic correlation between the words, because it is more common to have related words close to each other than to have them in opposite parts of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique, arriving at a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is accounted for with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment in order to support this intuition, by applying this technique to a data set consisting of the text of the Bible, split into verses.
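A minimal sketch of such a weighted-distance sliding window is shown below: every pair of tracked entities appearing within the window adds an edge weight that decays with their token distance. The 1/distance decay is an assumed choice for illustration; the approach only requires that closer pairs weigh more.

```python
# Minimal weighted-distance sliding-window cooccurrence graph: each pair of
# tracked entities within the window adds a weight that decays with distance.
# The 1/distance decay and the toy verse below are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(tokens, entities, window=10):
    entities = set(entities)
    positions = [(i, t) for i, t in enumerate(tokens) if t in entities]
    weights = defaultdict(float)
    for (i, a), (j, b) in combinations(positions, 2):
        dist = j - i
        if a != b and dist <= window:
            weights[tuple(sorted((a, b)))] += 1.0 / dist   # closer => stronger
    return dict(weights)

if __name__ == "__main__":
    verse = "moses spoke to aaron and aaron spoke to the people of israel".split()
    print(cooccurrence_graph(verse, {"moses", "aaron", "israel"}, window=8))
```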

Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.

278 HPTLC Fingerprint Profiling of Protorhus longifolia Methanolic Leaf Extract and Qualitative Analysis of Common Biomarkers

Authors: P. S. Seboletswe, Z. Mkhize, L. M. Katata-Seru

Abstract:

Protorhus longifolia is known as a medicinal plant that has been used traditionally to treat various ailments such as hemiplegic paralysis, blood clotting related diseases, diarrhoea, heartburn, etc. This study reports a High-Performance Thin Layer Chromatography (HPTLC) fingerprint profile of the Protorhus longifolia methanolic extract and its qualitative analysis for gallic acid, rutin, and quercetin. HPTLC analysis was achieved using a CAMAG HPTLC system equipped with a CAMAG automatic TLC sampler 4, a CAMAG Automatic Developing Chamber 2 (ADC2), a CAMAG visualizer 2, a CAMAG Thin Layer Chromatography (TLC) scanner and visionCATS CAMAG HPTLC software. A mobile phase comprising toluene, ethyl acetate, and formic acid (21:15:3) was used for the qualitative analysis of gallic acid and revealed eight peaks, while the mobile phase containing ethyl acetate, water, glacial acetic acid, and formic acid (100:26:11:11) for the qualitative analysis of rutin and quercetin revealed six peaks. HPTLC silica gel 60 F254 glass plates (10 × 10) were used as the stationary phase. Gallic acid was detected at Rf = 0.35, while rutin and quercetin were not evident in the extract. Further studies will be performed to quantify gallic acid in Protorhus longifolia leaves and also to identify other biomarkers.

Keywords: Biomarkers, fingerprint profiling, gallic acid, HPTLC, Protorhus longifolia.

277 Incidence, Occurrence, Classification and Outcome of Small Animal Fractures: A Retrospective Study (2005-2010)

Authors: L. M. Ben Ali

Abstract:

A retrospective study was undertaken to record the occurrence and pattern of fractures in small animals (dogs and cats) from 2005 to 2010. A total of 650 cases were presented to the small animal surgery unit, out of which 116 (dogs and cats) presented with a history of fractures of different bones. Fractures thus accounted for 17.8% (116/650) of the cases, of which dogs constituted 67% and cats 23%. The majority of the animals were intact. Trauma in the form of road-side accidents was the principal cause of fractures in dogs, whereas in cats it was falls from height. The ages of the fractured dogs ranged from 4 months to 12 years, whereas in cats they ranged from 4 weeks to 10 years. Femoral fractures represented 37.5% and 25% of cases in dogs and cats respectively. Diaphyseal, distal metaphyseal and supracondylar fractures were the most affected sites in dogs and cats. Tibial fractures in dogs and cats represented 21.5% and 10%, while humeral fractures were 7.9% and 14% in dogs and cats respectively. Humeral condylar fractures were most commonly seen in puppies aged 4 to 6 months. The incidence of fractured radius-ulna was 19% and 14% in dogs and cats respectively. Other fractures recorded were of the lumbar vertebrae, mandible, metacarpals, etc. Management comprised external and internal fixation in both species. The most common internal fixation technique employed was intramedullary fixation in long bones, followed by other methods like stack or cross pinning, wiring, etc., as per the findings in the cases. Cast bandages were mainly used as the means of external coaptation. The paper discusses the outcome of the cases as per the technique employed.

Keywords: Animal, Fracture, Incidence, Occurrence.
