Search results for: Data Centric Approach
8284 A Similarity Function for Global Quality Assessment of Retinal Vessel Segmentations
Authors: Arturo Aquino, Manuel Emilio Gegundez, Jose Manuel Bravo, Diego Marin
Abstract:
Retinal vascularity assessment plays an important role in the diagnosis of ophthalmic pathologies. The use of digital images for this purpose makes a computerized approach possible and has motivated the development of many methods for automated vascular tree segmentation. Metrics based on contingency tables for binary classification have been widely used for evaluating the performance of these algorithms; in particular, accuracy has mostly been used as the measure of global performance in this field. However, this metric correlates poorly with human perception and shows other notable deficiencies. Here, a new similarity function for measuring the quality of retinal vessel segmentations is proposed. This similarity function is based on characterizing the vascular tree as a connected structure with a measurable area and length. Tests indicate that this new approach behaves better than the current one. More generally, this concept of measuring descriptive properties may be used to design functions that measure the segmentation quality of other complex structures more successfully.
Keywords: Retinal vessel segmentation, quality assessment, performance evaluation, similarity function.
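The abstract does not give the functional form of the proposed similarity measure. As a purely hypothetical illustration of the idea of scoring a segmentation against a reference through measurable area and length descriptors (not the paper's actual function), here is a minimal sketch in Python, assuming scikit-image is available for skeletonization:

```python
# Hypothetical sketch of an area-and-length-based similarity for binary
# vessel segmentations; NOT the exact function proposed in the paper.
import numpy as np
from skimage.morphology import skeletonize

def vessel_similarity(seg: np.ndarray, ref: np.ndarray) -> float:
    """Combine an area term and a length term, each a Jaccard-style ratio."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    # Area descriptor: overlap of vessel pixels.
    area_sim = (seg & ref).sum() / max((seg | ref).sum(), 1)
    # Length descriptor: agreement of one-pixel-wide centerline lengths.
    sk_seg, sk_ref = skeletonize(seg), skeletonize(ref)
    len_sim = min(sk_seg.sum(), sk_ref.sum()) / max(sk_seg.sum(), sk_ref.sum(), 1)
    return float(area_sim * len_sim)
```

The product form is an assumption; any monotone combination of the two descriptors would express the same idea.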
8283 Tension Stiffening Parameter in Composite Concrete Reinforced with Inoxydable Steel: Laboratory and Finite Element Analysis
Abstract:
In the present work, the behavior of inoxydable steel as a reinforcement bar in composite concrete is investigated. The bar-concrete adherence in a reinforced concrete (RC) beam is studied, with focus on the tension stiffening parameter. This study presents an approach for observing this interaction behavior in a bending test instead of the direct tension test reported in many references; the approach resembles the actual loading condition of a structural RC beam. The tension stiffening properties are then applied in a numerical finite element analysis (FEA) to verify their correlation with laboratory results. The comparison shows a good correlation between the two. The experimental setup is able to determine tension stiffening parameters in the RC beam, and the modeling strategies adopted in ABAQUS can closely represent the actual condition. The tension stiffening model used can represent the interaction properties between inoxydable steel and concrete.
Keywords: Inoxydable steel, finite element modeling, reinforced concrete beam, tension-stiffening.
8282 Automatic Detection and Classification of Microcalcification, Mass, Architectural Distortion and Bilateral Asymmetry in Digital Mammogram
Authors: S. Shanthi, V. Muralibhaskaran
Abstract:
Mammography has been one of the most reliable methods for early detection of breast cancer. There are different lesions characteristic of breast cancer, such as microcalcifications, masses, architectural distortions and bilateral asymmetry. One of the major challenges in analysing digital mammograms is how to extract efficient features for accurate cancer classification. In this paper we propose a hybrid feature extraction method to detect and classify all four signs of breast cancer. The proposed method is based on the multiscale surrounding region dependence method, Gabor filters, multifractal analysis, and directional and morphological analysis. The extracted features are input to a self-adaptive resource allocation network (SRAN) classifier for classification. The validity of our approach is extensively demonstrated using the two benchmark data sets, the Mammographic Image Analysis Society (MIAS) database and the Digital Database for Screening Mammography (DDSM), and the results prove to be promising.
Keywords: Feature extraction, fractal analysis, Gabor filters, multiscale surrounding region dependence method, SRAN.
8281 Data Mining Techniques in Computer-Aided Diagnosis: Non-Invasive Cancer Detection
Authors: Florin Gorunescu
Abstract:
Diagnosis can be achieved by building a model of a certain organ under surveillance and comparing it with real-time physiological measurements taken from the patient. This paper presents the benefits of using data mining techniques in computer-aided diagnosis (CAD), focusing on cancer detection, in order to help doctors make optimal decisions quickly and accurately. Among non-invasive diagnosis techniques, endoscopic ultrasound elastography (EUSE) is a recent elasticity imaging technique that allows characterizing the difference between malignant and benign tumors. The main features of the EUSE sample movies are digitized and summarized in vector form by means of exploratory data analysis (EDA). Neural networks are then trained on the corresponding EUSE sample movie vectors in such a way that these intelligent systems are able to offer a precise and objective diagnosis, discriminating between benign and malignant tumors. A concrete application of these data mining techniques illustrates the suitability and reliability of this methodology in CAD.
Keywords: Endoscopic ultrasound elastography, exploratory data analysis, neural networks, non-invasive cancer detection.
8280 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the Knowledge Discovery in Databases (KDD) process and data mining methods and algorithms, which can support physicians during seizure detection. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm and by using a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding-window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
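As an illustration of the pipeline the abstract describes (sliding-window feature extraction followed by a multilayer perceptron), here is a minimal sketch on synthetic data, assuming scikit-learn; the window length, the three features and the network size are placeholder assumptions, not the Training Builder configuration:

```python
# Minimal sketch: sliding-window features from a 1-D EEG-like signal,
# then an MLP classifier. All parameters are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, n_sec = 256, 60                          # assumed sampling rate, duration
signal = rng.normal(size=fs * n_sec)
signal[30 * fs:40 * fs] *= 5.0               # synthetic "seizure" segment
sample_labels = np.zeros(fs * n_sec)
sample_labels[30 * fs:40 * fs] = 1

win = 2 * fs                                 # 2-second non-overlapping windows
X, y = [], []
for start in range(0, len(signal) - win + 1, win):
    w = signal[start:start + win]
    # Simple per-window features: variance, mean |amplitude|, line length.
    X.append([w.var(), np.abs(w).mean(), np.abs(np.diff(w)).sum()])
    y.append(int(sample_labels[start:start + win].mean() > 0.5))

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```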
8279 TELUM Land Use Model: An Investigation of Data Requirements and Calibration Results for Chittenden County MPO, U.S.A.
Authors: Georgia Pozoukidou
Abstract:
TELUM software is a land use model designed specifically to help metropolitan planning organizations (MPOs) prepare their transportation improvement programs and fulfill their numerous planning responsibilities. In this context, obtaining, preparing, and validating socioeconomic forecasts are becoming fundamental tasks for an MPO in order to ensure that consistent population and employment data are provided to travel demand models. The Chittenden County Metropolitan Planning Organization of Vermont State was used as a case study to test the applicability of the TELUM land use model. The technical insights and lessons learned from the land use model application have transferable value for all MPOs faced with land use forecasting development and transportation modeling.
Keywords: Calibration data requirements, land use models, land use planning, Metropolitan Planning Organizations.
8278 Production Line Layout Planning Based on Complexity Measurement
Authors: Guoliang Fan, Aiping Li, Nan Xie, Liyun Xu, Xuemei Liu
Abstract:
Mass customization production increases the difficulty of production line layout planning. The material distribution process for a variety of parts is very complex, which greatly increases the cost of material handling and logistics. In response to this problem, this paper presents an approach to production line layout planning based on complexity measurement. Firstly, by analyzing the influencing factors of equipment layout, a complexity model of the production line is established using information entropy theory. Then, the cost of part logistics is derived considering the different varieties of parts. Furthermore, an optimization function with two objectives, the lowest cost and the least configuration complexity, is built. Finally, the validity of the function is verified in a case study. The results show that the proposed approach can find the layout scheme with the lowest logistics cost and the least complexity. Optimized production line layout planning can effectively improve production efficiency and equipment utilization with the lowest cost and complexity.
Keywords: Production line, layout planning, complexity measurement, optimization, mass customization.
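The complexity model above builds on information entropy theory. For orientation, the Shannon entropy underlying such measures, for a layout state (e.g., a part routing alternative) occurring with probability p_i, is the standard expression below; the paper's specific production-line complexity model is not reproduced here.

```latex
H = -\sum_{i=1}^{n} p_i \log_2 p_i
```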
8277 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones represent a very high investment of time and money; additionally, the results are not current. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as a low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article we identify the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, together with data mining techniques applied with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
Keywords: Data mining, k-means, road traffic accidents, Waze, Weka.
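A minimal sketch of the described clustering step, assuming scikit-learn: a Gaussian mixture fitted by EM selects a plausible cluster count (here via BIC, an assumed selection criterion since the abstract does not state one), which then seeds k-means on accident coordinates. The coordinates below are synthetic stand-ins, not Waze data.

```python
# Sketch: EM (GaussianMixture) chooses a cluster count, then k-means groups.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Stand-in for (longitude, latitude) of crowdsourced accident reports.
pts = np.vstack([rng.normal(c, 0.05, size=(200, 2))
                 for c in [(-99.13, 19.43), (-99.18, 19.36), (-99.07, 19.40)]])

best_k, best_bic = 1, np.inf
for k in range(1, 8):
    gm = GaussianMixture(n_components=k, random_state=0).fit(pts)
    if gm.bic(pts) < best_bic:                 # lower BIC = better fit/penalty
        best_k, best_bic = k, gm.bic(pts)

km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(pts)
print("chosen k:", best_k)
print("cluster centers:", km.cluster_centers_.round(3))
```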
8276 Hybrid Control Mode Based On Multi-Sensor Information by Fuzzy Approach for Navigation Task of Autonomous Mobile Robot
Authors: Jonqlan Lin, C. Y. Tasi, K. H. Lin
Abstract:
This paper addresses the autonomous mobile robot (AMR) navigation task based on hybrid control modes. A novel hybrid control mode, based on multi-sensor information using a fuzzy approach, is presented in this research. The system operates in real time, is robust, enables the robot to operate with imprecise knowledge, and takes into account the physical limitations of the environment in which the robot moves, obtaining satisfactory responses for a large number of different situations. An experiment is simulated and carried out with a Pioneer mobile robot. The experimental results confirm the effectiveness and usefulness of the proposed AMR obstacle avoidance and navigation scheme, show its feasibility, and demonstrate that the control system improves navigation accuracy. The implementation of the controller is robust, has a low execution time, and allows easy design and tuning of the fuzzy knowledge base.
Keywords: Autonomous mobile robot, obstacle avoidance, MEMS, hybrid control mode, navigation control.
8275 Handover Strategies Challenges in Wireless ATM Networks
Authors: Jamila Bhar, Ridha Ouni, Kholdoun Torki, Salem Nasri
Abstract:
To support user mobility in a wireless network, new fundamental mechanisms are needed, such as paging, location updating, routing, and handover. Mobile QoS offered by WATM is also an important key feature. Several ATM network protocols should be updated to implement mobility management and to maintain the established ATM QoS over wireless ATM networks. A survey of the various schemes and types of handover is provided. The handover procedure guarantees connection re-establishment when a terminal moves between areas covered by different base stations; it serves to transfer the user's radio link without interrupting a connection. However, failure to offer efficient solutions will result in significant packet loss during handover, severe delays, and degradation of the QoS offered to applications. This paper reviews the requirements, characteristics and open issues of wireless ATM, particularly with regard to handover. It introduces key aspects of WATM and the mobility extensions added to the fixed ATM network. We propose a flexible approach to handover management that minimizes QoS deterioration. The functional entities of this flexible approach are discussed in order to achieve minimum impact on connection quality when a mobile terminal (MT) crosses base station (BS) boundaries.
Keywords: Handover, HDL synthesis, QoS, Wireless ATM.
8274 Agent/Group/Role Organizational Model to Simulate an Industrial Control System
Authors: Noureddine Seddari, Mohamed Belaoued, Salah Bougueroua
Abstract:
The modeling of complex systems is generally based on decomposing their components into sub-systems that are easier to handle. This division has to be made in a methodical way. In this paper, we introduce industrial control system modeling and simulation based on the Multi-Agent System (MAS) methodology AALAADIN and, more particularly, its underlying conceptual model, Agent/Group/Role (AGR). In this division using the AGR model, the overall system is decomposed into sub-systems in order to improve the understanding of regulation and control systems and to simplify the implementation of the obtained agents and their groups, which are implemented using the Multi-Agent Development KIT (MAD-KIT) platform. This approach appears to us to be the most appropriate for modeling systems of this type because, thanks to the use of MAS, it is possible to model real systems in which very complex behaviors emerge from relatively simple and local interactions between many different individuals. A MAS is therefore well adapted to describing a system from the standpoint of the activity of its components, that is to say, when the behavior of the individuals is complex (difficult to describe with equations). The main aim of this approach is to take advantage of the performance, scalability and robustness that are intuitively provided by MAS.
Keywords: Complex systems, modeling and simulation, industrial control system, MAS, AALAADIN, AGR, MAD-KIT.
8273 Using ALOHA Code to Evaluate CO2 Concentration for Maanshan Nuclear Power Plant
Authors: W. S. Hsu, S. W. Chen, Y. T. Ku, Y. Chiang, J. R. Wang , J. H. Yang, C. Shih
Abstract:
The ALOHA code was used to calculate the CO2 concentration under the CO2 storage burst condition for the Maanshan nuclear power plant (NPP) in this study. Five main categories of data are input into the ALOHA code: location, building, chemical, atmospheric, and source data. Data from the Final Safety Analysis Report (FSAR) and other reports were used. The ALOHA results are compared with the failure criteria of R.G. 1.78 to confirm the habitability of the control room. The comparison shows that the ALOHA result is below the R.G. 1.78 criteria, which implies that the habitability of the control room can be maintained in this case. A sensitivity study for the atmospheric parameters was also performed; the results show that the wind speed has the largest effect on the concentration calculation.
Keywords: PWR, ALOHA, habitability, Maanshan.
8272 Dimension Free Rigid Point Set Registration in Linear Time
Authors: Jianqin Qu
Abstract:
This paper proposes a rigid point set matching algorithm in arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set are formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set; the computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension free, without any optimization process. It either computes the desired transform for noiseless data in linear time, or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
Keywords: Covariant point, point matching, dimension free, rigid registration.
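The covariant-function construction itself is not given in the abstract. For orientation only, below is the classical SVD-based least-squares rigid alignment (the Kabsch solution), which likewise works in arbitrary dimension once correspondences are available; this is a different, standard technique, not the paper's algorithm.

```python
# Standard SVD-based rigid alignment (Kabsch); a reference baseline, not the
# covariant-function algorithm proposed in the paper.
import numpy as np

def rigid_align(P: np.ndarray, Q: np.ndarray):
    """Rotation R and translation t minimizing sum ||R p_i + t - q_i||^2,
    given corresponding point rows in P and Q (any dimension d)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                        # d x d cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Quick 3-D check against a known transform.
rng = np.random.default_rng(2)
P = rng.normal(size=(50, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(P, Q)
print("rotation recovered:", np.allclose(R, R_true))
```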
8271 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
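A compact sketch of two of the named building blocks, assuming Gaussian pre/post-change models and synthetic data: a closed-form KL divergence between the two models, and a CUSUM test on log-likelihood ratios. The mass-function combination stage of the full evidence-theory framework is omitted.

```python
# Sketch: Gaussian KL divergence plus a CUSUM quickest-change test on a
# synthetic stream. The evidence-combination step is not shown.
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) ), closed form."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

rng = np.random.default_rng(3)
pre, post = (0.0, 1.0), (1.5, 1.0)          # assumed pre/post-change models
x = np.concatenate([rng.normal(*pre, 300), rng.normal(*post, 200)])

def loglr(v):
    """Log-likelihood ratio of post- vs. pre-change Gaussian densities."""
    return (-(v - post[0])**2 / (2 * post[1]**2)
            + (v - pre[0])**2 / (2 * pre[1]**2))

h = 10.0                                     # assumed decision threshold
S, alarm = 0.0, None
for n, v in enumerate(x):
    S = max(0.0, S + loglr(v))               # CUSUM recursion
    if S > h:
        alarm = n
        break

print("KL(pre||post):", round(kl_gauss(*pre, *post), 3), "alarm at sample:", alarm)
```

The true change occurs at sample 300; with a mean shift of 1.5 the statistic typically crosses the threshold a few samples later.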
8270 A Secure Proxy Signature Scheme with Fault Tolerance Based on RSA System
Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi
Abstract:
Due to the rapid growth of modern communication systems, fault tolerance and data security are two important issues in a secure transaction. During the transmission of data between the sender and receiver, errors may occur frequently; the sender must then re-transmit the data to the receiver in order to correct these errors, which weakens the system. To improve the scalability of the scheme, we present a secure proxy signature scheme with fault tolerance over an efficient and secure authenticated key agreement protocol based on the RSA system. Authenticated key agreement protocols play an important role in building a secure communications network between the two parties.
Keywords: Proxy signature, fault tolerance, RSA, key agreement protocol.
8269 Autonomous Flight Performance Improvement of Load-Carrying Unmanned Aerial Vehicles by Active Morphing
Authors: Tugrul Oktay, Mehmet Konar, Mohamed Abdallah Mohamed, Murat Aydin, Firat Sal, Murat Onay, Mustafa Soylak
Abstract:
In this paper, we aim to improve the autonomous flight performance of a load-carrying (payload: 3 kg, total: 6 kg) unmanned aerial vehicle (UAV) through active morphing of the wing and horizontal tail, together with integrated design of the autopilot system parameters (i.e. P, I, D gains) and UAV parameters (i.e. extension ratios of the wing and horizontal tail during flight). For this purpose, a load-carrying UAV (ZANKA-II), manufactured in the Model Aircraft Laboratory of the College of Aviation, Erciyes University, is used. Optimum values of the UAV parameters and autopilot parameters are obtained using a stochastic optimization method. Using this approach, the autonomous flight performance of the UAV is substantially improved, and safe flight in some adverse weather conditions also becomes possible. The active morphing and integrated design approach provides confidence, high performance, and ease of use for UAV users.
Keywords: Unmanned aerial vehicles, morphing, autopilots, autonomous performance.
8268 A Numerical Framework to Investigate Intake Aerodynamics Behavior in Icing Conditions
Authors: Ali Mirmohammadi, Arash Taheri, Meysam Mohammadi-Amin
Abstract:
One of the major parts of a jet engine is the air intake, which provides the proper and required amount of air for the engine to operate. Several aerodynamic parameters should be considered in its design, such as distortion, pressure recovery, etc. In this research, the effects of lip ice accretion on pitot intake performance are investigated. For the ice accretion phenomenon, two supervised multilayer artificial neural networks (ANNs) are designed, one for ice shape prediction and another for ice roughness estimation, based on experimental data. The Fourier coefficients of the transformed ice shape and parameters including velocity, liquid water content (LWC), median volumetric diameter (MVD), spray time and temperature are used in neural network training. The subsonic intake flow field is then simulated numerically using the 2D Navier-Stokes equations and a finite volume approach with a hybrid mesh comprising structured and unstructured regions. Results are obtained at different angles of attack, and the variations of intake aerodynamic parameters due to the icing phenomenon are discussed. The results show noticeable effects of ice accretion on intake behavior.
Keywords: Artificial neural network, ice accretion, intake aerodynamics, design parameters, finite volume method.
8267 Thermophysical and Heat Transfer Performance of Covalent and Noncovalent Functionalized Graphene Nanoplatelet-Based Water Nanofluids in an Annular Heat Exchanger
Authors: Hamed K. Arzani, Ahmad Amiri, Hamid K. Arzani, Salim Newaz Kazi, Ahmad Badarudin
Abstract:
The new design of heat exchangers utilizing an annular distributor opens a new gateway to higher energy optimization. To realize this goal, graphene nanoplatelet-based water nanofluids with promising thermophysical properties were synthesized in the presence of covalent and noncovalent functionalization. Thermal conductivity, density, viscosity and specific heat capacity were investigated and employed as raw data for ANSYS-Fluent in a two-phase approach. After validation of the obtained results with analytical equations, two key parameters, the convective heat transfer coefficient and the pressure drop, were investigated. The study continued with other heat transfer parameters of the annular pass in the presence of graphene nanoplatelet-based water nanofluids at different weight concentrations, input powers and temperatures. As a result, heat transfer performance and friction loss are predicted for both synthesized nanofluids.
Keywords: Heat transfer, nanofluid, turbulent flow, forced convection flow, graphene nanoplatelet.
8266 Topology of Reverse Von-Kármán Vortex Street in the Wake of a Swimming Whale Shark
Authors: Arash Taheri
Abstract:
In this paper, the effects of the ventral body planform of a swimming whale shark on the formation of the 'reverse von-Kármán vortex street' behind the aquatic animal are studied using a Fluid-Structure Interaction (FSI) approach. In this regard, the incompressible Navier-Stokes equations around the whale shark's body with a prescribed deflection dynamics are solved with the aid of the Boundary Data Immersion Method (BDIM) and Implicit Large Eddy Simulation (ILES) turbulence treatment by the WaterLily.jl solver, fully written in the Julia programming language. The whale shark flow simulations are performed at a high Reynolds number, i.e. 1.4 × 10^7, corresponding to the swimming of a 10 m whale shark at an average speed of 5 km/h. For comparison purposes, vortical flow generation behind a silky shark with a streamlined forehead eidonomy is also simulated at a high Reynolds number, Re = 2 × 10^6, corresponding to the swimming of a 2 m silky shark at an average speed of 3.6 km/h. The results depict the formation of distinct wake topologies behind the swimming sharks depending on the travelling-wave oscillation amplitudes.
Keywords: Whale shark, vortex street, BDIM, FSI, functional eidonomy, bionics.
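The quoted Reynolds numbers are consistent with the standard definition Re = UL/ν. Taking the kinematic viscosity of sea water as roughly 10^-6 m²/s (an assumed representative value), and noting 5 km/h ≈ 1.39 m/s and 3.6 km/h = 1.0 m/s:

```latex
Re_{\text{whale}} = \frac{U L}{\nu}
  = \frac{1.39\ \mathrm{m/s} \times 10\ \mathrm{m}}{10^{-6}\ \mathrm{m^2/s}}
  \approx 1.4 \times 10^{7},
\qquad
Re_{\text{silky}}
  = \frac{1.0\ \mathrm{m/s} \times 2\ \mathrm{m}}{10^{-6}\ \mathrm{m^2/s}}
  = 2 \times 10^{6}
```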
8265 Closing Africa’s Infrastructure Deficit: The Role of Gender Responsiveness in Urban Planning
Authors: K. Buyana, S. Lwasa, L. Schiebinger
Abstract:
Although urbanization in Africa has been characterized by fragile socio-economic successes, the sustainability of city infrastructure is now central to planning processes as a pathway to closing the deficit in terms of coverage and access. This paper builds on survey and interview data from Kampala city to demonstrate how the principle of gender responsiveness can inform improvements in urban infrastructure and service delivery. We discovered that women prefer infrastructure that combines living and working spaces, reducing labour and travel burdens between homes, markets, schools, and other urban spaces. Men's conception of infrastructure needs, on the other hand, mirrored public security and connectivity concerns along city streets and work places. However, the urban planning approach at city level is guided by mainstream engineering and architectural designs that do not necessarily reflect the social context within which urban infrastructure influences gender roles and the attendant mobility needs. To address the challenge across cities of similar context, the paper concludes with a set of analytic steps on how the gendered influences on infrastructure use can be considered in urban planning cycles.
Keywords: African cities, gender responsiveness, city infrastructure, urban planning.
8264 A Systematic Approach for Identifying Turning Center Capabilities with Vertical Machining Center in Milling Operation
Abstract:
Conventional machining is a form of subtractive manufacturing in which a collection of material-working processes utilizing power-driven machine tools is used to remove undesired material and achieve a desired geometry. This paper presents an approach for comparing a turning center and a vertical machining center by optimizing the cutting parameters for cylindrical workpieces to minimize surface roughness, using the Taguchi methodology. An aluminum alloy was chosen for the experiments due to its uniquely high strength-to-weight ratio, which is maintained at elevated temperatures, and its exceptional corrosion resistance. During testing, the effects of the cutting parameters on surface roughness were investigated. Additionally, using the Taguchi methodology for each of the cutting parameters (spindle speed, depth of cut, insert diameter, and feed rate), the minimum surface roughness for the turn-milling process was determined. A confirmation experiment demonstrates the effectiveness of the Taguchi method.
Keywords: Surface roughness, Taguchi parameter design, turning center, turn-milling operations, vertical machining center.
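For surface roughness minimization, Taguchi analysis conventionally uses the smaller-the-better signal-to-noise ratio over the n replicate measurements y_i of each parameter combination; this standard formula (a detail not stated in the abstract) is:

```latex
S/N = -10 \,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right)
```

The parameter levels that maximize S/N are the ones that minimize roughness.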
8263 Investigation of Learning Challenges in Building Measurement Unit
Authors: Argaw T. Gurmu, Muhammad N. Mahmood
Abstract:
The objective of this research is to identify architecture and construction management students' learning challenges in the building measurement unit. This research used survey data collected from students who completed the unit. NVivo qualitative data analysis software was used to identify relevant themes. The analysis of the qualitative data revealed major learning difficulties such as the inadequacy of practice questions for the examination, inability to work as a team, lack of detailed understanding of the prerequisite units, insufficient time allocated for tutorials, and incompatibility of lecture and tutorial schedules. The output of this research can be used as a basis for improving teaching and learning activities in construction measurement units.
Keywords: Building measurement, construction management, learning challenges, survey evaluation.
8262 An Efficient and Generic Hybrid Framework for High Dimensional Data Clustering
Authors: Dharmveer Singh Rajput, P. K. Singh, Mahua Bhattacharya
Abstract:
Clustering in high dimensional space is a difficult problem that recurs in many fields of science and engineering, e.g., bioinformatics, image processing, pattern recognition and data mining. In high dimensional space some of the dimensions are likely to be irrelevant, thus hiding the possible clustering. In very high dimensions it is common for all the objects in a dataset to be nearly equidistant from each other, completely masking the clusters; hence, the performance of the clustering algorithm decreases. In this paper, we propose an algorithmic framework that combines the reduct concept of rough set theory with the k-means algorithm to remove the irrelevant dimensions of a high dimensional space and obtain appropriate clusters. Our experiments on test data show that this framework increases the efficiency of the clustering process and the accuracy of the results.
Keywords: High dimensional clustering, sub-space, k-means, rough set, discernibility matrix.
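The exact reduct computation is not given in the abstract. The sketch below stands in for it with a crude discernibility-style filter (dropping attributes whose discretized values put nearly all objects in one bin, an assumed simplification, not the paper's discernibility-matrix algorithm) before running k-means on the retained dimensions:

```python
# Sketch: drop dimensions that barely discern objects (a crude stand-in for
# a rough-set reduct), then cluster the retained dimensions with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n = 300
informative = np.vstack([rng.normal(c, 0.5, size=(n // 3, 3))
                         for c in [(0, 0, 0), (4, 4, 0), (0, 4, 4)]])
noise = rng.normal(0, 0.05, size=(n, 47))    # near-constant, irrelevant dims
X = np.hstack([informative, noise])

edges = np.arange(X.min(), X.max() + 1.0, 1.0)   # common discretization grid
keep = []
for j in range(X.shape[1]):
    bins = np.digitize(X[:, j], edges)
    _, counts = np.unique(bins, return_counts=True)
    # An attribute that puts nearly all objects in one bin discerns few pairs.
    if counts.max() / n < 0.9:                    # assumed threshold
        keep.append(j)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, keep])
print("kept dims:", keep, "cluster sizes:", np.bincount(labels))
```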
8261 Designing Software Quality Measurement System for Telecommunication Industry Using Object-Oriented Technique
Authors: Nor Fazlina Iryani Abdul Hamid, Mohamad Khatim Hasan
Abstract:
A number of software quality measurement systems have been implemented over the past few years, but none of them focuses on the telecommunication industry. A software quality measurement system for the telecommunication industry is a system that can calculate the quality value of measured software with a focus entirely on that industry. Before designing the system, quality factors, quality attributes and quality metrics were identified based on a literature review and a survey. Then, using the identified quality factors, attributes and metrics, a quality model for the telecommunication industry was constructed, with each quality metric having its own formula. The quality value of the software is measured from the quality metrics and aggregated by reference to the quality model; it classifies the quality level of the software based on a Net Satisfaction Index (NSI). The system was designed using an object-oriented approach in a web-based environment. The existence of a software quality measurement system is thus important for both developers and users in order to produce high quality software products for the telecommunication industry.
Keywords: Software quality, quality measurement, object-oriented approach, Net Satisfaction Index.
8260 Study of Unsteady Swirling Flow in a Hydrodynamic Vortex Chamber
Authors: Sergey I. Shtork, Aleksey P. Vinokurov, Sergey V. Alekseenko
Abstract:
The paper reports the results of an experimental and numerical study of nonstationary swirling flow in an isothermal model of a vortex burner. The main source of the instability has been identified as the precessing vortex core (PVC) phenomenon. The PVC-induced flow pulsation characteristics, such as the precession frequency and its variation as a function of flow rate and swirl number, have been explored using acoustic probes. Additionally, pressure transducers were used to measure the pressure drops over the working chamber and across the vortex flow. The experiments also included mean velocity measurements using laser-Doppler anemometry. The features of the instantaneous flowfield generated by the PVC were analyzed employing a commercial CFD code (Star-CCM+) based on the Detached Eddy Simulation (DES) approach. The validity of the numerical code has been checked by comparing the calculated flowfield data with the obtained experimental results; in particular, it has been confirmed that the applied CFD code correctly reproduces the flow features.
Keywords: Acoustic probes, detached eddy simulation (DES), laser-Doppler anemometry (LDA), precessing vortex core (PVC).
8259 Design and Analysis of Annular Combustion Chamber for a Micro Turbojet Engine
Authors: Rashid Slaheldinn Elhaj Mohammed
Abstract:
The design of high performance combustion chambers for turbojet engines is considered one of the greatest challenges facing gas turbine designers, since the design approach depends on empirical correlations of data derived from previous design experience. The objective of this paper is to design a combustion chamber that suits the requirements of a micro-turbojet engine with 400 N output thrust that operates with kerosene as fuel. In this paper, only the preliminary calculations related to the annular type of combustion chamber are explained in detail. These calculations cover the evaluation of reference quantities, calculation of required dimensions, calculation of air distribution and pressure drop, estimation of the number and diameters of air admission holes, as well as aerodynamic considerations. The design process is then accompanied by an analytical procedure using the commercial CFD tool ANSYS CFX 16. After conducting the CFD analysis, the design process is iterated in order to obtain satisfactory results. It should be noted that the design of the fuel preparation and installation systems is beyond the scope of this work and will be discussed separately elsewhere.
Keywords: Annular combustion chamber, micro-turbojet engine, CFD analysis, pressure drop.
8258 Dynamics of Mini Hydraulic Backhoe Excavator: A Lagrange-Euler (L-E) Approach
Authors: Bhaveshkumar P. Patel, J. M. Prajapati
Abstract:
Excavators are high power machines used in the mining, agricultural and construction industries whose principal functions are digging (material removal), ground leveling and material transport operations. During the digging task there are certain unknown forces exerted by the bucket on the soil, and the digging operation is repetitive in nature. The digging task can be automated by an automatically controlled excavator system, which not only controls the forces but also follows the planned digging trajectories. To develop such a controller for automated excavation, a dynamic model is required to describe the behavior of the control system during the digging operation and the motion of the excavator with time. The presented work describes a dynamic model, needed for controller design, derived by applying the Lagrange-Euler approach. The developed dynamic model is intended for further development of an automated excavation control system for light duty construction work and can be applied to heavy duty or all types of backhoe excavators.
Keywords: Backhoe excavator, controller, digging, excavation, trajectory.
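For reference, applying the Lagrange-Euler formulation with L = T − V to the boom-arm-bucket linkage, with q the vector of joint variables and τ the generalized forces (into which bucket-soil interaction forces would enter), yields the standard manipulator form:

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i} = \tau_i
\quad\Longrightarrow\quad
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = \tau
```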
8257 STLF Based on Optimized Neural Network Using PSO
Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi
Abstract:
The quality of short term load forecasting (STLF) can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for performing an important task in this process, i.e. optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a novel random optimization method based on swarm intelligence, with a powerful ability for global optimization. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to STLF of a local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method, optimized by PSO, can quicken the learning speed of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to calculate but also practical and effective. It provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Keywords: Large Neural Network, Short-Term Load Forecasting, Particle Swarm Optimization.
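A minimal sketch of the core idea, PSO searching neural-network weights instead of backpropagation, on a toy one-day-ahead load curve; swarm hyperparameters and the tiny network are illustrative assumptions, and the paper's structure optimization is not reproduced:

```python
# Sketch: PSO trains the weights of a small feed-forward network on a toy
# daily-load signal. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 14, 1 / 24)                     # two weeks, hourly samples
load = 1 + 0.3 * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)
# Inputs: previous 24 hours; target: load 24 hours after the window ends.
X = np.stack([load[i:i + 24] for i in range(len(load) - 48)])
y = np.array([load[i + 47] for i in range(len(load) - 48)])

def predict(w, X):
    """24 -> 8 -> 1 network; 209 parameters packed in vector w."""
    W1, b1 = w[:192].reshape(24, 8), w[192:200]
    W2, b2 = w[200:208].reshape(8, 1), w[208]
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

dim, n_particles = 209, 30
pos = rng.normal(0, 0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

inertia, c1, c2 = 0.7, 1.5, 1.5                  # assumed PSO coefficients
for _ in range(200):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()

print("best training MSE:", round(pbest_f.min(), 4))
```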
8256 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach
Authors: Imen Dhaou
Abstract:
This study examines the conditional Value at Risk by applying a GJR-EVT-copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data with a bivariate GJR-GARCH model, from which we extract the filtered residuals and then apply the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. After that, we use pair-copulas to find the optimal portfolio risk dependence structure. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results show the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we find that the optimal investment concentrates on the Islamic-conventional US market index pair, which receives a high investment proportion, whereas all other index pairs receive low investment proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management and the diversification advantages of these markets.
Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization.
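The final estimation step, computing VaR and CVaR from Monte Carlo scenarios, reduces to a quantile and a tail mean. The sketch below applies it to placeholder correlated-normal scenarios; the GJR-GARCH-EVT-copula simulation itself is not reproduced, and the weights and parameters are illustrative:

```python
# Sketch: VaR and CVaR of a two-asset portfolio from Monte Carlo scenarios.
import numpy as np

rng = np.random.default_rng(6)
n_sims, alpha = 100_000, 0.99
# Placeholder scenario generator standing in for GJR-GARCH-EVT-copula draws.
cov = np.array([[0.0001, 0.00006], [0.00006, 0.00012]])
returns = rng.multivariate_normal([0.0002, 0.0001], cov, size=n_sims)

w = np.array([0.5, 0.5])                     # equally weighted portfolio
port = returns @ w
var = -np.quantile(port, 1 - alpha)          # loss exceeded with prob 1-alpha
cvar = -port[port <= -var].mean()            # mean loss beyond the VaR
print(f"99% VaR: {var:.4%}  99% CVaR: {cvar:.4%}")
```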
8255 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes
Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani
Abstract:
Developing methods to estimate gene function is an important task in bioinformatics. One approach to annotation is identifying the metabolic pathway in which genes are involved. Since gene expression data reflect various intracellular phenomena, those data are considered to be related to gene function. However, it has been difficult to estimate gene function with high accuracy, and the low accuracy is considered to be caused by the difficulty of accurately measuring gene expression: even when measured under the same conditions, gene expressions usually vary. In this study, we propose a feature extraction method that focuses on the variability of gene expressions to estimate genes' metabolic pathways accurately. First, we estimate the distribution of each gene's expression from replicate data. Next, we calculate the similarity between all gene pairs by KL divergence, a method for calculating the similarity between distributions. Finally, we use the similarity vectors as feature vectors and train a multiclass SVM to identify the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. As a result, the accuracy obtained with our method was higher than that obtained from the raw gene expression data. Thus, our method combined with KL divergence is useful for identifying genes' metabolic pathways.
Keywords: Metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning.
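A condensed sketch of the described pipeline under the assumption that each gene's replicate expressions are summarized by a Gaussian (the abstract does not state the distribution family): pairwise KL divergences form each gene's similarity vector, which feeds a multiclass SVM. The data are synthetic, assuming scikit-learn.

```python
# Sketch: per-gene Gaussian fits -> pairwise KL-divergence feature vectors ->
# multiclass SVM. Gaussian marginals and synthetic data are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_genes, n_reps, n_pathways = 90, 8, 3
pathway = np.repeat(np.arange(n_pathways), n_genes // n_pathways)
# Synthetic replicates: pathway shifts the mean and spread of expression.
expr = rng.normal(pathway[:, None] * 1.0, 0.5 + 0.2 * pathway[:, None],
                  size=(n_genes, n_reps))

mu, sd = expr.mean(axis=1), expr.std(axis=1, ddof=1)

def kl(i, j):
    """KL( N(mu_i, sd_i^2) || N(mu_j, sd_j^2) ), closed form."""
    return (np.log(sd[j] / sd[i])
            + (sd[i]**2 + (mu[i] - mu[j])**2) / (2 * sd[j]**2) - 0.5)

# Each gene's feature vector: its KL divergence to every gene.
F = np.array([[kl(i, j) for j in range(n_genes)] for i in range(n_genes)])

scores = cross_val_score(SVC(kernel="rbf", C=1.0), F, pathway, cv=5)
print("CV accuracy:", scores.mean().round(3))
```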