Search results for: Data cutting and sorting method
12815 Blockchain Security in MANETs
Authors: Nada Mouchfiq, Ahmed Habbani, Chaimae Benjbara
Abstract:
The security of the IoT has taken on great importance, especially after the recent evolution of the field, since it must take into account new transformations and applications. Blockchain is a new technology dedicated to data sharing. However, it does not work the same way in different systems with different operating principles. This article discusses network security using the blockchain to facilitate the sending of messages and information, enabling the use of new processes and autonomous coordination of devices. To do this, we review solutions proposed in the work of other researchers to ensure a high level of security in these networks. Finally, our article proposes a security method better adapted to our needs as a team working on ad hoc networks; this method is based on the principle of the blockchain and is named "MPR Blockchain".
Keywords: Ad hoc networks, blockchain, MPR, security.
12814 Poverty Measurement by Islamic Institutions
Authors: Mohamed Saladin Abdul Rasool, Arifin Md Salleh, Mohd Fauzi Mohd Harun
Abstract:
Islamic institutions in Malaysia play a variety of socioeconomic roles, such as poverty alleviation. To perform this role, these institutions face a major task in identifying the poverty group. Most of these institutions measure and operationalize poverty from the monetary perspective using variables such as income, expenditure or consumption. In practice, most Islamic institutions in Malaysia use the monetary approach in measuring poverty through the conventional Poverty Line Income (PLI) method and, more recently, the had al kifayah (HAK) method, which uses the total necessities of a household from an Islamic perspective. The objective of this paper is to present the PLI and the HAK methods. This micro-data study highlights the similarities and differences between the two methods. A survey aided by a structured questionnaire was carried out on 260 selected heads of households in the state of Selangor. The paper highlights several demographic factors that are associated with the three monetary indicators in the study, namely income, PLI and HAK. In addition, the study found that these monetary variables are significantly related to each other.
Keywords: Poverty line, multidimensional, necessities, monetary.
12813 Energy Consumption and Surface Finish Analysis of Machining Ti6Al4V
Authors: Salman Pervaiz, Ibrahim Deiab, Amir Rashid, Mihai Nicolescu, Hossam Kishawy
Abstract:
Greenhouse gas (GHG) emissions pose a major threat to global warming potential (GWP). Unfortunately, the manufacturing sector is one of the major contributors to the rapid increase in GHG emissions. Within manufacturing, electric power consumption is the major driver of CO2 emission. Titanium alloys are widely utilized in the aerospace, automotive and petrochemical sectors because of their high strength-to-weight ratio and corrosion resistance. They are termed difficult-to-cut materials because of their poor machinability rating. The present study analyzes energy consumption during cutting with reference to the material removal rate (MRR). Surface roughness was also measured in order to optimize energy consumption.
Keywords: Energy consumption, CO2 emission, Ti6Al4V.
12812 Manifold Analysis by Topologically Constrained Isometric Embedding
Authors: Guy Rosman, Alexander M. Bronstein, Michael M. Bronstein, Ron Kimmel
Abstract:
We present a new algorithm for nonlinear dimensionality reduction that consistently uses global information, and that enables understanding the intrinsic geometry of non-convex manifolds. Compared to methods that consider only local information, our method appears to be more robust to noise. Unlike most methods that incorporate global information, the proposed approach automatically handles non-convexity of the data manifold. We demonstrate the performance of our algorithm and compare it to state-of-the-art methods on synthetic as well as real data.
Keywords: Dimensionality reduction, manifold learning, multidimensional scaling, geodesic distance, boundary detection.
12811 A Fully Implicit Finite-Difference Solution to One Dimensional Coupled Nonlinear Burgers’ Equations
Authors: Vineet K. Srivastava, Mukesh K. Awasthi, Mohammad Tamsir
Abstract:
A fully implicit finite-difference method is proposed for the numerical solution of the one-dimensional coupled nonlinear Burgers’ equations on uniform mesh points. The method yields a system of nonlinear difference equations to be solved at each time step. Newton’s iterative method has been implemented to solve this assembled nonlinear system, and the linear system arising at each Newton iteration is solved by Gauss elimination with partial pivoting. Three test examples are carried out to illustrate the accuracy of the method. Solutions computed by the proposed scheme are compared with analytical solutions and with those already available in the literature by computing the L2 and L∞ errors.
Keywords: Burgers’ equation, Implicit Finite-difference method, Newton’s method, Gauss elimination with partial pivoting.
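As a rough illustration of the solution strategy described in the abstract above (not the authors' code), the Python sketch below performs the Newton iteration for a generic nonlinear system F(U) = 0 arising from one fully implicit time step, with each correction obtained by an LU factorization that uses partial pivoting; the residual and jacobian callables are hypothetical placeholders for the discretized coupled Burgers' equations.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_step(U, residual, jacobian, tol=1e-10, max_iter=25):
    """Solve the nonlinear system F(U) = 0 arising from one fully implicit
    time step by Newton's method; each correction is obtained by Gaussian
    elimination with partial pivoting (LU factorization)."""
    for _ in range(max_iter):
        F = residual(U)                 # nonlinear residual of the difference equations
        if np.linalg.norm(F, np.inf) < tol:
            break
        J = jacobian(U)                 # Jacobian of the difference equations
        lu, piv = lu_factor(J)          # LU with partial pivoting
        U = U - lu_solve((lu, piv), F)  # Newton correction
    return U
```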
12810 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data
Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad
Abstract:
Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data acquired through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, but classifying and identifying objects manually from images is difficult. Object recognition is often treated as a classification problem that can be performed using machine-learning techniques. Among the many machine-learning algorithms, classification is done here with supervised classifiers such as Support Vector Machines (SVM), since the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset was created for training and testing purposes; attributes were generated from pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques on image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.
12809 On a Pitch Duration Technique for Prosody Control
Authors: JongKuk Kim, HernSoo Hahn, Uei-Joong Yoo, MyungJin Bae
Abstract:
In this paper, we propose a method of altering duration in the frequency domain that controls prosody in real time after pitch alteration. A method for freely altering duration, one of the prosody parameters, could be used in several fields, such as pronunciation correction for people with speech impediments or language study. The pitch alteration used to control prosody is performed with the PSOLA synthesis method, which operates in the time domain; the duration of the pitch-altered speech, however, is changed in the frequency domain. In this paper, we alter the duration by means of the Fast Fourier Transform in the frequency domain. Consequently, the intelligibility of speech whose pitch and duration are both controlled decreases slightly compared with the case where only the pitch is changed, but the proposed algorithm obtains a higher MOS score for naturalness.
Keywords: PSOLA, pitch alteration, duration control.
12808 Stakeholder Analysis of Agricultural Drone Policy: A Case Study of the Agricultural Drone Ecosystem of Thailand
Authors: Thanomsin Chakreeves, Atichat Preittigun, Ajchara Phu-ang
Abstract:
This paper presents a stakeholder analysis of agricultural drone policies that meet the government's goal of building an agricultural drone ecosystem in Thailand. Firstly, case studies from other countries are reviewed. The stakeholder analysis method and qualitative data from the interviews are then presented, including data from the Institute of Innovation and Management, the Office of National Higher Education Science Research and Innovation Policy Council, agricultural entrepreneurs and farmers. The study and interview data are then employed to describe the current ecosystem and to guide the implementation of agricultural drone policies that are suitable for the ecosystem of Thailand. Finally, policy recommendations are made that the Thai government should adopt in the future.
Keywords: Drone public policy, drone ecosystem, policy development, agricultural drone.
12807 On the Solution of Fully Fuzzy Linear Systems
Authors: Hsuan-Ku Liu
Abstract:
A linear system is called a fully fuzzy linear system (FFLS) if all quantities in the system are fuzzy numbers. For the FFLS, we investigate its solution and develop a new approximate method for solving it. The numerical results show that our method is more accurate than the iterative Jacobi and Gauss-Seidel methods in approximating the solution of the FFLS.
Keywords: Fully fuzzy linear equations, iterative method, homotopy perturbation method, approximate solutions.
12806 Continuity of Defuzzification and Its Application to Fuzzy Control
Authors: Takashi Mitsuishi, Kiyoshi Sawada, Yasunari Shidama
Abstract:
A mathematical framework for the study of fuzzy approximate reasoning is presented in this paper. Two important defuzzification methods (area defuzzification and height defuzzification) are described, in addition to the center of gravity method, which is the best-known defuzzification method. The continuity of these defuzzification methods and their application to fuzzy feedback control are discussed.
Keywords: Fuzzy approximate reasoning, defuzzification, area method, height method.
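As background for the defuzzification methods discussed above, the following minimal sketch (not the authors' implementation) computes the standard centre-of-gravity value of a sampled output fuzzy set; the sampled universe and membership values are assumed inputs.

```python
import numpy as np

def center_of_gravity(x, mu):
    """Centroid defuzzification of a fuzzy set sampled on a grid.

    x  : 1-D array of points of the universe of discourse
    mu : membership degrees of the aggregated output fuzzy set at x
    """
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    return np.sum(x * mu) / np.sum(mu)   # weighted average of the support

# Example: triangular output set peaking at 0.5 on [0, 1]
x = np.linspace(0.0, 1.0, 101)
mu = np.maximum(0.0, 1.0 - 4.0 * np.abs(x - 0.5))
print(center_of_gravity(x, mu))          # ~0.5 by symmetry
```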
12805 Analysis of the Physical Behavior of Library Users in Reading Rooms through GIS: A Case Study of the Central Library of Tehran University
Authors: R. Pournaghi
Abstract:
Given the significance of measuring the daily use of study space in libraries in order to develop and reorganize the space and enhance its efficiency, the current study applied GIS to analyze the study halls of the Central Library and Document Center of Tehran University and determine how study desks and chairs were used by students. The study used a combination of survey-descriptive and system design methods. The survey-descriptive method was used to gather the required data, while the system design method was utilized for entering the data into ArcGIS, analyzing it, and displaying the results on maps of the library's study halls. The spatial database of study-hall use was built from the extent of occupancy of the space by library users and the maps of the study halls of the Central Library of Tehran University as the case study. The results showed that the Abooreyhan hall had the highest occupancy rate of desks and chairs compared to the other halls. The Science and Technology hall, with an average occupancy rate of 0.39 for the tables, had the lowest number of users, and the Rashid al-Din and Science and Technology halls, with an average occupancy rate of 0.40, had the lowest number of users for seats. In this study, the space occupied at different periods (mornings, afternoons, evenings, and over several months) was compared through GIS, which analyzed the spatial relationships effectively and efficiently. The output of this study can be used by administrators and librarians to determine the exact extent of use of the study-hall equipment, and librarians can use the output maps to design the library space more efficiently.
Keywords: Geospatial information system, spatial analysis, reading room, academic libraries, library users, Central Library of Tehran University.
12804 Retrospective Reconstruction of Time Series Data for Integrated Waste Management
Authors: A. Buruzs, M. F. Hatwágner, A. Torma, L. T. Kóczy
Abstract:
The development, operation and maintenance of Integrated Waste Management Systems (IWMS) essentially affect the sustainability of every region, and the features of such systems have great influence on all components of sustainability. In order to optimize these processes, a comprehensive mapping of the variables affecting the future efficiency of the system is needed, including analysis of the interconnections among the components and modeling of their interactions. The planning of an IWMS is based fundamentally on technical and economic opportunities and the legal framework. Modeling the sustainability and operational effectiveness of a particular IWMS is not within the scope of the present research. The complexity of such systems and the large number of variables require a complex approach to model the outcomes and future risks, one that can evaluate the logical framework of the factors composing the system and the interconnections between them. The authors of this paper studied the usability of the Fuzzy Cognitive Map (FCM) approach for modeling the future operation of IWMSs. The approach requires two input data sets: one is the connection matrix containing all the factors affecting the system in focus together with their interconnections; the other is the time series, a retrospective reconstruction of the weights and roles of the factors. This paper introduces a novel method to develop such time series by content analysis.
Keywords: Content analysis, factors, integrated waste management system, time series.
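For orientation only, a generic Fuzzy Cognitive Map iteration of the kind referred to above can be sketched as follows; the connection matrix W and the initial activations are the assumed inputs, and the sigmoid update rule is the common textbook form, not necessarily the exact one used by the authors.

```python
import numpy as np

def fcm_simulate(W, a0, steps=20, lam=1.0):
    """Generic Fuzzy Cognitive Map iteration: a(t+1) = f(a(t) + W^T a(t)),
    with a sigmoid squashing function. W[i, j] is the influence of concept i
    on concept j; a0 holds the initial activation of each factor."""
    f = lambda x: 1.0 / (1.0 + np.exp(-lam * x))   # sigmoid mapping into (0, 1)
    a = np.asarray(a0, float)
    history = [a.copy()]
    for _ in range(steps):
        a = f(a + W.T @ a)                          # propagate influences one step
        history.append(a.copy())
    return np.array(history)
```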
12803 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data
Authors: Wei Lei, Hui Chen, Lin Lu
Abstract:
Microscopic emission and fuel consumption models have been widely recognized as an effective way to quantify real traffic emissions and energy consumption when applied together with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model to capture the effects of historical accelerations, interacting with current speed, on emission and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China, by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), than two other models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, NOx emissions and fuel consumption, respectively.
Keywords: Emission, fuel consumption, light-duty vehicle, microscopic, modeling.
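A minimal sketch of the calibration step described above, assuming a hypothetical feature set (speed, acceleration, composite acceleration and a speed-acceleration interaction) rather than the paper's actual model form, could use ordinary least squares as follows; the MAPE helper mirrors the instantaneous validation measure mentioned in the abstract.

```python
import numpy as np

def fit_mef(speed, accel, comp_accel, target):
    """Multivariate least-squares calibration sketch: regress the measured
    second-by-second emission/fuel rate on speed, acceleration and a
    composite-acceleration term (hypothetical feature choice)."""
    X = np.column_stack([np.ones_like(speed), speed, accel,
                         comp_accel, speed * accel])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

def mape(actual, predicted):
    """Mean Absolute Percentage Error used for instantaneous validation."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```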
12802 Convergence and Comparison Theorems of the Modified Gauss-Seidel Method
Authors: Zhouji Chen
Abstract:
In this paper, the modified Gauss-Seidel method with the new preconditioner for solving the linear system Ax = b, where A is a nonsingular M-matrix with unit diagonal, is considered. The convergence property and the comparison theorems of the proposed method are established. Two examples are given to show the efficiency and effectiveness of the modified Gauss-Seidel method with the presented new preconditioner.
Keywords: Preconditioned linear system, M-matrix, Convergence, Comparison theorem.
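As background, the classical (unpreconditioned) Gauss-Seidel sweep for Ax = b is sketched below; the paper's contribution lies in the new preconditioner applied to the system before iterating, which is not reproduced here.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Classical Gauss-Seidel iteration for Ax = b (A with nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```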
12801 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation
Authors: S. K. Pillai, M. K. Jeyakumar
Abstract:
Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data for this purpose. The known target (actual) values and the output produced by the model are compared, and the differences between the two form the basis for estimating the parameters. In order to compare different models developed using the same data, different criteria are used. Data obtained from short-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We also considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.
12800 Method Development and Validation for the Determination of Cefixime in Pure and Commercial Dosage Forms by Spectrophotometry
Authors: S. N. H. Azmi, B. Iqbal, J. K. Al Mamari, K. A. Al Hattali, W. N. Al Hadhrami
Abstract:
A simple, accurate and precise direct spectrophotometric method has been developed for the determination of cefixime in tablets and capsules. The method is based on the reaction of cefixime with a mixture of potassium iodide and potassium iodate to form a yellow coloured product in an ethanol-distilled water medium at room temperature, which absorbs maximally at 352 nm. The factors affecting the reaction product were carefully studied and optimized. The validation parameters followed the International Conference on Harmonisation (ICH, USA) guidelines. The effect of common excipients used as additives was tested and the tolerance limit was calculated for the determination of cefixime. Beer's law is obeyed in the concentration range of 4–24 µg mL⁻¹ with an apparent molar absorptivity of 1.52 × 10⁴ L mol⁻¹ cm⁻¹ and a Sandell's sensitivity of 0.033 µg cm⁻² per 0.001 absorbance unit. The limits of detection and quantitation for the proposed method are 0.32 and 1.06 µg mL⁻¹, respectively. The proposed method has been successfully applied to the determination of cefixime in pharmaceutical formulations. The results obtained by the proposed method were statistically compared with the reference method using t- and F-values, and no significant difference was found between the two methods. The proposed method can be used as an alternative for routine quality control analysis of cefixime in pharmaceutical formulations.
Keywords: Spectrophotometry, cefixime, validation, pharmaceutical formulations.
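For context, detection and quantitation limits of the kind reported above are conventionally obtained from the ICH relations below, where σ is the standard deviation of the blank (or intercept) response and S is the slope of the calibration line; this is the standard formulation, not a restatement of the authors' data.

```latex
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad
\mathrm{LOQ} = \frac{10\,\sigma}{S}, \qquad
A = \varepsilon\, b\, c \quad \text{(Beer--Lambert law)}
```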
12799 A Semi-Implicit Phase Field Model for Droplet Evolution
Authors: M. H. Kazemi, D. Salac
Abstract:
A semi-implicit phase field method for droplet evolution is proposed. Using the phase field Cahn-Hilliard equation, we are able to track the interface in multiphase flow. The idea of a semi-implicit finite difference scheme is reviewed and employed to solve two nonlinear equations, namely the Navier-Stokes and the Cahn-Hilliard equations. The use of a semi-implicit method allows us to take larger time steps than explicit schemes. The governing equations are coupled and then solved by a GMRES solver (generalized minimal residual method) using modified Gram-Schmidt orthogonalization. To show the validity of the method, we apply it to the simulation of a rising droplet, a leaky dielectric drop and the coalescence of drops. The numerical solutions to the phase field model match well with existing solutions over a defined range of variables.
Keywords: Coalescence, leaky dielectric, numerical method, phase field, rising droplet, semi-implicit method.
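To make the linear-solver step concrete, the sketch below is a minimal GMRES with modified Gram-Schmidt Arnoldi orthogonalization of the kind named in the abstract; the operator A and right-hand side b are generic placeholders, not the paper's coupled Navier-Stokes/Cahn-Hilliard discretization.

```python
import numpy as np

def gmres_mgs(A, b, x0=None, m=50, tol=1e-8):
    """Minimal GMRES(m) with modified Gram-Schmidt Arnoldi, as a sketch of
    the solver class used for the coupled semi-implicit system."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # happy breakdown: exact solution found
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)  # small least-squares problem
    return x0 + Q[:, :m] @ y
```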
12798 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending
Abstract:
This paper describes a new approach which can be used to convert the experimental creep deformation data obtained from miniaturized thin-plate bending specimen tests to the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. Firstly, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates for a Norton power law under plane stress (b→0) and plane strain (b→∞) conditions were obtained, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since an analytical solution is not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain the solutions. Based on the FE results obtained for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane stress and plane strain conditions, approximate numerical solutions for the deformation rate are obtained by curve fitting. Using these solutions, the reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed by using a set of “theoretical” experimental data.
Keywords: Bending, creep, miniature specimen, thin plate.
12797 Study on the Impact of Size and Position of the Shear Field in Determining the Shear Modulus of Glulam Beam Using Photogrammetry Approach
Authors: Niaz Gharavi, Hexin Zhang
Abstract:
The shear modulus of a timber beam can be determined using the torsion test or the shear field test method. The shear field test method is based on measuring the shear distortion of the beam in the zone with constant transverse load in the standardized four-point bending test. The current code of practice advises using two metallic arms acting as an instrument to measure the diagonal displacement of the constructed square. The size and position of this square might influence the determination of the shear modulus. This study aimed to investigate the effects of the size and position of the square in the shear field test method. A binocular stereo vision system was employed to determine the 3D displacement of a grid of target points. Six glue-laminated beams were produced and tested. Analysis of Variance (ANOVA) was performed on the acquired data to evaluate the significance of the size effect and the position effect of the square. The results show that the size of the square has a noticeable influence on the value of the shear modulus, while the position of the square within the area of constant shear force does not affect the measured mean shear modulus.
Keywords: Shear field test method, structural-sized test, shear modulus of glulam beam, photogrammetry approach.
12796 Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering
Authors: Yunus Doğan, Ahmet Durap
Abstract:
Coastal regions are among the places most heavily used by the natural balance and the growing population. In coastal engineering, the most valuable data are wave behaviors, and the amount of such data becomes very large because observations are taken over periods of hours, days and months. In this study, statistical methods such as wave spectrum analysis methods and standard statistical methods have been used. The goal of this study is to discover profiles of different coastal areas by using these statistical methods and thus to obtain an instance-based data set from the big data for analysis with data mining algorithms. In the experimental studies, six sample data sets of wave behaviors, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, and different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be used in other branches collecting big data, such as medicine.
Keywords: Clustering algorithms, coastal engineering, data mining, data summarization, statistical methods.
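A minimal sketch of the summarize-then-cluster idea described above, assuming hypothetical per-record wave statistics (means, standard deviations and maxima of the observed columns) as the instance attributes, might look like this; it is not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize_and_cluster(observations, n_clusters=3):
    """observations: list of 2-D arrays, one per 20-minute record, with
    columns such as wave height and period (hypothetical layout).
    Each record is reduced to summary statistics (an 'instance'),
    then the instances are grouped with k-means."""
    instances = np.array([
        np.concatenate([rec.mean(axis=0), rec.std(axis=0), rec.max(axis=0)])
        for rec in observations
    ])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(instances)
    return instances, labels
```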
12795 STLF Based on Optimized Neural Network Using PSO
Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi
Abstract:
The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to the trial-and-error approach. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for performing an important task of this process, i.e. optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a novel random optimization method based on swarm intelligence with a powerful ability for global optimization. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to STLF of the local utility. Data are clustered according to differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method optimized by PSO can quicken the learning speed of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to calculate, but also practical and effective. It also provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Keywords: Large Neural Network, Short-Term Load Forecasting, Particle Swarm Optimization.
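The core PSO update used to evolve network structure and weights follows the standard velocity/position rule sketched below; the fitness callable (the forecasting error of a candidate ANN) is left abstract and is an assumption, not the paper's exact objective.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Standard particle swarm optimization: each particle encodes a candidate
    set of ANN weights (and, conceptually, structure choices); fitness(x)
    returns the forecasting error to be minimized."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros((n_particles, dim))                     # velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()
```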
12794 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit
Authors: Ahmed Elrewainy
Abstract:
Mixing occurs in hyperspectral imaging because of the low spatial resolution of the cameras used. The pure materials present in the scene, the “endmembers”, contribute to the spectra of the pixels in different proportions called “abundances”. Unmixing the data cube is an important task for identifying the endmembers present in the cube when analyzing these images. Unsupervised unmixing is done with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. This optimization problem is solved using the proximal method of iterative thresholding. The l1-norm basis pursuit optimization problem was used as a sparsity-based unmixing technique to unmix real and synthetic hyperspectral data cubes.
Keywords: Basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets.
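The proximal "iterative thresholding" scheme referred to above corresponds, in its simplest form, to ISTA applied to the l1-regularized least-squares (basis pursuit denoising) surrogate; the dictionary D below is an assumed input, not a spectral library from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, y, lam=0.1, iters=500):
    """Iterative soft-thresholding for min_a 0.5*||y - D a||^2 + lam*||a||_1,
    a sparsity-promoting surrogate for the basis pursuit problem."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - y)            # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```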
12793 Dimensional Modeling of HIV Data Using Open Source
Authors: Charles D. Otine, Samuel B. Kucel, Lena Trojer
Abstract:
The data modeling technique selected for an information system is determined by the objective of the resultant data model. Dimensional modeling is the preferred technique for data destined for data warehouses and data mining, presenting data models that ease analysis and queries, in contrast with entity-relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has subsequently led to the development of dimensional modeling. This has been significantly more developed and reported for commercial database management systems than for open-source ones, thereby making it less affordable for those in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open-source modeling tools. It aims to take advantage of the fact that the regions most affected by the HIV virus are also heavily resource constrained (sub-Saharan Africa) while holding large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open-source dimensional modeling tools. The use of open source would reduce the software costs of dimensional modeling and in turn make data warehousing and data mining more feasible, even for those in resource-constrained settings with data available.
Keywords: Database, data mining, data warehouse, dimensional modeling, open source.
12792 Robotic Hands: Design Review and Proposal of New Design Process
Authors: Jimmy W. Soto Martell, Giuseppina Gini
Abstract:
In this paper, we intend to ascertain the state of the art on multifingered end-effectors, also known as robotic hands or dexterous robot hands, and propose an experimental setup for an innovative task-based design approach involving cutting-edge technologies in motion capture. After an initial description of the capabilities and complexity of a human hand when grasping objects, in order to point out the importance of replicating it, we analyze the mechanical and kinematical structure of some important works carried out all around the world in the last three decades and also review the actuators and sensing technologies used. Finally, we describe a new design philosophy, proposing an experimental setup for the first stage that uses recent developments in human body motion capture systems and might lead to lighter and ever more dexterous robotic hands.
Keywords: Dexterous manipulation, grasp, multifingered end-effector, robotic hand.
12791 Electromagnetic Imaging of Inhomogeneous Dielectric Cylinders Buried in a Slab Medium by TE Wave Illumination
Authors: Chung-Hsin Huang, Chien-Ching Chiu, Chun Jen Lin
Abstract:
The electromagnetic imaging of inhomogeneous dielectric cylinders buried in a slab medium by transverse electric (TE) wave illumination is investigated. Dielectric cylinders of unknown permittivities are buried in the second space and scatter a group of unrelated waves incident from the first space, where the scattered field is recorded. By proper arrangement of the various unrelated incident fields, the difficulties of ill-posedness and nonlinearity are circumvented, and the permittivity distribution can be reconstructed through simple matrix operations. The algorithm is based on the moment method and the unrelated illumination method. Numerical results are given to demonstrate the capability of the inverse algorithm. Good reconstruction is obtained even in the presence of additive Gaussian random noise in the measured data. In addition, the effect of noise on the reconstruction is also investigated.
Keywords: Slab medium, unrelated illumination method, TE wave illumination, inhomogeneous cylinders.
12790 The BGMRES Method for Generalized Sylvester Matrix Equation AXB − X = C and Preconditioning
Authors: Azita Tajaddini, Ramleh Shamsi
Abstract:
In this paper, we present the block generalized minimal residual (BGMRES) method for solving the generalized Sylvester matrix equation. This method, however, may not converge for some problems. We therefore construct a polynomial preconditioner based on BGMRES and show why the polynomial preconditioner is superior to some block solvers. Finally, numerical experiments report the effectiveness of this method.
Keywords: Linear matrix equation, block GMRES, matrix Krylov subspace, polynomial preconditioner.
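For orientation, the generalized Sylvester equation above is equivalent to one large linear system via the Kronecker identity vec(AXB) = (Bᵀ ⊗ A) vec(X); the block Krylov solver operates on the matrix equation directly rather than on this much larger vectorized form.

```latex
AXB - X = C
\quad\Longleftrightarrow\quad
\left(B^{\mathsf{T}} \otimes A - I\right)\operatorname{vec}(X) = \operatorname{vec}(C)
```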
12789 Experimental Investigation on Residual Stresses in Welded Medium-Walled I-shaped Sections Fabricated from Q460GJ Structural Steel Plates
Authors: Qian Zhu, Shidong Nie, Bo Yang, Gang Xiong, Guoxin Dai
Abstract:
GJ steel is a new type of high-performance structural steel which has been increasingly adopted in practical engineering. Q460GJ structural steel has a nominal yield strength of 460 MPa, which, unlike that of normal structural steel, does not decrease significantly with increasing plate thickness. Thus, Q460GJ structural steel is normally used in medium-walled welded sections. However, research on the residual stress in GJ steel members is scarce, even though it is one of the vital factors that can affect member and structural behavior. This article investigates the residual stresses in welded I-shaped sections fabricated from Q460GJ structural steel plates by experimental tests. A total of four full-scale welded medium-walled I-shaped sections were tested by the sectioning method. Both the circular curve correction method and the straightening measurement method were adopted in this study to obtain the final magnitude and distribution of the longitudinal residual stresses. In addition, this paper also explores the interaction between flanges and webs, and based on a statistical evaluation of the experimental data, a multilayer residual stress model is proposed.
Keywords: Q460GJ structural steel, residual stresses, sectioning method, Welded medium-walled I-shaped sections.
12788 A Distance Function for Data with Missing Values and Its Application
Authors: Loai AbdAllah, Ilan Shimshoni
Abstract:
Missing values are common in real-world data. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we define in this paper a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes distances between data points. Because its performance depends strongly on the chosen distance measure, we opted for the k nearest neighbor (kNN) classifier to evaluate its ability to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
Keywords: Missing values, Distance metric, Bhattacharyya distance.
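A rough sketch of the idea, under simplifying assumptions (diagonal covariance and missing coordinates handled by the expected squared difference under that coordinate's empirical distribution), is given below; the authors' Bhattacharyya-based construction is more general than this illustration.

```python
import numpy as np

def distance_with_missing(p, q, mean, var):
    """Sketch of a distance tolerant of missing values (encoded as NaN):
    - both coordinates present -> squared difference scaled by the variance
      (diagonal Mahalanobis term);
    - one coordinate missing   -> expected squared difference of the known
      value to a random value from that coordinate's distribution,
      E[(a - X)^2] = (a - mean)^2 + var, again scaled by the variance;
    - both missing             -> E[(X - Y)^2]/var = 2 for independent draws."""
    d2 = 0.0
    for a, b, m, v in zip(p, q, mean, var):
        if np.isnan(a) and np.isnan(b):
            d2 += 2.0
        elif np.isnan(a):
            d2 += ((b - m) ** 2 + v) / v
        elif np.isnan(b):
            d2 += ((a - m) ** 2 + v) / v
        else:
            d2 += (a - b) ** 2 / v
    return np.sqrt(d2)
```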
12787 The Problem of Reconciling the Principle of Confidentiality in Foreign Investment Arbitration with the Public Interest
Authors: Bárbara Magalhães Bravo, Cláudia Figueiras
Abstract:
Economic globalization, through the liberalization of markets and capital, has boosted the economic development of nations and the need to settle disputes arising from foreign investment. Arbitration, with its inherent advantages such as swiftness, the specialized skills of arbitrators and impartiality, is a pacifying tool for the interests at stake. With the public interest safeguarded, we face the problem of confidentiality in arbitration, and the development of compelling mechanisms concerning transparency and the guarantee and protection of the interests involved is urgent. Through a bibliographic review, we survey the state of the art, going through the several solutions proposed and pointing out the most suitable; through jurisprudential analysis, we point out a solution to the conflict between confidentiality and the public interest. Transparency, inextricable from the public interest, requires that the arbitration process be open to all citizens. Transparency rules have been considered at UNCITRAL in an attempt to reconcile the need for publicity with the public interest, but they remain insufficient. The arbitration of foreign investment carries consequences for the citizens of the State, so mechanisms articulating the secrecy of arbitral procedures with the public interest should be adopted. The arbitration of foreign investment, being a tertium genus between international arbitration and administrative arbitration, would call for its own regulation in each State, in which the confidentiality rules and their exceptions could be identified. One should inquire where the limits of the protection of citizens' individual rights and of the public interest should give way to the principle of transparency.
Keywords: Arbitration, foreign investment, transparency, confidentiality, International Centre for Settlement of Investment Disputes, UNCITRAL.
12786 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling
Authors: Ali Ben Abbes, ImedRiadh Farah, Vincent Barra
Abstract:
Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has been the subject of intensive research, and timely, accurate data on the spatio-temporal changes of urban areas are required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. The input of our methodology is a sequence of satellite images I1, I2, ..., In acquired at different periods (t = 1, 2, ..., n). Firstly, a preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric and geometric corrections). The systematic study of global urban expansion in our methodology can then be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos (Madrid), Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.
Keywords: Multi-temporal satellite image, urban growth, Non-stationarity, stochastic modeling.