Search results for: Software Complexity
1666 Evaluating the Understanding of the University Students (Basic Sciences and Engineering) about the Numerical Representation of the Average Rate of Change
Authors: Saeid Haghjoo, Ebrahim Reyhani, Fahimeh Kolahdouz
Abstract:
The present study aimed to evaluate the understanding of students at universities in Tehran (Iran) of the numerical representation of the average rate of change, based on the Structure of Observed Learning Outcomes (SOLO) taxonomy. In this descriptive survey research, the statistical population comprised undergraduate students (basic sciences and engineering) at the universities of Tehran, from which a sample of 604 students was selected by random multi-stage clustering. The measurement tool was a task whose face and content validity were confirmed by mathematics and mathematics education professors. Using Cronbach's alpha, the reliability coefficient of the task was found to be 0.95, confirming its reliability. The collected data were analyzed with descriptive and inferential statistics (chi-squared and independent t-tests) in SPSS-24. At the prestructural, unistructural, and multistructural levels of the SOLO model, basic science students showed a higher percentage of understanding than engineering students, although the outcome was reversed at the relational level; however, there was no significant difference in the average understanding of the two groups. The results indicated that students did not have a proper understanding of the numerical representation of the average rate of change and held misconceptions when using physics formulas to solve the problem. In addition, multiple solution methods, along with the dominant ones, were identified in the qualitative analysis. The study recommends that teachers and professors focus on context problems involving approximate calculations and numerical representation, make use of software, and make explicit the common relations between mathematics and physics in their teaching.
Keywords: Average rate of change, context problems, derivative, numerical representation, SOLO taxonomy.
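For reference, the quantity targeted by the task is the difference quotient over an interval. A minimal worked example (with illustrative numbers, not taken from the study's task):

```latex
\[
\overline{r}_{[a,b]} \;=\; \frac{f(b)-f(a)}{b-a},
\qquad\text{e.g. for } f(t)=t^2 \text{ on } [1,3]:\quad
\overline{r}_{[1,3]} \;=\; \frac{9-1}{3-1} \;=\; 4 .
\]
```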
1665 Greek Compounds: A Challenging Case for the Parsing Techniques of PC-KIMMO v.2
Authors: Angela Ralli, Eleni Galiotou
Abstract:
In this paper we describe the recognition process of Greek compound words using the PC-KIMMO software. We try to show certain limitations of the system with respect to the principles of compound formation in Greek. Moreover, we discuss the computational processing of phenomena such as stress and syllabification which are indispensable for the analysis of such constructions and we try to propose linguistically-acceptable solutions within the particular system.
Keywords: Morpho-phonological parsing, compound words, two-level morphology, natural language processing.
1664 Outer-Brace Stress Concentration Factors of Offshore Two-Planar Tubular DKT-Joints
Authors: Mohammad Ali Lotfollahi-Yaghin, Hamid Ahmadi
Abstract:
In the present paper, a set of parametric FE stress analyses is carried out for two-planar welded tubular DKT-joints under two different axial load cases. Analysis results are used to present general remarks on the effect of geometrical parameters on the stress concentration factors (SCFs) at the inner saddle, outer saddle, toe, and heel positions on the main (outer) brace. Then a new set of SCF parametric equations is developed through nonlinear regression analysis for the fatigue design of two-planar DKT-joints. An assessment study of these equations is conducted against the experimental data, and the satisfaction of the criteria regarding the acceptance of parametric equations is checked. Significant effort has been devoted by researchers to the study of SCFs in various uniplanar tubular connections. Nevertheless, for multi-planar joints covering the majority of practical applications, very few investigations have been reported due to the complexity and high cost involved.
Keywords: Offshore jacket structure, parametric equation, stress concentration factor (SCF), two-planar tubular KT-joint.
1663 Mathematical Approach for Large Deformation Analysis of the Stiffened Coupled Shear Walls
Authors: M. J. Fadaee, H. Saffari, H. Khosravi
Abstract:
Shear walls are used in most tall buildings to carry lateral loads. When openings for doors or windows are required in a shear wall, a special type called a "coupled shear wall" is used, which in some cases is stiffened by specific beams and is then called a "stiffened coupled shear wall". In this paper, a mathematical method for the geometrically nonlinear analysis of stiffened coupled shear walls is presented, and a suitable formulation for determining the critical load of stiffened coupled shear walls under gravity forces is proposed. The governing differential equations for equilibrium and deformation of the stiffened coupled shear walls are obtained by setting up the equilibrium equations and the moment-curvature relationships for each wall. Because of the complexity of the differential equations, the energy method is adopted for their approximate solution.
Keywords: Buckling load, differential equation, energy method, geometrically nonlinear analysis, mathematical method, stiffened coupled shear walls.
1662 Physical Verification Flow on Multiple Foundries
Authors: R. Abdul Wahab, R. Mohd Fuad Tengku Aziz, N. Othman, S. Saleh, N. Razali, M. Al Baqir Zinal Abidin, M. Hanif Md Nasir
Abstract:
This paper discusses how we optimize the physical verification flow in our IC design department, which handles various rule decks from multiple foundries. Our ultimate goal is to achieve faster time to tape-out and avoid schedule delays. Currently, physical verification runtimes and memory usage have increased drastically with the growing number of design rules, design complexity, and the size of the chips to be verified. To manage design violations, we use a number of solutions to reduce the number of violations that physical verification engineers need to check. The most important functions in physical verification are DRC (design rule check), LVS (layout vs. schematic), and XRC (extraction). Since we tape out designs at multiple foundries, we need a flow that improves the overall turnaround time and ease of use of the physical verification process. The demand for fast turnaround time is even more critical since physical design is the last stage before sending the layout to the foundries.
Keywords: Physical verification, DRC, LVS, XRC, flow, foundry, runset.
1661 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm
Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan
Abstract:
Nowadays, the rapid development of multimedia and the internet allows for the wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information; digital documents are also easy to copy and distribute, and therefore they face many threats. This is a major security and privacy issue: with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection because of the significance, accuracy, and sensitivity of the information. Protection systems can be classified more specifically into information hiding, information encryption, and combinations of hiding and encryption that increase information security. The strength of the science of information hiding lies in the non-existence of standard algorithms for hiding secret messages and in the randomness of hiding methods, such as combining several media (covers) with different methods to pass a secret message; in addition, there are no formal methods to follow to discover hidden data. For these reasons, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in an executable (EXE) file and to detect the hidden file; an implementation of a steganography system which embeds information in executable (EXE) files is described. The system tries to solve the problem of the size of the cover file and to make the result undetectable by anti-virus software. The system includes two main functions: the first is hiding the information in a Portable Executable (EXE) file, through four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); the second is extracting the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system has achieved its main goals, such as making the size of the cover file and the size of the information independent of each other, and ensuring that the resulting file does not conflict with anti-virus software.
Keywords: Cryptography, steganography, Portable Executable File.
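As a rough illustration of the hide/extract functions listed above (encrypt, then embed; locate, then decrypt), here is a minimal sketch that simply appends an AES-encrypted payload to a copy of the cover executable. It is not the authors' scheme (in particular it does nothing about anti-virus detectability), and the file names, separator marker, and the use of the Fernet (AES-based) primitive are illustrative assumptions.

```python
# Illustrative sketch only: encrypt a payload and append it to a copy of an
# executable. This mirrors the high-level steps in the abstract (choose cover,
# choose payload, encrypt, embed) but is NOT the authors' scheme and makes no
# attempt at anti-virus undetectability. File names are hypothetical.
from cryptography.fernet import Fernet  # AES-based authenticated encryption

MARKER = b"--HIDDEN--"  # hypothetical separator between cover bytes and payload

def hide(cover_exe: str, info_file: str, stego_exe: str) -> bytes:
    """Encrypt info_file and append it to a copy of cover_exe. Returns the key."""
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(open(info_file, "rb").read())
    with open(cover_exe, "rb") as src, open(stego_exe, "wb") as dst:
        dst.write(src.read())          # original executable bytes
        dst.write(MARKER + token)      # encrypted payload appended at the end
    return key

def extract(stego_exe: str, key: bytes) -> bytes:
    """Locate the appended payload and decrypt it."""
    data = open(stego_exe, "rb").read()
    token = data.split(MARKER, 1)[1]
    return Fernet(key).decrypt(token)
```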
1660 Simulation of Agri-Food Supply Chains
Authors: Sherine Beshara, Khaled S. El-Kilany, Noha M. Galal
Abstract:
Supply chain management has become more challenging with the emerging trends of globalization and sustainability. Lately, research related to perishable-product supply chains, in particular agricultural food products, has emerged. This is attributed to the additional complexity of managing this type of supply chain given the recently increased concern for public health, food quality, food safety, demand and price variability, and the limited lifetime of these products. Inventory management for agri-food supply chains is of vital importance due to product perishability and customers' demand for quality. This paper concentrates on developing a simulation model of a real-life case study of a two-echelon production-distribution system for agri-food products. The objective is to improve a set of performance measures by developing a simulation model that helps in evaluating and analysing the performance of these supply chains. Simulation results showed that the model can help in improving overall system performance.
Keywords: Agri-food supply chains, inventory model, modelling and simulation, supply chain.
1659 Phytoadaptation in Desert Soil Prediction Using Fuzzy Logic Modeling
Authors: S. Bouharati, F. Allag, M. Belmahdi, M. Bounechada
Abstract:
In the context of forecasting the ecological effects of desertification, the purpose of this study is to develop a predictive model of the growth and adaptation of species under arid environmental and bioclimatic conditions. The impact of climate change and desertification is the result of the combined effects of the magnitude and frequency of these phenomena. Since the data involved in the phytopathogenic process and in bacterial growth in arid soil arise in an uncertain environment because of their complexity, a suitable methodology for the analysis of these variables is needed; the basic principles of fuzzy logic are perfectly suited to this purpose. As input variables, we consider the physical parameters, the soil type, the nature of the bacteria, and the plant species concerned. The output variable is the adaptability of the species, expressed by the growth rate or by extinction. In conclusion, we present possible strategies for adaptation, with or without shifting plantation areas, and the nature of adequate vegetation.
Keywords: Climate changes, dry soil, Phytopathogenicity, Predictive model, Fuzzy logic.
1658 The Estimation of Human Vital Signs Complexity
Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius
Abstract:
Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches, which are based on homeostasis. Our previous studies have shown that evaluating the interactions of physiological signals using special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on interpreting changes in the functional states of the human body, combined with an analytical method based on matrix theory for physiological signal analysis, applied to high-risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes peculiar to each individual at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of alterations in the functional state of the body after patients undergo surgery can be complemented by data obtained from the suggested approach of evaluating the interactions of functional variables.
Keywords: Cardiac diseases, Complex systems theory, ECG analysis, matrix analysis.
1657 Estimation of Skew Angle in Binary Document Images Using Hough Transform
Authors: Nandini N., Srikanta Murthy K., G. Hemantha Kumar
Abstract:
This paper presents two novel techniques for skew estimation of binary document images. These algorithms are based on connected component analysis and the Hough transform, and both focus on reducing the amount of input data provided to the Hough transform. In the first method, referred to as the word centroid approach, the centroids of selected words are used for skew detection. In the second method, referred to as the dilate & thin approach, the selected characters are blocked and dilated to obtain word blocks, and thinning is then applied; the final image fed to the Hough transform contains the thinned coordinates of the word blocks in the image. The methods have been successful in reducing the computational complexity of Hough transform-based skew estimation algorithms. Promising experimental results are also provided to prove the effectiveness of the proposed methods.
Keywords: Dilation, document processing, Hough transform, optical character recognition, skew estimation, thinning.
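To make the data-reduction idea concrete, the following is a minimal sketch (not the authors' implementation) of a Hough-style accumulator that is fed only word centroids rather than all foreground pixels; the synthetic centroids and parameter values are illustrative assumptions.

```python
# Minimal sketch of the word-centroid idea: feed only word centroids (instead of
# all foreground pixels) to a Hough accumulator and read the skew angle off the
# dominant line orientation. Illustrative only; not the authors' implementation.
import numpy as np

def estimate_skew(centroids, angle_range=15.0, angle_step=0.1):
    """centroids: array of (x, y) word centroids; returns skew angle in degrees."""
    pts = np.asarray(centroids, dtype=float)
    angles = np.deg2rad(np.arange(-angle_range, angle_range + angle_step, angle_step))
    diag = np.hypot(pts[:, 0].max(), pts[:, 1].max())
    rho_res = 2.0
    n_rho = int(2 * diag / rho_res) + 1
    acc = np.zeros((len(angles), n_rho), dtype=int)
    for i, th in enumerate(angles):
        # text lines at skew angle th satisfy  rho = -x*sin(th) + y*cos(th)
        rho = -pts[:, 0] * np.sin(th) + pts[:, 1] * np.cos(th)
        bins = np.round((rho + diag) / rho_res).astype(int)
        np.add.at(acc[i], bins, 1)
    best_angle_idx = np.unravel_index(acc.argmax(), acc.shape)[0]
    return np.rad2deg(angles[best_angle_idx])

# Example: three synthetic text lines rotated by about 2 degrees
rng = np.random.default_rng(0)
xs = rng.uniform(0, 500, size=(3, 20))
ys = np.array([[100.0], [200.0], [300.0]]) + xs * np.tan(np.deg2rad(2.0))
pts = np.column_stack([xs.ravel(), ys.ravel()])
print(round(estimate_skew(pts), 1))  # close to 2.0
```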
1656 Curriculum Based Measurement and Precision Teaching in Writing Empowerment Enhancement: Results from an Italian Learning Center
Authors: I. Pelizzoni, C. Cavallini, I. Salvaderi, F. Cavallini
Abstract:
We present the improvement in writing skills obtained by 94 participants (aged between six and ten years) with special educational needs through a writing enhancement program based on fluency principles. The study was planned and conducted with a single-subject experimental design for each participant, in order to confirm the results reported in the literature. The results were obtained using the precision teaching (PT) methodology to increase the number of written graphemes per minute, measured in the pre- and post-test by curriculum-based measurement (CBM). Results indicated an increase in the number of written graphemes for all participants. The average overall duration of the intervention was 144 minutes over five months of treatment. These considerations have been analyzed taking into account the complexity of implementing measurement systems in real operational contexts (an Italian learning center) and important aspects of the replicability and cost-effectiveness of such interventions.
Keywords: Precision teaching, writing skills, CBM, Italian Learning Center.
1655 A Block Cipher for Resource-Constrained IoT Devices
Authors: Muhammad Rana, Quazi Mamun, Rafiqul Islam
Abstract:
In the Internet of Things (IoT), many devices are connected and accumulate a vast amount of data. These Internet-driven raw data need to be transferred securely to end-users via dependable networks. Consequently, the challenges of IoT security in various IoT domains are paramount. Cryptography is applied to secure networks for authentication, confidentiality, data integrity, and access control. However, due to the resource-constrained properties of IoT devices, conventional ciphers may not be suitable in all IoT networks. This paper designs a robust and effective lightweight cipher to secure the IoT environment and meet the resource-constrained nature of IoT devices. We propose a symmetric, block-cipher-based lightweight cryptographic algorithm. The proposed algorithm increases the complexity of the block cipher while maintaining the lowest computational requirements possible. It efficiently constructs the key register updating technique, reduces the number of encryption rounds, and adds a layer between the encryption and decryption processes.
Keywords: Internet of Things, IoT, cryptography, block cipher, s-box, key management, IoT security.
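For orientation only, here is a toy substitution-permutation cipher showing the generic building blocks the abstract names (an S-box, per-round key updating, and a reduced round count). The S-box, round count, and key schedule below are illustrative choices, not the authors' algorithm, and the sketch is not secure for real use.

```python
# Generic toy substitution-permutation cipher, shown only to illustrate the
# building blocks named in the abstract (S-box, round-key update, reduced round
# count). It is NOT the authors' algorithm and is not secure for real use.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT-style 4-bit S-box
INV_SBOX = [SBOX.index(i) for i in range(16)]
ROUNDS = 8  # reduced round count for constrained devices (illustrative choice)

def sub_nibbles(block16, box):
    return [box[n] for n in block16]            # block16: 16 nibbles = 64 bits

def rotate_key(key16, r):
    return key16[r % 16:] + key16[:r % 16]      # toy "key register updating"

def encrypt(block16, key16):
    state = block16[:]
    for r in range(ROUNDS):
        rk = rotate_key(key16, r)
        state = [s ^ k for s, k in zip(state, rk)]   # add round key
        state = sub_nibbles(state, SBOX)             # confusion
        state = state[1:] + state[:1]                # simple permutation (diffusion)
    return state

def decrypt(block16, key16):
    state = block16[:]
    for r in reversed(range(ROUNDS)):
        state = state[-1:] + state[:-1]              # undo permutation
        state = sub_nibbles(state, INV_SBOX)         # undo substitution
        rk = rotate_key(key16, r)
        state = [s ^ k for s, k in zip(state, rk)]   # remove round key
    return state

msg = [i % 16 for i in range(16)]
key = [(3 * i + 1) % 16 for i in range(16)]
assert decrypt(encrypt(msg, key), key) == msg
```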
1654 A New Quantile Based Fuzzy Time Series Forecasting Model
Authors: Tahseen A. Jilani, Aqil S. Burney, C. Ardil
Abstract:
Time series models have been used to make predictions of academic enrollments, weather, road accidents, casualties, stock prices, etc. Based on the concepts of quantile regression models, we have developed a simple time-variant, quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we have converted the statistical concept into a fuzzy concept by using fuzzy quantiles based on a fuzzy membership function ensemble. We have defined a fuzzy metric to use the trend forecast and calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method works best among the compared models with respect to model complexity and forecasting accuracy.
Keywords: Quantile regression, fuzzy time series, fuzzy logical relationship groups, heuristic trend prediction.
1653 Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise
Authors: K. Kamalanand, P. Mannar Jawahar
Abstract:
Many systems in the natural world exhibit chaos or nonlinear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. In this paper, the objective of the work, the proposed methodology, and the validation results are discussed in detail.
Keywords: Lyapunov exponents, unscented transformation, chaos theory, neural networks.
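As a point of reference for the noise-free case used in the validation (known chaotic maps), the sketch below computes the largest Lyapunov exponent of the logistic map by the classical time average of log|f'(x)|. It is an illustrative baseline, not the proposed unscented-transformation estimator, and the parameter values are assumptions.

```python
# Baseline check against a known chaotic map (the abstract validates its method
# on such maps): the largest Lyapunov exponent of the logistic map x <- r x (1-x)
# from the time average of log|f'(x)|. Illustrative only; this is the classical
# noise-free estimator, not the proposed unscented-transformation method.
import math

def logistic_lyapunov(r=4.0, x0=0.1, n=100_000, burn_in=1_000):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log|f'(x)|
    return acc / n

print(logistic_lyapunov())   # close to ln(2) ~ 0.693 for r = 4
```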
1652 Approximate Bounded Knowledge Extraction Using Type-I Fuzzy Logic
Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil
Abstract:
Using a neural network, we try to model the unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of the crisp neural network produce different values of the weight factors, which are directly affected by changes in different parameters. We propose the idea that, for each neuron in the network, quasi-fuzzy weight sets (QFWS) can be obtained by repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where we have multivariate crisp input that needs to be adjusted after iterative learning, as in claim amount distribution analysis. As real data are subject to noise and uncertainty, QFWS may be helpful in simplifying such complex problems. Secondly, these QFWS provide a good initial solution for training fuzzy neural networks with reduced computational complexity.
Keywords: Crisp neural networks, fuzzy systems, extraction of logical rules, quasi-fuzzy numbers.
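A minimal sketch of the repeated-simulation idea, assuming a single linear neuron as a stand-in for the crisp network: each run uses a different random initialization and sample order, and every weight is then summarized by the (min, mean, max) of the values obtained, i.e. an interval-like quasi-fuzzy weight set. The model, data, and learning settings are illustrative assumptions.

```python
# Sketch of the quasi-fuzzy weight set (QFWS) idea: repeat training of a crisp
# network under different random conditions and summarise each weight by the
# (min, mean, max) of the values obtained, i.e. a triangular-like fuzzy number.
# A single linear neuron stands in for the network; purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.2, size=200)  # noisy target

def train_once(seed, lr=0.05, epochs=200):
    r = np.random.default_rng(seed)
    w, b = r.normal(0, 1, size=2), 0.0
    for _ in range(epochs):
        idx = r.permutation(len(X))           # different sample order each run
        for i in idx:
            err = (X[i] @ w + b) - y[i]
            w -= lr * err * X[i]
            b -= lr * err
    return w

weights = np.array([train_once(s) for s in range(20)])   # 20 repeated simulations
qfws = {f"w{j}": (weights[:, j].min(), weights[:, j].mean(), weights[:, j].max())
        for j in range(weights.shape[1])}
print(qfws)   # each weight summarised as a (low, centre, high) quasi-fuzzy set
```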
1651 Active Power Flow Control Using A TCSC Based Backstepping Controller in Multimachine Power System
Authors: Naimi Abdelhamid, Othmane Abdelkhalek
Abstract:
With the current rise in the demand for electrical energy, present-day power systems, which are already large and complex, will continue to grow in both size and complexity. Flexible AC Transmission System (FACTS) controllers provide new facilities, both in steady-state power flow control and in dynamic stability control. The Thyristor Controlled Series Capacitor (TCSC) is a FACTS device used for the control of active power flow in electric power systems and for increasing the capacity of transmission lines. In this paper, a Backstepping Power Flow Controller (BPFC) for a TCSC in a multimachine power system is developed and tested. The simulation results show that the proposed TCSC controller is capable of controlling the transmitted active power and improving transient stability when compared with a conventional PI Power Flow Controller (PIPFC).
Keywords: FACTS, Thyristor Controlled Series Capacitor (TCSC), Backstepping, BPFC, PIPFC.
1650 Study on the Integration Schemes and Performance Comparisons of Different Integrated Solar Combined Cycle-Direct Steam Generation Systems
Authors: Liqiang Duan, Ma Jingkai, Lv Zhipeng, Haifan Cai
Abstract:
The integrated solar combined cycle (ISCC) system has a series of advantages, such as increasing the system power generation, reducing the cost of solar power generation, and lowering pollutant and CO2 emissions. In this paper, parabolic trough collectors with direct steam generation (DSG) technology are considered to replace the heat load of heating surfaces in the heat recovery steam generator (HRSG) of a conventional natural gas combined cycle (NGCC) system containing a PG9351FA gas turbine and a triple-pressure HRSG with reheat. The detailed model of the NGCC system is built in ASPEN PLUS software, and the parabolic trough collectors with DSG technology are modeled in EBSILON software. ISCC-DSG systems with the replacement of one, two, three, and four heating surfaces are studied in this paper. Results show that: (1) the ISCC-DSG systems with the replacement heat loads of HPB, HPB+LPE, HPE2+HPB+HPS, and HPE1+HPE2+HPB+HPS are the best integration schemes when one, two, three, and four stages of heating surfaces, respectively, are partly replaced by the parabolic trough solar collectors with DSG technology. (2) Both the changes in feed water flow and in the heat load of the heating surfaces in ISCC-DSG systems with the replacement of multi-stage heating surfaces are smaller than those in ISCC-DSG systems with the replacement of a single heating surface. (3) ISCC-DSG systems with the replacement of the HPB+LPE heating surfaces can increase the solar power output significantly. (4) The ISCC-DSG system with the replacement of the HPB heating surface has the highest solar-thermal-to-electricity efficiency (47.45%) and solar radiation energy-to-electricity efficiency (30.37%), as well as the highest exergy efficiency of the solar field (33.61%).
Keywords: HRSG, integration scheme, parabolic trough collectors with DSG technology, solar power generation.
1649 A Calibration Approach towards Reducing ASM2d Parameter Subsets in Phosphorus Removal Processes
Authors: N.Boontian
Abstract:
A novel calibration approach that aims to reduce ASM2d parameter subsets and decrease the model complexity is presented. This approach does not impose a high computational demand and reduces the number of modeling parameters required to achieve ASM calibration by employing a sensitivity and iteration methodology. Parameter sensitivity is a crucial factor, and the iteration methodology enables refinement of the simulated parameter values. During the iteration process, parameter values are determined in descending order of their sensitivities. The number of iterations required is equal to the number of model parameters in the parameter significance ranking. This approach was applied to the ASM2d model to evaluate EBPR phosphorus removal, and it was successful. The simulation results provided the calibration parameters YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2 AUT, and KNH4 AUT, which corresponded to the available experimental data.
Keywords: ASM2d, calibration approach, iteration methodology, sensitivity, phosphorus removal.
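The sensitivity-and-iteration logic can be illustrated with a toy example: rank parameters by a local sensitivity measure, then run one calibration pass per parameter in descending order of sensitivity. The two-parameter decay model, finite-difference sensitivity measure, and search ranges below are illustrative assumptions standing in for ASM2d, not the paper's implementation.

```python
# Sketch of the sensitivity-and-iteration idea: rank parameters by a local
# sensitivity measure, then calibrate them one at a time in descending order of
# sensitivity (one iteration per ranked parameter). A toy two-parameter model
# stands in for ASM2d; purely illustrative, not the paper's implementation.
import numpy as np

t = np.linspace(0, 10, 50)
def model(p):                       # hypothetical stand-in for the ASM2d output
    k, y0 = p
    return y0 * np.exp(-k * t)

true_p = np.array([0.35, 8.0])
observed = model(true_p) + np.random.default_rng(1).normal(0, 0.05, t.size)

def sse(p):
    return np.sum((model(p) - observed) ** 2)

p = np.array([0.2, 5.0])            # initial guess
# 1) local sensitivity: finite-difference gradient of SSE, scaled by |p_i|
sens = np.array([abs(sse(p + h) - sse(p - h)) / (2 * 1e-4) * abs(p[i])
                 for i, h in enumerate(np.eye(2) * 1e-4)])
order = np.argsort(sens)[::-1]      # descending sensitivity ranking

# 2) one calibration iteration per parameter, most sensitive first
for i in order:
    grid = p[i] * np.linspace(0.2, 3.0, 400)        # 1-D search around current value
    trials = [sse(np.where(np.arange(2) == i, g, p)) for g in grid]
    p[i] = grid[int(np.argmin(trials))]

print(dict(zip(["k", "y0"], np.round(p, 3))))        # calibrated estimate of (k, y0)
```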
1648 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece
Authors: N. Samarinas, C. Evangelides, C. Vrekos
Abstract:
The aim of this paper is the comparison of three different methods for producing fuzzy tolerance relations for rainfall data classification; the three methods are the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall height. The three methods were used to express these data as a fuzzy relation, and the resulting fuzzy tolerance relation is transformed into an equivalence relation by max-min composition for all three methods. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations. Stations with high similarity can be used interchangeably in water resource management scenarios, or to augment the data of one station with data from another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.
Keywords: Classification, fuzzy logic, tolerance relations, rainfall data.
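A minimal sketch of one of the compared pipelines (cosine amplitude, followed by repeated max-min composition to reach an equivalence relation, then a confidence-level cut): the random series, the seven-station shape, and the confidence level are illustrative assumptions, not the study's data.

```python
# Sketch of one of the three compared methods: build a fuzzy tolerance relation
# with the cosine amplitude method, then turn it into an equivalence relation by
# repeated max-min composition, and read station classes off a confidence level.
# Data below are random stand-ins for the monthly rainfall series.
import numpy as np

rng = np.random.default_rng(0)
stations = rng.uniform(20, 120, size=(7, 240))      # 7 stations x 240 monthly values

def cosine_amplitude(X):
    """r_ij = |x_i . x_j| / sqrt(|x_i|^2 |x_j|^2)  (a fuzzy tolerance relation)."""
    dots = X @ X.T
    norms = np.sqrt(np.diag(dots))
    return np.abs(dots) / np.outer(norms, norms)

def max_min_closure(R):
    """Compose R with itself (max-min) until it becomes transitive (equivalence)."""
    while True:
        # (R o R)_ij = max_k min(R_ik, R_kj)
        RR = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(RR, R):
            return R
        R = np.maximum(R, RR)

R = max_min_closure(cosine_amplitude(stations))
alpha = 0.99                                          # degree of confidence (lambda-cut)
classes = (R >= alpha).astype(int)
print(classes)      # 1 marks stations grouped together at this confidence level
```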
1647 Effect of Plasticizer Additives on the Mechanical Properties of Cement Composite – A Molecular Dynamics Analysis
Authors: R. Mohan, V. Jadhav, A. Ahmed, J. Rivas, A. Kelkar
Abstract:
Cementitious materials are an excellent example of a composite material with complex hierarchical and random features that range from the nanometer (nm) to the millimeter (mm) scale. Multi-scale modeling of complex material systems requires starting from fundamental building blocks to capture the scale-relevant features through associated computational models. In this paper, molecular dynamics (MD) modeling is employed to predict the effect of a plasticizer additive on the mechanical properties of the key hydrated cement constituent calcium-silicate-hydrate (CSH) at the molecular, nanometer-scale level. Because the molecular configuration of CSH is complex and still unknown, a representative configuration widely accepted in the field, the mineral jennite, is employed. The effectiveness of molecular dynamics modeling for understanding the predicted influence of changes in material chemistry, based on molecular/nanoscale models, is demonstrated.
Keywords: Cement composite, Mechanical Properties, Molecular Dynamics, Plasticizer additives.
1646 Metaheuristic Algorithms for Decoding Binary Linear Codes
Authors: Hassan Berbia, Faissal Elbouanani, Rahal Romadi, Mostafa Belkasmi
Abstract:
This paper introduces two decoders for binary linear codes based on metaheuristics. The first one uses a genetic algorithm, and the second is based on the combination of a genetic algorithm with a feed-forward neural network. The decoder based on genetic algorithms (DAG), applied to BCH and convolutional codes, gives good performance compared to the Chase-2 and Viterbi algorithms, respectively, and reaches the performance of OSD-3 for some Residue Quadratic (RQ) codes. This algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the number of individuals per population and the number of generations. In the second algorithm, the search space, in contrast to DAG, which was limited to the codeword space, covers the whole binary vector space. It tries to avoid a great number of coding operations by using a neural network, which greatly reduces the complexity of the decoder while maintaining comparable performance.
Keywords: Block code, decoding, metaheuristic, genetic algorithm, neural network.
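To make the genetic-algorithm decoding idea concrete, here is a minimal sketch in which chromosomes are information words, fitness is the negative Euclidean distance between the re-encoded (BPSK-modulated) codeword and the received soft values, and standard tournament selection, single-point crossover, and bit-flip mutation are used. The (7,4) Hamming code, GA settings, and channel noise level are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a genetic-algorithm decoder for a binary linear code: the
# chromosomes are information words, the fitness is the (negative) Euclidean
# distance between the re-encoded/modulated codeword and the received soft
# values. A (7,4) Hamming code stands in for the BCH codes of the paper.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],       # generator matrix of a (7,4) Hamming code
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(info):                            # info: (4,) bits -> (7,) codeword bits
    return info @ G % 2

def fitness(pop, received):
    mod = 1 - 2 * (pop @ G % 2)              # BPSK: bit 0 -> +1, bit 1 -> -1
    return -np.sum((mod - received) ** 2, axis=1)

def ga_decode(received, pop_size=30, generations=40, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, 4))
    for _ in range(generations):
        fit = fitness(pop, received)
        a, b = rng.integers(0, pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[a] > fit[b], a, b)]        # tournament selection
        cut = rng.integers(1, 4, size=pop_size)
        mates = np.roll(parents, 1, axis=0)
        children = np.where(np.arange(4) < cut[:, None], parents, mates)  # crossover
        children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)  # mutation
        children[0] = pop[int(np.argmax(fit))]                # elitism
        pop = children
    best = pop[int(np.argmax(fitness(pop, received)))]
    return encode(best)

# Example: the all-zero codeword (+1 everywhere after BPSK) over an AWGN channel
rx = np.ones(7) + np.random.default_rng(1).normal(0, 0.6, 7)
print(ga_decode(rx))     # expected to recover the all-zero codeword
```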
1645 High Performance in Parallel Data Integration: An Empirical Evaluation of the Ratio Between Processing Time and Number of Physical Nodes
Authors: Caspar von Seckendorff, Eldar Sultanow
Abstract:
Many studies have shown that parallelization decreases efficiency [1], [2], and there are many reasons for this decrease. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be allocated to packages of identical size (i.e., tasks of identical complexity); the reason for this is unknown, heterogeneous input data, which results in variable task lengths. Process delay is determined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study shows that while process delay does initially increase with the introduction of more nodes, it ultimately decreases again after a certain point. The example makes use of the cloud computing platform Hadoop and is run inside Amazon's EC2 compute cloud. A stochastic model is set up which can explain this effect.
Keywords: Process delay, speedup, efficiency, parallel computing, data integration, E-Commerce, Amazon Elastic Compute Cloud (EC2), Hadoop, Nutch.
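One simple stochastic model of this kind treats the total runtime as the maximum of the per-node workloads when heterogeneous tasks are split across n nodes; the sketch below simulates it with a lognormal task-length distribution. The distribution, task counts, and node counts are invented for illustration and are not the parameters fitted in the paper.

```python
# Sketch of the kind of stochastic model the abstract describes: total runtime is
# governed by the slowest node (process delay = max over nodes), and tasks of
# heterogeneous, unknown length are split across n nodes. Distribution parameters
# below are invented for illustration, not taken from the paper's measurements.
import numpy as np

rng = np.random.default_rng(7)

def simulated_runtime(n_nodes, n_tasks=1_000, trials=200):
    """Mean makespan when n_tasks heterogeneous tasks are dealt round-robin to n_nodes."""
    runtimes = []
    for _ in range(trials):
        task_len = rng.lognormal(mean=0.0, sigma=1.0, size=n_tasks)  # heterogeneous tasks
        node_load = np.array([task_len[i::n_nodes].sum() for i in range(n_nodes)])
        runtimes.append(node_load.max())        # delay is set by the slowest node
    return float(np.mean(runtimes))

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(simulated_runtime(n), 1))
# Speedup is sub-linear: the most-loaded node lags further behind the average
# load as the per-node task count shrinks, illustrating the process-delay effect.
```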
1644 Toward an Open Network Business Approach
Authors: Valentina Ndou, Laura Schina, Giuseppina Passiante, Pasquale Del Vecchio, Marco De Maggio
Abstract:
The aim of this paper is to propose a dynamic integrated approach, based on the modularity concept and on the business ecosystem approach, that exploits different eBusiness services for SMEs under an open business network platform. The adoption of this approach enables firms to collaborate locally to deliver the best product/service to customers, as well as globally by accessing international markets, interrelating directly with customers, and creating relationships and collaborations with worldwide actors. The paper is structured as follows: we start by offering an overview of the state of the art of eBusiness platforms among SMEs in the food and tourism sectors and then discuss their main drawbacks. The digital business ecosystem approach and the modularity concept are described as the theoretical ground in which our proposed integrated model is rooted. Finally, the proposed model is presented, along with a discussion of the main value creation potential it might offer SMEs.
Keywords: Complexity, digital business ecosystem, eBusiness platforms, modularity, networks.
1643 Enhanced Shell Sorting Algorithm
Authors: Basit Shahzad, Muhammad Tanvir Afzal
Abstract:
Many algorithms are available for sorting unordered elements; the most important among them are bubble sort, heap sort, insertion sort, and Shell sort. These algorithms have their own pros and cons. Shell sort, which is an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to minimize complexity and time as compared to insertion sort. Shell sort improves the efficiency of insertion sort by quickly shifting values to their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). The algorithm performs a number of iterations; in each iteration it swaps some elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of h. This work identifies an improvement in the conventional Shell sort algorithm: the "Enhanced Shell Sort algorithm" is an improvement in how the value of h is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent as compared to the existing algorithm. In some other cases this enhancement was found to be faster than the existing algorithms available.
Keywords: Algorithm, computation, Shell, sorting.
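For reference, here is a minimal sketch of conventional Shell sort with the original halving gap sequence, to make the role of h in the gapped insertion passes concrete; the paper's enhanced formula for h is not reproduced here, and the example input is arbitrary.

```python
# Minimal sketch of conventional Shell sort with Shell's original gap sequence
# (h repeatedly halved down to 1), to make the h-based gapped insertion concrete.
# The paper's "Enhanced Shell Sort" changes how h is computed; that enhanced
# formula is not reproduced here.
def shell_sort(a):
    n = len(a)
    h = n // 2                      # Shell's original choice for the first gap
    while h >= 1:
        for i in range(h, n):       # gapped insertion sort for this value of h
            value, j = a[i], i
            while j >= h and a[j - h] > value:
                a[j] = a[j - h]     # shift values quickly toward their destination
                j -= h
            a[j] = value
        h //= 2                     # the last pass runs with h == 1
    return a

print(shell_sort([23, 5, 42, 8, 16, 4, 15, 9]))   # [4, 5, 8, 9, 15, 16, 23, 42]
```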
1642 Comparison of Seismic Retrofitting Methods for Existing Foundations in Seismological Active Regions
Authors: Peyman Amini Motlagh, Ali Pak
Abstract:
Seismic retrofitting of important structures is essential in seismically active zones. Its importance is doubled for buildings such as schools, hospitals, and bridges, because they are required to remain serviceable even after a major earthquake. Generally, seismic retrofitting codes have paid little attention to the retrofitting of foundations, due to its construction complexity. In this paper, different methods for the seismic retrofitting of tall buildings' foundations are discussed and evaluated. Foundations are considered in three categories: first, foundations in danger of liquefaction of their underlying soil; second, foundations located on slopes in seismically active regions; third, foundations designed according to former design codes that may show structural defects under earthquake loads. After describing different methods used in different countries for retrofitting existing foundations in seismically active regions, a comprehensive comparison between these methods with regard to the above-mentioned categories is carried out. This paper gives some guidelines for choosing the best method for the seismic retrofitting of tall buildings' foundations in retrofitting projects.
Keywords: Existing foundation, landslide, liquefaction, seismic retrofitting.
1641 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka
Authors: H. M. N. L. Handagiripathira
Abstract:
The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm near the southern end to 55.4 mS/cm near the northern end of the lagoon, and salinity levels correspondingly varied from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples as sand (60.9%), fine sand (30.6%), and fine silt+clay (1.3%) at the sampling locations. The surface sediment samples, of wet weight 1 kg each from the upper 5-10 cm layer, were oven dried at 105 °C for 24 hours to a constant weight, homogenized, and sieved through a 2 mm sieve (IAEA technical series no. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An ultra-low-background broad energy high-purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, with Canberra Industries' Laboratory Sourceless Calibration Software (LabSOCS) mathematical efficiency calibration approach and the Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4, and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U, and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent, and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 µSv/year, and 74.6 µSv/year, respectively. The results of this study provide baseline information on the natural and artificial radioactive isotopes in the lagoon and on the associated radiological risk and environmental pollution.
Keywords: Gamma spectrometry, lagoon, radioactivity, sediments.
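For context, the radium equivalent activity and the external hazard index quoted above are conventionally computed from the 226Ra (often taken from the 238U series), 232Th, and 40K activity concentrations; assuming the standard expressions were used (the abstract does not restate them):

```latex
\[
\mathrm{Ra_{eq}} = A_{\mathrm{Ra}} + 1.43\,A_{\mathrm{Th}} + 0.077\,A_{\mathrm{K}},
\qquad
H_{ex} = \frac{A_{\mathrm{Ra}}}{370} + \frac{A_{\mathrm{Th}}}{259} + \frac{A_{\mathrm{K}}}{4810},
\]
\[
\text{e.g. } 24 + 1.43\times 67 + 0.077\times 181 \approx 134~\mathrm{Bq/kg},
\qquad
\frac{24}{370} + \frac{67}{259} + \frac{181}{4810} \approx 0.4,
\]
```

in broad agreement with the reported 137.3 Bq/kg and 0.4.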
1640 Translator Design to Model Cpp Files
Authors: Er. Satwinder Singh, Dr. K.S. Kahlon, Rakesh Kumar, Er. Gurjeet Singh
Abstract:
The most reliable and accurate description of the actual behavior of a software system is its source code. However, not all questions about the system can be answered directly by consulting this repository of information. What the reverse engineering methodology aims at is the extraction of abstract, goal-oriented "views" of the system that summarize relevant properties of the computation performed by the program. Concentrating on reverse engineering, we have modeled C++ files by designing a translator.
Keywords: Translator, Modeling, UML, DYNO, ISVis, TED.
1639 A Query Optimization Strategy for Autonomous Distributed Database Systems
Authors: Dina K. Badawy, Dina M. Ibrahim, Alsayed A. Sallam
Abstract:
A distributed database is a collection of logically related databases that cooperate in a transparent manner. Query processing uses a communication network for transmitting data between sites and is one of the challenges in the database world. The development of sophisticated query optimization technology is a reason for the commercial success of database systems, whose complexity and cost increase with the number of relations in the query. Mariposa, query trading, and query trading with processing task-trading are strategies developed for autonomous distributed database systems, but they cause high optimization cost because all nodes are involved in generating an optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT in which the seller nodes with the lowest cost are gradually given higher priorities in order to reduce the optimization time. We implement our proposed strategy and present the results and the analysis based on those results.
Keywords: Autonomous strategies, distributed database systems, high priority, query optimization.
1638 Improved Processing Speed for Text Watermarking Algorithm in Color Images
Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari
Abstract:
Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is hence adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for hackers. Experiments resulted in an embedding speed improvement of more than double the speed of the other considered systems (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
Keywords: Steganography, watermarking, private keys, time complexity measurements.
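Two generic ingredients the abstract relies on can be sketched directly: the standard NTSC RGB-to-YIQ conversion and the PSNR/MSE quality measure. The toy "text as noise" perturbation of the Y channel below is only a placeholder; the paper's actual random text-embedding scheme is not reproduced.

```python
# Sketch of the two generic ingredients the abstract relies on: the RGB -> YIQ
# conversion (standard NTSC matrix) and the PSNR/MSE quality measure. The actual
# random text-embedding scheme of the paper is not reproduced here.
import numpy as np

RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):                 # img: H x W x 3 array of floats in [0, 1]
    return img @ RGB_TO_YIQ.T

def yiq_to_rgb(img):
    return img @ np.linalg.inv(RGB_TO_YIQ).T

def psnr(original, watermarked, peak=1.0):
    mse = np.mean((original - watermarked) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
cover = rng.random((64, 64, 3))
yiq = rgb_to_yiq(cover)
yiq[..., 0] += rng.normal(0, 0.002, yiq[..., 0].shape)   # toy "text as noise" in Y channel
marked = np.clip(yiq_to_rgb(yiq), 0, 1)
print(round(psnr(cover, marked), 1), "dB")               # high PSNR = low visible distortion
```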
1637 Magnetic Field Analysis for a Distribution Transformer with Unbalanced Load Conditions by using 3-D Finite Element Method
Authors: P. Meesuk, T. Kulworawanichpong, P. Pao-la-or
Abstract:
This paper proposes a quasi-static mathematical model of the magnetic fields caused by the high-voltage conductors of a distribution transformer, using a set of second-order partial differential equations. Modifications for complex magnetic field analysis and time-harmonic simulation are also utilized. In this research, transformers were studied under both balanced and unbalanced loading conditions. Computer-based simulation utilizing the three-dimensional finite element method (3-D FEM) is exploited as a tool for visualizing the magnetic field distribution within the volume of a distribution transformer. The finite element method (FEM) is one of the popular numerical methods able to handle problem complexity in various forms, and at present it has been widely applied in most engineering fields. Even for problems of magnetic field distribution, the FEM is able to estimate solutions of Maxwell's equations governing power transmission systems. The computer simulation based on the FEM has been developed in the MATLAB programming environment.
Keywords: Distribution transformer, magnetic field, load unbalance, 3-D finite element method (3-D FEM).
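The quasi-static, time-harmonic field problem referred to above is usually written in terms of the magnetic vector potential; a standard form of the governing second-order equation is given here for reference, assuming the conventional A-formulation rather than the paper's exact notation:

```latex
\[
\nabla \times \left( \frac{1}{\mu}\, \nabla \times \mathbf{A} \right)
\;+\; j\omega\sigma \mathbf{A} \;=\; \mathbf{J}_s ,
\]
```

where A is the magnetic vector potential, μ the permeability, σ the conductivity, ω the angular frequency, and J_s the source current density in the high-voltage windings; the flux density then follows as B = ∇ × A.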