Search results for: parallel bft
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1162

922 A Review of Machine Learning for Big Data

Authors: Devatha Kalyan Kumar, Aravindraj D., Sadathulla A.

Abstract:

Big data are now rapidly expanding in engineering, science, and many other domains. The potential of such large or massive data is undoubtedly significant, and it calls for new ways of thinking and new learning techniques to address the various big data challenges. Machine learning is continuously unleashing its power in a wide range of applications. In this paper, we review the latest advances in research on machine learning for big data processing. First, we survey the machine learning methods used in recent studies, such as deep learning, representation learning, transfer learning, active learning, and distributed and parallel learning. We then focus on the challenges of machine learning for big data and their possible solutions.

Keywords: active learning, big data, deep learning, machine learning

Procedia PDF Downloads 404
921 Study of the Vertical Handoff in Heterogeneous Networks and Implement Based on Opnet

Authors: Wafa Benaatou, Adnane Latif

Abstract:

In this document we study in detail the performance of vertical handover in WLAN, WiMAX, and UMTS networks, followed by the vertical handoff procedure, the whole rounded off by simulations highlighting the performance of handover in heterogeneous networks. The goal of vertical handover is to provide several simultaneous accesses in heterogeneous networks in real time. This makes it possible for a user to use several networks (such as WLAN, UMTS, and WiMAX) in parallel, and for the system to switch automatically to another base station without disconnecting, as if there were no interruption, and with as little data loss as possible.

Keywords: vertical handoff, WLAN, UMTS, WiMAX, heterogeneous

Procedia PDF Downloads 357
920 Light and Scanning Electron Microscopic Studies on Corneal Ontogeny in Buffalo

Authors: M. P. S. Tomar, Neelam Bansal

Abstract:

Histomorphological, histochemical, and scanning electron microscopic observations were recorded in the developing cornea of buffalo fetuses. The samples of fetal cornea were collected in appropriate fixative from the slaughterhouse and the Veterinary Clinics, GADVASU, Ludhiana. The microscopic slides were stained for detailed histomorphological and histochemical studies. The scanning electron microscopic studies were performed at the Electron Microscopy and Nanobiology Lab, PAU, Ludhiana. In the present study, it was observed that in the 36-day (d) fetus the corneal epithelium was a well marked single-layered structure placed on the stromal mesenchyme. The cornea appeared as the continuation of the developing sclera. In the 47d fetus, the thickness of the cornea and its epithelium increased, and the epithelium began to become double-layered at the corneo-scleral junction. The corneal thickness at this stage suddenly increased, so the cornea was easily distinguished from the developing sclera. In the 49d-stage eye, the separation of the corneal endothelium from the stroma was evident as a single-layered epithelium, and the stroma possessed numerous fibroblasts. Descemet's membrane appeared at the 52d stage. The limbus area was separated by a depression from the developing cornea at the 61d stage. In the 65d stage, Bowman's layer was more developed. Fibroblasts were arranged parallel to each other and parallel to the surface of the developing cornea in the superficial layers; in the center of the stroma, these fibroblasts and fibers were arranged in a wavy pattern. The corneal epithelium began to stratify, a double-layered epithelium being present at this fetal age. In group II (>120 days), the corneal epithelium was stratified towards a well marked irido-corneal angle, and the stromal fibroblasts followed a completely parallel arrangement through the entire thickness. In full-term fetuses, a well developed cornea was observed: a fibrous layer with five distinct layers, from outside inwards the 7-8-layered corneal epithelium, the subepithelial basement membrane (Bowman's membrane), the substantia propria or stroma, the posterior limiting membrane (Descemet's membrane), and the posterior epithelium (corneal endothelium). The corneal thickness and connective tissue elements continued to increase: 121.39 ± 3.73 µm at the 36d stage, rising to 518.47 ± 4.98 µm in group III fetuses. In fetal life, the basement membranes of the corneal epithelium and endothelium depicted a strong to intense periodic acid-Schiff (PAS) reaction. At the irido-corneal angle, the endothelium of the blood vessels was also positive for PAS activity. However, the cornea was only mildly positive for the alcian blue reaction. The developing cornea showed a strong reaction for basic proteins in the outer epithelium and the inner endothelium layers. Under low-magnification scanning electron microscopy, the cornea showed two types of cells, light cells and dark cells. The light cells were smaller in size and had fewer microvilli on their surface than the dark cells. Despite these surface differences between light and dark cells, the corneal surface showed the same general pattern of microvilli studding all exposed surfaces out to the cell margin; these were long (of variable height), slightly tortuous and slender, and possessed a microvillus shaft with a very prominent knob.

Keywords: buffalo, cornea, eye, fetus, ontogeny, scanning electron microscopy

Procedia PDF Downloads 120
919 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only the relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects into these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high performance computing resources and analysis techniques currently available or under development, and builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel micro-service architecture and Big Data technologies. The system serves to demonstrate the applicability of micro-service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 236
918 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) contains a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms of various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in present common scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that offers write-once-run-anywhere portability and high execution performance for the modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors; it supports task partition, communication, and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate the executable code of the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping between the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed for implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system, with less than 35% of the resources, achieves performance similar to the pure-FPGA implementation at comparable energy efficiency.

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 78
917 Cloud Design for Storing Large Amount of Data

Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás

Abstract:

The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for all the data, and Hadoop to provide analysis of unstructured data. Providing high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers into our environment was also needed.

Keywords: cloud, GlusterFS, Hadoop, Juju, KVM, MAAS, OpenStack, virtualization

Procedia PDF Downloads 327
916 Ant-Tracking Attribute: A Model for Understanding Production Response

Authors: Prince Suka Neekia Momta, Rita Iheoma Achonyeulo

Abstract:

The Ant Tracking seismic attribute, applied over a 4-second seismic volume, revealed structural features triggered by clay diapirism, growth fault development, rapid deltaic sedimentation, and intense drilling. The attribute was extracted on vertical seismic sections and time slices. Mega tectonic structures such as growth faults and clay diapirs are visible on vertical sections, with minor lineaments or fractures obscured. Fractures are distinctively visible on time slices, yielding recognizable patterns that corroborate established geologic models. This seismic attribute model enabled the understanding of fluid flow characteristics and production responses. Three structural patterns were recognized in the field: major growth faults, minor faults or lineaments, and networks of fractures. Three growth faults mapped on seismic sections form major deformation bands delimiting the area into three blocks or depocenters. The growth faults trend E-W, dip down-to-south in the basin direction, and cut across the study area. The faults, initiating from about 2000 ms and extending up to 500 ms, tend to progress parallel and opposite to the growth direction of an upsurging diapiric structure. The diapiric structures form the major deformational bands, originating from great depths (below 2000 ms) and rising to about 1200 ms, where a series of sedimentary layers onlap and pinch out stratigraphically against the diapir. Several other secondary faults or lineaments forming parallel streaks to one another also accompany the growth faults. The fracture networks have no particular trend but surround the well area. The faults identified in the study area have potential for structural hydrocarbon traps, whereas the presence of fractures created a fractured-reservoir condition that enhanced rapid fluid flow, especially of water. High aquifer flow potential, aided by possible fracture permeability, resulted in a rapid decline in oil rate. Through the application of the Ant Tracking attribute, it is possible to obtain detailed interpretation of structures that can have a direct influence on oil and gas production.

Keywords: seismic, attributes, production, structural

Procedia PDF Downloads 15
915 Learning Recomposition after the Remote Period with Finalist Students of the Technical Course in the Environment of the IFPA, Paragominas Campus, Pará State, Brazilian Amazon

Authors: Liz Carmem Silva-Pereira, Raffael Alencar Mesquita Rodrigues, Francisco Helton Mendes Barbosa, Emerson de Freitas Ferreira

Abstract:

Due to the Covid-19 pandemic declared in March 2020 by the World Health Organization, the way of social coexistence across the planet was affected, especially in educational processes, with the implementation of the remote modality as a teaching strategy. This teaching-learning modality changed the routine and learning of basic education students, with serious consequences for the return to face-to-face teaching in 2021. In 2022, finalist students of the technical course in the environment at the Federal Institute of Education, Science and Technology of Pará (IFPA) – Campus Paragominas had their training process severely affected, having studied the initial half of their training in the remote modality, which compromised the practical classes, technical visits, and field classes essential to the formation of an environmental technician. With the objective of promoting the recomposition of these students' learning after the return to the face-to-face modality, an educational strategy was developed in the last period of the course. The teaching methodologies used were research as an educational principle, the integrative project, and parallel recovery, applied jointly, aiming to recompose basic knowledge of the natural sciences together with the technical knowledge of the environmental area applied to the course. The project assisted 58 finalist students of the environmental technical course. A research instrument was elaborated with parameters for evaluating environmental quality at 19 collection points in the Uraim River urban hydrographic basin, in Paragominas City, Pará, Brazilian Amazon. Students were separated into groups under the orientation of professors and laboratory assistants; in the field, they observed and evaluated the environmental conditions of the sites and collected physical data and water samples, which were taken to the chemistry and biology laboratories at Campus Paragominas for further analysis. With the results obtained, each group prepared a technical report on the environmental conditions of each evaluated point. This work methodology enabled the practical application of the theoretical knowledge received in various disciplines during remote teaching, integrating knowledge, people, skills, and abilities for the best technical training of the finalist students. At the end of the activity, the satisfaction of the students involved in the project was evaluated through a form, with the signing of an informed consent term, using a Likert scale as the evaluation parameter. The satisfaction survey results were: 82% satisfaction with the use of research projects within the disciplines attended; 84% with the revision of contents during the project; 76.9% with the field experience acquired; 86.2% with the laboratory experience; and 71.8% with the use of this methodology as parallel recovery. In addition to the excellent performance of the students in acquiring knowledge, it was possible to remedy the deficiencies caused by the absence of practical classes, technical visits, and field classes during the remote teaching period, fulfilling the desired educational recomposition.

Keywords: integrative project, parallel recovery, research as an educational principle, teaching-learning

Procedia PDF Downloads 33
914 Modified Montgomery for RSA Cryptosystem

Authors: Rupali Verma, Maitreyee Dutta, Renu Vig

Abstract:

Encryption and decryption in RSA are done by modular exponentiation, which is achieved by repeated modular multiplication. Hence, the efficiency of modular multiplication directly determines the efficiency of the RSA cryptosystem. This paper designs a modified Montgomery modular multiplication in which the addition of operands is computed by a 4:2 compressor. The basic logic operations in the addition are partitioned over two iterations such that parallel computations are performed. This reduces the critical path delay of the proposed Montgomery design. The proposed design and RSA are implemented on Virtex 2 and Virtex 5 FPGAs. The two factors, partitioning and parallelism, improve the frequency and throughput of the proposed design.
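
For readers unfamiliar with the underlying algorithm, a minimal word-level software sketch of Montgomery multiplication (REDC) inside an RSA-style squaring step is given below; the toy modulus and the pure-Python arithmetic are illustrative assumptions only, not the paper's 4:2-compressor FPGA design.

```python
# Sketch of Montgomery modular multiplication (REDC) in plain Python.
def montgomery_setup(n, r_bits):
    """Precompute constants for odd modulus n with R = 2**r_bits."""
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r      # n * n' = -1 (mod R)
    return r, n_prime

def montgomery_multiply(a_bar, b_bar, n, r_bits, n_prime):
    """Return (a_bar * b_bar * R^-1) mod n for operands in Montgomery form."""
    r_mask = (1 << r_bits) - 1
    t = a_bar * b_bar
    m = ((t & r_mask) * n_prime) & r_mask   # m = t * n' mod R
    u = (t + m * n) >> r_bits               # exact division by R is a shift
    return u - n if u >= n else u

n, r_bits = 2357, 12                        # toy odd modulus, R = 2**12 > n
r, n_prime = montgomery_setup(n, r_bits)
a_bar = (123 * r) % n                       # convert 123 to Montgomery form
sq_bar = montgomery_multiply(a_bar, a_bar, n, r_bits, n_prime)
square = montgomery_multiply(sq_bar, 1, n, r_bits, n_prime)  # leave the domain
assert square == (123 * 123) % n   # one squaring step of square-and-multiply
```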

Keywords: RSA, Montgomery modular multiplication, 4:2 compressor, FPGA

Procedia PDF Downloads 381
913 Heat and Mass Transfer of an Oscillating Flow in a Porous Channel with Chemical Reaction

Authors: Zahra Neffah, Henda Kahalerras

Abstract:

A numerical study is made of a parallel-plate porous channel subjected to an oscillating flow and an exothermic chemical reaction on its walls. The flow field in the porous region is modeled by the Darcy–Brinkman–Forchheimer model, and the finite volume method is used to solve the governing equations. The effects of the modified Frank-Kamenetskii (FKm) and Damköhler (Dm) numbers, the amplitude of oscillation (A), and the Strouhal number (St) are examined. The main results show an increase of the heat and mass transfer rates with A and St, and their decrease with FKm and Dm.
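
For orientation, the momentum balance of the Darcy–Brinkman–Forchheimer model named above is commonly written as follows; the notation (porosity ε, permeability K, Forchheimer coefficient C_F, effective viscosity μ_eff) is generic and not taken from the paper, which should be consulted for the exact formulation and boundary conditions.

```latex
\frac{\rho}{\varepsilon}\left(
    \frac{\partial \mathbf{u}}{\partial t}
  + \frac{(\mathbf{u}\cdot\nabla)\,\mathbf{u}}{\varepsilon}\right)
= -\nabla p
  + \mu_{\mathrm{eff}}\,\nabla^{2}\mathbf{u}
  - \frac{\mu}{K}\,\mathbf{u}
  - \frac{\rho\,C_F}{\sqrt{K}}\,\lvert\mathbf{u}\rvert\,\mathbf{u}
```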

Keywords: chemical reaction, heat and mass transfer, oscillating flow, porous channel

Procedia PDF Downloads 381
912 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to an expressive amount of electromagnetic energy available in the environment and to the expansion of smart applications technology. These applications have been used in Internet of Things devices and 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio-frequency-based energy harvesting technology is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the structure of the distributed hosting sensors, reducing cost, maintenance, and environmental impact. The rectenna is a piece of equipment composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that were designed and optimized using CST (Computer Simulation Technology) software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 126
911 Enhancement Effect of Electromagnetic Field on Separation of Edible Oil from Oil-Water Emulsion

Authors: Olfat A. Fadali, Mohamed S. Mahmoud, Omnia H. Abdelraheem, Shimaa G. Mohammed

Abstract:

The effect of an electromagnetic field (EMF) on the removal of edible oil from an oil-in-water emulsion by means of electrocoagulation was investigated in a rectangular batch electrochemical cell with DC current. Iron (Fe) plate anodes and stainless steel cathodes were employed as electrodes. The effects of different magnetic field intensities (1.9, 3.9, and 5.2 tesla), three different positions of the EMF (below, perpendicular, and parallel to the electrocoagulation cell), as well as operating time were investigated. The application of an electromagnetic field (5.2 tesla) raises the percentage of oil removal from 72.4% for traditional electrocoagulation to 90.8% after 20 min.

Keywords: electrocoagulation, electromagnetic field, oil-water emulsion, edible oil

Procedia PDF Downloads 501
910 Temperature Investigations in Two Type of Crimped Connection Using Experimental Determinations

Authors: C. F. Ocoleanu, A. I. Dolan, G. Cividjian, S. Teodorescu

Abstract:

In this paper we present temperature investigations of two types of superposed crimped connections using experimental determinations. All the samples use eight copper wires of 7.1 × 3 mm² crimped by two methods: the first method uses one crimp indent and the second is a proposed method with two crimp indents. The ferrule is a parallel one. We study the influence of the number and position of the crimp indents. The samples are heated by an A.C. current at different current values until the steady-state heating regime is reached. After obtaining the temperature values, we compare them and present the conclusions.

Keywords: crimped connections, experimental determinations, temperature, heat transfer

Procedia PDF Downloads 240
909 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry

Authors: Nirmal Yadav, Tanuja Srivastava

Abstract:

Shannon theory is an exact method to recover band-limited signals from their sampled values in a discrete implementation, using sinc interpolators. But sinc-based results are not very satisfactory for band-limited calculations, so convolution with a window function having compact support has been introduced. The convolution backprojection algorithm with a window function is an approximation algorithm. In this paper, the error arising from this approximate nature of the reconstruction algorithm has been calculated. The result is derived for fan beam projection data, which is faster to acquire than parallel beam projections.
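
For orientation, the window-modified reconstruction the abstract refers to takes, in the simpler parallel-beam geometry, the standard filtered backprojection form below, where P_theta is the 1-D Fourier transform of the projection at angle theta and W(ω) is the compactly supported window applied to the ramp filter |ω|; the paper's error analysis concerns the fan-beam analogue of this formula.

```latex
f(x,y) = \int_{0}^{\pi} \int_{-\infty}^{\infty}
  P_\theta(\omega)\,\lvert\omega\rvert\,W(\omega)\,
  e^{\,2\pi i \omega (x\cos\theta + y\sin\theta)}\,
  \mathrm{d}\omega\,\mathrm{d}\theta
```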

Keywords: computed tomography, convolution backprojection, Radon transform, fan beam

Procedia PDF Downloads 455
908 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. There exist many algorithms to find frequent patterns, but the Apriori algorithm always remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine-based Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for an Apriori algorithm based on multiple machines. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new algorithm, ReducedAll-Apriori, on Apache Flink, comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks in MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows a new iteration to start as soon as partial results of the earlier iteration are available; there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
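
For readers unfamiliar with the baseline, below is a minimal single-machine Apriori sketch; the distributed Flink/Spark machinery that is the paper's actual subject is deliberately not modeled, and the basket data are invented for illustration.

```python
# Plain Apriori: repeated candidate generation followed by a support scan.
def apriori(transactions, min_support):
    """Return all itemsets contained in at least min_support transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) >= min_support}
    frequent, k = set(current), 2
    while current:
        # Join step: merge frequent (k-1)-itemsets into k-item candidates.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Scan step: keep candidates that meet the support threshold.
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support}
        frequent |= current
        k += 1
    return frequent

baskets = [{"milk", "bread"}, {"milk", "beer"}, {"milk", "bread", "beer"}]
print(apriori(baskets, min_support=2))  # singletons plus {milk, bread}, {milk, beer}
```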

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 253
907 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs

Authors: Kari Bjorn

Abstract:

Team learning has been studied and modeled as the double-loop model and its variations. Also, metacognition has been suggested as a concept to describe how team learning is more than a simple sum of the individual learning of the team members. Team learning has a positive correlation with both the individual motivation of its members and the collective factors within the team. Here, the team learning of previously very independent members of two teaching teams is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are increasingly larger for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and experiences; this occurs in student teams. Second, the teaching of multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. Team formation phases have been identified and widely accepted, and team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two course units in parallel in engineering education. The first is Industrial Automation Technology and the second is Development of Medical Devices. The courses have separate student groups and are on different campuses; both run in parallel within an eight-week period. Both are taught by a group of four teachers with several years of teaching experience, but individually. The team role stress scale survey is given to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload, and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stages of the two teaching groups differ. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and help in understanding the maturity of functional, well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. Teaching teams, which deliver the highly intangible results of learning outcomes, are especially sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results for the team role stress factors applied to teaching teams.

Keywords: engineering education, stress, team role, team teaching

Procedia PDF Downloads 195
906 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most imperative data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images for generating a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets, and these points were employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation parameters provided accuracy of three to two meters. The best results were obtained using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large scale mapping standards. However, a well distributed set of GCPs is key to achieving such accuracy. The model is simple to apply and does not need substantial computation.
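
As a concrete illustration of the third model, below is a minimal sketch of estimating the 11 DLT parameters by linear least squares and reprojecting a ground point; the control point coordinates a caller would supply are placeholders, not the paper's MOLA/HiRISE tie points.

```python
import numpy as np

def dlt_fit(ground_xyz, image_xy):
    """Estimate the 11 DLT parameters from six or more ground control points."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(ground_xyz, image_xy):
        # Each GCP contributes two linear equations in the 11 unknowns.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        rhs += [x, y]
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float),
                            rcond=None)
    return L

def dlt_project(L, point):
    """Apply the fitted parameters: projective 3D-to-2D mapping."""
    X, Y, Z = point
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den)
```

Check-point RMSEs, as used in the paper, would then be computed from the differences between the dlt_project output and the digitized image coordinates of the ChkPs.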

Keywords: Mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 35
905 A Model for Analysis of the Induced Voltage of 115 kV On-Line Acting on Neighboring 22 kV Off-Line

Authors: Sakhon Woothipatanapan, Surasit Prakobkit

Abstract:

This paper presents a model for analyzing the induced voltage of transmission lines (energized) acting on neighboring distribution lines (de-energized). Due to environmental restrictions, 22 kV distribution lines need to be installed under 115 kV transmission lines. With the installation of two parallel circuits like this, an induced voltage arises which can cause harm to operators. This work was performed with ATP-EMTP modeling to analyze this phenomenon before field testing. Simulation results are used to find solutions to prevent danger to operators who are on the pole.

Keywords: transmission system, distribution system, induced voltage, off-line operation

Procedia PDF Downloads 576
904 Finetuned Transformers for Translating Multi Dialect Texts to MSA

Authors: Tahar Alimi, Rahma Boujelbane, Wiem Derouich, Lamia Hadrich Belguith

Abstract:

The machine translation task for low-resourced languages such as Arabic is challenging. Despite the appearance of sophisticated models based on the latest deep learning techniques, namely transfer learning and transformers, all models prove incapable of producing an acceptable translation covering the Arabic dialects, because these dialects have no official status. In this paper, we present a machine translation model designed to translate Arabic multidialectal content into Modern Standard Arabic (MSA), leveraging both new and existing parallel resources. The model achieved the best results for both the Levantine and Maghrebi dialects, with a BLEU score of 64.99.
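
A minimal fine-tuning skeleton in the Hugging Face style is sketched below for orientation; the checkpoint (t5-small, chosen only so the sketch runs), the one-sentence toy corpus, and all hyperparameters are assumptions, not the paper's actual setup.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainingArguments,
                          Seq2SeqTrainer)

checkpoint = "t5-small"  # placeholder checkpoint, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy dialect->MSA parallel corpus; a real run would load the paper's resources.
train_ds = Dataset.from_dict({"dialect": ["<dialect sentence>"],
                              "msa": ["<MSA sentence>"]})

def preprocess(batch):
    # Tokenize the source (dialect) and target (MSA) sides of each pair.
    inputs = tokenizer(batch["dialect"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["msa"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

args = Seq2SeqTrainingArguments(output_dir="dialect-to-msa",
                                num_train_epochs=3,
                                per_device_train_batch_size=16,
                                predict_with_generate=True)
trainer = Seq2SeqTrainer(model=model, args=args,
                         train_dataset=train_ds.map(preprocess, batched=True),
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model))
trainer.train()
```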

Keywords: Arabic translation, dialect translation, fine-tuning, MSA translation, transformer, translation

Procedia PDF Downloads 8
903 On the Approximate Solution of Continuous Coefficients for Solving Third Order Ordinary Differential Equations

Authors: A. M. Sagir

Abstract:

This paper derives four new schemes which are combined to form an accurate and efficient block method for the parallel or sequential solution of third order ordinary differential equations of the form y''' = f(x, y, y', y''), with y(α) = y₀, y'(α) = β, y''(α) = μ, together with associated initial or boundary conditions. The implementation strategies of the derived method show that the block method is consistent and zero stable, and hence convergent. The derived schemes were tested on stiff and non-stiff ordinary differential equations, and the numerical results obtained compared favorably with the exact solution.
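
The derived block schemes themselves are not reproduced here; as a hedged illustration of the problem class, the sketch below reduces an invented third-order test equation to a first-order system and solves it with a standard integrator, the kind of reference solution that derived schemes are typically validated against.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented test problem: y''' = -y, y(0) = 1, y'(0) = 0, y''(0) = 0.
def rhs(x, u):
    y, yp, ypp = u          # u = (y, y', y'')
    return [yp, ypp, -y]    # (y', y'', y''' = f(x, y, y', y''))

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0, 0.0], dense_output=True)
print(sol.sol(1.0)[0])      # reference value of y(1)
```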

Keywords: block method, hybrid, linear multistep, self-starting, third order ordinary differential equations

Procedia PDF Downloads 241
902 An Analytical Approach for the Fracture Characterization in Concrete under Fatigue Loading

Authors: Bineet Kumar

Abstract:

Many civil engineering infrastructures frequently encounter repetitive loading during their service life. Due to the inherent complexity observed in quasi-brittle materials like concrete, understanding the fatigue behavior of concrete still poses a challenge. Moreover, the characteristics of the fracture process zone ahead of the crack tip have been observed to differ under fatigue loading from the monotonic case. Therefore, it is crucial to comprehend the energy dissipation associated with the fracture process zone (FPZ) under repetitive loading. It is well known that the stiffness degradation due to cyclic loading provides a better understanding of the fracture behavior of concrete. Under repetitive load cycles, concrete members exhibit a two-stage stiffness degradation process: experimentally, the stiffness has been observed to first decrease with increasing crack length and subsequently increase. In this work, an attempt has been made to propose an analytical expression to predict the energy dissipation, and from it the stiffness degradation, as a function of crack length. Three-point bend specimens are considered in the present work to derive the formulations. In this approach, the expression for the resultant stress distribution below the neutral axis is derived by correlating the bending stress with the cohesive stresses developed ahead of the crack tip due to the existence of the fracture process zone. This resultant stress expression is utilized to estimate the dissipated energy due to crack propagation as a function of crack length. Further, the formulation for the stiffness degradation is developed by relating the dissipated energy to the work done; it can be used to predict the critical crack length and fatigue life. An attempt has been made to understand the influence of stress amplitude on the damage pattern by using information on the rate of stiffness degradation. It is demonstrated that with increasing stress amplitude, the damage/FPZ proceeds more in the direction of crack propagation than in the direction parallel to the span of the beam, which causes a lower rate of stiffness degradation per increment of crack length. Further, the effect of loading frequency is investigated in terms of stiffness degradation. Under low-frequency loading, the damage/FPZ is found to spread more in the direction parallel to the span, in turn reducing the critical crack length and fatigue life; in such a case, a higher rate of stiffness degradation is observed in comparison to high-frequency loading.

Keywords: fatigue life, fatigue, fracture, concrete

Procedia PDF Downloads 60
901 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the crazy amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC to provide sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 333
900 Prosthesis Design for Bilateral Hip Disarticulation Management

Authors: Mauricio Plaza, Willian Aperador

Abstract:

Hip disarticulation is an amputation through the hip joint capsule, removing the entire lower extremity, with closure of the remaining musculature over the exposed acetabulum. Tumors of the distal and proximal femur are treated by total femur resection; a hip disarticulation is sometimes performed for massive trauma with crush injuries to the lower extremity. This article discusses the design of a system for the rehabilitation of a patient with bilateral hip disarticulations. The prosthetics designed allowed the patient to walk with a natural gait suspended between parallel articulated crutches, with the body weight supported between the crutches. The care of this patient was a challenge due to the bilateral amputations at such a high level and the patient's special mobility needs.

Keywords: amputation, prosthesis, mobility, hemipelvectomy

Procedia PDF Downloads 380
899 Performance Based Logistics and Applications in Turkey

Authors: Ferhat Yilmaz

Abstract:

The defense sector is one of the most important areas where logistics is used extensively. Nations give importance to their defense spending in order to survive in their geography. In parallel with the rising crises around the world, governments increase their defense spending; however, resources are limited while needs are infinite. Therefore, countries try to develop a more effective use of their defense budgets. In order to make logistics more effective and efficient, the performance-based logistics system was developed. This article explains the performance-based logistics system, its employment process, its employment areas, and how it will be used along with other main systems in Turkey.

Keywords: performance, performance based logistics applications, logistical system, Turkey

Procedia PDF Downloads 455
898 A Theoretical Approach of Tesla Pump

Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu

Abstract:

This paper aims to study Tesla pumps for circulating biofluids; it is desired to make a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it doesn't have blades, which results in very small friction; reduced friction forces; low production cost; increased adaptability to different types of fluids; low cavitation (towards zero); low shock due to the lack of blades; infrequent maintenance due to the low cavitation; very small turbulence in the fluid; a low number of changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low powers; fast acceleration; the need for only a low torque; and the absence of blade shocks at sudden starts and stops. All these elements are necessary to be able to make a small pump that could be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The operation should be performed once, without having to change the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, because this is a medical application, low cost is important so that it is easily accessible to the population. The lack of turbulence or vortices caused by a classic rotor is again a key element: when it comes to blood circulation, the flow must be laminar and not turbulent, since turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually the pump is considered to reach an efficiency of 40%, being used at very high powers; however, the author of this type of pump claimed that the maximum efficiency the pump can achieve is 98%. The key elements for achieving this efficiency, or one as close to it as possible, are the number of rotors placed in parallel and the distance between them. The distance between the rotors must be small, which also helps keep the pump as small as possible. The principle of operation of such a rotor is to place several discs in parallel, cut out on the inside; the space between the discs creates the vacuum effect, pulling the liquid through the holes in the rotor and throwing it outwards. Also very important is the viscosity of the liquid: it dictates the distance between the discs needed to achieve a lossless power flow.

Keywords: lubrication, temperature, tesla-pump, viscosity

Procedia PDF Downloads 151
897 Annular Axi-Symmetric Stagnation Flow of Electrically Conducting Fluid on a Moving Cylinder in the Presence of Axial Magnetic Field

Authors: Deva Kanta Phukan

Abstract:

An attempt is made in which an electrically conducting fluid is injected from a fixed outer cylindrical casing onto an inner moving cylindrical rod. A magnetic field is applied parallel to the axis of the cylindrical rod. The basic governing set of partial differential equations for conservation of mass and momentum is reduced to a set of non-linear ordinary differential equations by introducing a similarity transformation; these are integrated numerically. A perturbation solution for the case of a large magnetic parameter is derived for constant Reynolds number.

Keywords: annular axi-symmetric stagnation flow, conducting fluid, magnetic field, moving cylinder

Procedia PDF Downloads 372
896 High-Frequency Half Bridge Inverter Applied to Induction Heating

Authors: Amira Zouaoui, Hamed Belloumi, Ferid Kourda

Abstract:

This paper presents the analysis and design of a DC–AC resonant converter applied to induction heating. The proposed topology, based on the series-parallel half-bridge resonant inverter, is described. It can operate with zero-voltage switching (ZVS). At the resonant frequency, the secondary current is amplified over the heating coil with a small switching angle, which keeps the reactive power low and permits heating with a small current through the resonant inductor and the transformer. The operation and control principle of the proposed high frequency inverter are described and verified through simulation and experimental results.

Keywords: induction heating, inverter, high frequency, resonant

Procedia PDF Downloads 434
895 Signs-Only Compressed Row Storage Format for Exact Diagonalization Study of Quantum Fermionic Models

Authors: Michael Danilov, Sergei Iskakov, Vladimir Mazurenko

Abstract:

The present paper describes a high-performance parallel realization of an exact diagonalization solver for quantum-electron models on a shared memory computing system. The proposed algorithm contains a storage format for efficiently computing the eigenvalues and eigenvectors of a quantum electron Hamiltonian matrix. The results of test calculations carried out for the 15-site Hubbard model demonstrate a reduction in the required memory and good multiprocessor scalability, while maintaining performance of the same order as compressed row storage.
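
For readers unfamiliar with the baseline format the paper compares against, below is a minimal sketch of plain compressed row storage (values, column indices, row pointers) together with the row-parallel sparse matrix-vector product at the heart of iterative eigensolvers; the signs-only compression that is the paper's contribution is not reproduced.

```python
import numpy as np

def to_crs(dense):
    """Convert a dense matrix to (values, col_idx, row_ptr) CRS arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))   # one pointer per finished row
    return np.array(values, float), np.array(col_idx), np.array(row_ptr)

def crs_matvec(values, col_idx, row_ptr, x):
    """y = A @ x from the CRS arrays; rows are independent, which is what a
    shared-memory solver parallelizes across threads."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = values[lo:hi] @ x[col_idx[lo:hi]]
    return y

A = [[4, 0, 1], [0, 0, 2], [3, 0, 0]]
vals, cols, ptrs = to_crs(A)
print(crs_matvec(vals, cols, ptrs, np.array([1.0, 2.0, 3.0])))  # [7. 6. 3.]
```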

Keywords: sparse matrix, compressed format, Hubbard model, Anderson model

Procedia PDF Downloads 363
894 Two-Stage Flowshop Scheduling with Unsystematic Breakdowns

Authors: Fawaz Abdulmalek

Abstract:

The two-stage flowshop assembly scheduling problem is considered in this paper. There is more than one parallel machine at stage one and an assembly machine at stage two. The jobs are processed in the flowshop based on the Johnson rule and two extensions of the Johnson rule. A simulation model of the two-stage flowshop is constructed in which both machines at stage one are subject to random failures. Three simulation experiments are conducted to test the effect of the three job ranking rules on the makespan. The Johnson Largest heuristic outperformed both the Johnson rule and the Johnson Smallest heuristic in two of the performed experiments for all scenarios, each experiment having five scenarios.
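
As an illustration of the baseline ranking rule, below is a minimal sketch of Johnson's rule for the classical two-machine flowshop; the parallel machines at stage one, the assembly stage, and the random failures studied in the paper are not modeled, and the job data are invented.

```python
def johnson_order(jobs):
    """jobs: list of (name, t1, t2) processing times. Return Johnson's sequence."""
    front, back = [], []
    for name, t1, t2 in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if t1 <= t2:
            front.append(name)    # shortest time on machine 1: schedule early
        else:
            back.insert(0, name)  # shortest time on machine 2: schedule late
    return front + back

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 7, 5)]
print(johnson_order(jobs))        # ['C', 'A', 'D', 'B']
```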

Keywords: flowshop scheduling, random failures, Johnson rule, simulation

Procedia PDF Downloads 305
893 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as enough processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically big data sets faces computational and memory related difficulties, which makes such operations carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
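
As a toy illustration of the straggler-coding idea described above (not the paper's PSGPD scheme, and with no privacy or secrecy guarantees), the sketch below applies a (3,2) MDS-style code to row blocks of X so that any two of three worker results suffice to recover W = XY.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 5, (4, 3))        # master's input, split into two row blocks
Y = rng.integers(0, 5, (3, 2))
X1, X2 = X[:2], X[2:]

# Encoded tasks for three workers; worker 2 holds the parity block X1 + X2.
tasks = {0: X1, 1: X2, 2: X1 + X2}
results = {i: block @ Y for i, block in tasks.items()}

# Suppose worker 1 straggles: its share X2 @ Y is decoded from the other two.
top = results[0]                      # X1 @ Y, received directly
bottom = results[2] - results[0]      # (X1 + X2) @ Y - X1 @ Y = X2 @ Y
W = np.vstack([top, bottom])
assert np.array_equal(W, X @ Y)       # recovery threshold of 2 out of 3 workers
```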

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 89