Search results for: parallel and distributed computing
3492 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways
Authors: Anirudh Lahiri
Abstract:
Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.
Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity.
Procedia PDF Downloads 43
3491 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregate, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was devised to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks
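The abstract does not reproduce the empirical formula itself, so as an illustration only, the sketch below fits a hypothetical log-linear performance model to made-up benchmark measurements using ordinary least squares; the model form, variable names, and all figures are assumptions, not the authors' formula.

```python
import numpy as np

# Hypothetical measurements from benchmark runs: each row is
# (CPU benchmark score, memory size in GB, number of physical hosts),
# and y is the measured query completion time in seconds.
X_raw = np.array([
    [1200,  32, 4],
    [1200,  64, 4],
    [1800,  32, 6],
    [1800,  64, 8],
    [2400, 128, 8],
], dtype=float)
y = np.array([310.0, 270.0, 205.0, 150.0, 95.0])

# Log-linear model: log(t) = b0 + b1*log(cpu) + b2*log(mem) + b3*log(hosts)
X = np.column_stack([np.ones(len(X_raw)), np.log(X_raw)])
coeffs, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

def predict_runtime(cpu, mem, hosts):
    """Predict query runtime for a candidate cluster configuration."""
    features = np.array([1.0, np.log(cpu), np.log(mem), np.log(hosts)])
    return float(np.exp(features @ coeffs))

# Compare two configurations purchasable with the same fund.
print(predict_runtime(1800, 64, 8))   # more, weaker hosts
print(predict_runtime(2400, 128, 4))  # fewer, stronger hosts
```

Given such a fitted formula, ranking every configuration purchasable within the fixed fund reduces to evaluating the prediction for each candidate.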
Procedia PDF Downloads 232
3490 Quantifying Parallelism of Vectors Is the Quantification of Distributed N-Party Entanglement
Authors: Shreya Banerjee, Prasanta K. Panigrahi
Abstract:
The three-way distributive entanglement is shown to be related to the parallelism of vectors. Using a measurement-based approach, a set of 2-dimensional vectors is formed, representing the post-measurement states of one of the parties. These vectors originate at the same point and have an angular distance between them. The area spanned by a pair of such vectors is a measure of the entanglement of formation. This leads to a geometrical manifestation of the 3-tangle in 2 dimensions, from an inequality in the area, which generalizes to n qubits to reveal that the n-tangle also has a planar structure. Quantifying the genuine n-party entanglement in every 1|(n-1) bi-partition, it is shown that the genuine n-way entanglement does not manifest in the n-tangle. A new quantity, geometrically similar to the 3-tangle, is then introduced to represent the genuine n-way entanglement. Extending the formalism to 3 qutrits, nonlocality without entanglement can be seen to arise from a condition under which the post-measurement state vectors of a separable state show parallelism. A connection to a nontrivial sum uncertainty relation, analogous to the Maccone and Pati uncertainty relation, is then presented using the decomposition of post-measurement state vectors along the parallel and perpendicular directions of the pre-measurement state vectors. This study opens a novel way to understand multiparty entanglement in qubit and qudit systems.
Keywords: geometry of quantum entanglement, multipartite and distributive entanglement, parallelism of vectors, tangle
Procedia PDF Downloads 153
3489 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent and fast development of the internet, wireless, and telecommunication technologies and of low-power electronic devices has led to an expressive amount of electromagnetic energy available in the environment and to the expansion of smart applications technology. These applications have been used in Internet of Things devices and in 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable way to supply power while avoiding the traditional battery. Radio-frequency-based energy harvesting technology is especially suitable for wirelessly powering sensors by using a rectenna, since it can be completely integrated into the structure hosting the distributed sensors, reducing cost, maintenance, and environmental impact. The rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized by using the Computer Simulation Technology (CST) software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which was increased from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
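As a rough back-of-the-envelope companion to the router experiment, the snippet below estimates free-space received power with the Friis transmission equation; the transmit power and antenna gains are assumed illustrative values, not measurements from the paper.

```python
import math

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Free-space received power via the Friis transmission equation."""
    wavelength = 3e8 / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

# Assumed figures: a 20 dBm (100 mW) Wi-Fi router at 2.45 GHz,
# a 2 dBi router antenna, and a 6 dBi patch antenna on the wall.
for d in (0.5, 1.0, 2.0, 4.0):
    p_rx = friis_received_power_dbm(20, 2, 6, 2.45e9, d)
    print(f"{d:4.1f} m -> {p_rx:6.1f} dBm ({10**(p_rx / 10):.3f} mW)")
```

The rapid fall-off with distance is why the EW's parallel association of several rectennas matters for reaching a usable power level.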
Procedia PDF Downloads 165
3488 Operating System Based Virtualization Models in Cloud Computing
Authors: Dev Ras Pandey, Bharat Mishra, S. K. Tripathi
Abstract:
Cloud computing is poised to transform the structure of businesses and learning by supplying real-time applications and providing immediate help for small to medium-sized businesses. The ability to run a hypervisor inside a virtual machine is an important feature of virtualization, and it is called nested virtualization. In today's growing field of information technology, many virtualization models are available that provide a convenient approach to implementation, but the decision to select a single model is difficult. This paper explains the applications of operating-system-based virtualization in cloud computing and identifies a suitable model given different specifications and users' requirements. In the present paper, the most popular models were selected, with the selection based on container-based and hypervisor-based virtualization. The selected models were compared against a wide range of user requirements, such as the number of CPUs, memory size, nested virtualization support, live migration, and commercial support, and a most suitable virtualization model was identified.
Keywords: virtualization, OS based virtualization, container based virtualization, hypervisor based virtualization
Procedia PDF Downloads 328
3487 Improving Security in Healthcare Applications Using Federated Learning System With Blockchain Technology
Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar
Abstract:
Data security is of the utmost importance in the healthcare area, as sensitive patient information is constantly transmitted and analyzed by many different parties. The use of federated learning, which enables data to be evaluated locally on devices rather than being transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. However, federated learning alone might not be adequate to protect against data breaches and unauthorized access. In this context, the application of blockchain technology could provide the system extra protection. This study proposes a distributed federated learning system built on blockchain technology in order to enhance security in healthcare. This makes it possible for a wide variety of healthcare providers to work together on data analysis without raising concerns about the confidentiality of the data. The technical aspects of the system, including the design and implementation of distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated as part of this process. The proposed technique is a workable solution that addresses concerns about the security of healthcare data while also fostering collaborative research and the exchange of data.
Keywords: data privacy, distributed system, federated learning, machine learning
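The abstract does not detail the aggregation rule, so the sketch below assumes standard federated averaging (FedAvg), where a coordinator combines client model parameters weighted by local dataset size; the blockchain layer, smart contracts, and network code are omitted.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: aggregate client model parameters, weighted by local data size.

    client_weights: one list of numpy arrays per client (one array per layer).
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_sum)
    return aggregated

# Three hypothetical hospitals with different amounts of local data;
# each holds the same two-layer model shape but locally trained values.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [1200, 300, 500]
global_model = federated_average(clients, sizes)
```

In a blockchain-backed variant, each round's aggregated parameters (or their hashes) would be recorded on-chain so that participants can audit updates without trusting a single coordinator.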
Procedia PDF Downloads 131
3486 Image Encryption Using Eureqa to Generate an Automated Mathematical Key
Authors: Halima Adel Halim Shnishah, David Mulvaney
Abstract:
Applying traditional symmetric cryptography algorithms for encryption and decryption provides secret keys with immunity against different attacks. One popular technique for generating automated secret keys is evolutionary computing using the Eureqa API tool, which gained attention in 2013. In this paper, we generate automated secret keys for image encryption and decryption using the Eureqa API (a tool used in the evolutionary computing technique). The Eureqa API models pseudo-random input data obtained from a suitable source to generate secret keys. The validity of the generated secret keys is investigated by performing various statistical tests (histogram, chi-square, correlation of two adjacent pixels, correlation between original and encrypted images, entropy, and key sensitivity). Experimental results obtained from methods including histogram analysis, correlation coefficient, entropy, and key sensitivity show that the proposed image encryption algorithms are secure and reliable, with the potential to be adapted for secure image communication applications.
Keywords: image encryption algorithms, Eureqa, statistical measurements, automated key generation
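Two of the listed statistical tests are easy to reproduce generically. The sketch below computes Shannon entropy and adjacent-pixel correlation on synthetic stand-in images (a horizontal gradient as the "plain" image and uniform noise as the "cipher"); it illustrates the metrics only, not the authors' actual test data.

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy in bits of an 8-bit image (an ideal cipher image
    approaches 8 bits)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def adjacent_pixel_correlation(img):
    """Correlation between horizontally adjacent pixel pairs
    (near 1 for natural images, near 0 after good encryption)."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
plain = np.tile(np.arange(256, dtype=np.uint8), (256, 1))   # highly correlated stand-in
cipher = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # ideal-cipher stand-in
for name, img in (("plain", plain), ("cipher", cipher)):
    print(name, shannon_entropy(img), adjacent_pixel_correlation(img))
```

A sound encryption scheme should drive the adjacent-pixel correlation of the ciphertext toward zero while its entropy stays near the 8-bit maximum.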
Procedia PDF Downloads 482
3485 To Cloudify or Not to Cloudify
Authors: Laila Yasir Al-Harthy, Ali H. Al-Badi
Abstract:
As an emerging business model, cloud computing has been initiated to satisfy the needs of organizations and to position Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both public and private sectors. The purpose of this study is to investigate the possibility of cloud computing services replacing services traditionally provided by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one for a successful cloud provider and another for a successful adopter. This paper is based on multiple research methods, including a set of interviews with cloud service providers and current cloud users in Oman, and data collected using questionnaires from experts in the field and potential users of cloud services. Despite the limited bandwidth capacity and Internet coverage in Oman, which create a challenge in adopting the cloud, it was found that many information technology professionals are encouraged to move to the cloud, while few are resistant to change. The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers into the Omani market make this research extremely valuable, as it aims to provide real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.
Keywords: cloud computing, cloud deployment models, cloud service models, deciding factors
Procedia PDF Downloads 297
3484 Optimal Driving Strategies for a Hybrid Street Type Motorcycle: Modelling and Control
Authors: Jhon Vargas, Gilberto Osorio-Gomez, Tatiana Manrique
Abstract:
This work presents an optimal driving strategy proposal for a 125 cc street-type hybrid electric motorcycle with a parallel configuration. The results presented in this article complement an earlier control proposal for the hybrid motorcycle. To carry out these developments, a representative dynamic model of the motorcycle is used, in which different optimization functionalities for predetermined driving modes are also described. The purpose is to implement an off-line optimal driving strategy that distributes energy to both engines by minimizing an objective torque-requirement function. An optimal dynamic contribution is found from the optimization routine, and the optimal percentage contribution for vehicle cruise speed is implemented in the proposed online PID controller.
Keywords: dynamic model, driving strategies, parallel hybrid motorcycle, PID controller, optimization
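To make the control layer concrete, here is a minimal discrete PID tracking a cruise-speed reference on a toy longitudinal model; the gains, vehicle parameters, and the fixed 60/40 torque split are illustrative assumptions, not the paper's optimized values.

```python
class PID:
    """Discrete PID controller for tracking a cruise-speed reference."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy longitudinal model: m * dv/dt = total_torque / r - drag.
m, r, dt = 140.0, 0.28, 0.01            # mass [kg], wheel radius [m], step [s]
pid = PID(kp=18.0, ki=6.0, kd=0.4, dt=dt)
v, v_ref, split = 0.0, 13.9, 0.6        # 13.9 m/s ~ 50 km/h; assumed 60% ICE / 40% EM
for _ in range(3000):
    torque = max(0.0, pid.step(v_ref, v))               # total torque demand [N*m]
    t_ice, t_em = split * torque, (1 - split) * torque  # assumed static split
    force = (t_ice + t_em) / r - 0.35 * v * v           # drive force minus quadratic drag
    v += (force / m) * dt
print(f"speed after 30 s: {v:.2f} m/s")
```

In the paper's scheme, the split between the combustion engine and the electric motor would come from the off-line optimization rather than a constant, with the PID closing the loop on cruise speed online.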
Procedia PDF Downloads 188
3483 Intelligent Computing with Bayesian Regularization Artificial Neural Networks for a Nonlinear System of COVID-19 Epidemic Model for Future Generation Disease Control
Authors: Tahir Nawaz Cheema, Dumitru Baleanu, Ali Raza
Abstract:
In this research work, we design an intelligent computing approach based on Bayesian regularization artificial neural networks (BRANNs) to solve the mathematical model of an infectious disease (COVID-19). The dynamical transmission is due to the interaction of people, and its mathematical representation is based on a system of nonlinear differential equations. The dataset for the COVID-19 model is generated using the explicit Runge-Kutta method for different countries of the world, such as India, Pakistan, Italy, and many more. The generated dataset is used for the training, testing, and validation processes in Bayesian regularization backpropagation to capture the numerical behavior of the COVID-19 model dynamics. The performance and effectiveness of the designed BRANN methodology are checked through mean squared error, error histograms, numerical solutions, absolute error, and regression analysis.
Keywords: mathematical models, Bayesian regularization, Bayesian regularization backpropagation networks, regression analysis, numerical computing
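The abstract names the explicit Runge-Kutta method for dataset generation; below is a minimal sketch using the classic fourth-order scheme on a standard SIR system as a stand-in for the paper's (unspecified) COVID-19 model, with assumed parameter values.

```python
import numpy as np

def sir_rhs(state, beta, gamma):
    """Right-hand side of the SIR model (a stand-in for the paper's system)."""
    s, i, r = state
    return np.array([-beta * s * i, beta * s * i - gamma * i, gamma * i])

def rk4_solve(state, beta, gamma, dt, steps):
    """Classic explicit fourth-order Runge-Kutta integration."""
    traj = [state]
    for _ in range(steps):
        k1 = sir_rhs(state, beta, gamma)
        k2 = sir_rhs(state + 0.5 * dt * k1, beta, gamma)
        k3 = sir_rhs(state + 0.5 * dt * k2, beta, gamma)
        k4 = sir_rhs(state + dt * k3, beta, gamma)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

# Assumed parameters; the resulting dataset (t -> S, I, R) would then be
# split into training/testing/validation sets for the neural network.
data = rk4_solve(np.array([0.99, 0.01, 0.0]), beta=0.3, gamma=0.1, dt=1.0, steps=160)
print(data.shape, data[-1])
```

The trained network then approximates the solution trajectories, with the RK4 output serving as the reference against which mean squared error and absolute error are measured.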
Procedia PDF Downloads 146
3482 Design and Analysis of Metamaterial Based Vertical Cavity Surface Emitting Laser
Authors: Ishraq M. Anjum
Abstract:
Distributed Bragg reflectors are used in vertical-cavity surface-emitting lasers (VCSELs) in order to achieve very high reflectivity. Using a metamaterial in place of the distributed Bragg reflector can reduce the device size significantly. A silicon-based near-perfect metamaterial reflector is designed to be used in place of the distributed Bragg reflectors in VCSELs. Mie resonance in dielectric microparticles is exploited in order to design the metamaterial. A reflectivity of 98.31% is achieved using the finite-difference time-domain method. An 808 nm double intra-cavity contacted VCSEL structure with a 1.5λ cavity is proposed using this metamaterial near-perfect reflector. The active region is designed to be composed of seven GaAs/AlGaAs quantum wells. Upon numerical investigation of the designed VCSEL structure, the threshold current is found to be 2.96 mA at an aperture of 40 square micrometers, and the maximum output power is found to be 71 mW at a current of 141 mA. Miniaturization of conventional VCSELs is possible using this design.
Keywords: GaAs, laser, metamaterial, VCSEL, vertical cavity surface emitting laser
Procedia PDF Downloads 182
3481 The Challenges of Scaling Agile to Large-Scale Distributed Development: An Overview of the Agile Factory Model
Authors: Bernard Doherty, Andrew Jelfs, Aveek Dasgupta, Patrick Holden
Abstract:
Many companies have moved to agile and hybrid agile methodologies, where portions of the Software Design Life-cycle (SDLC) and Software Test Life-cycle (STLC) can be time-boxed in order to enhance delivery speed and quality and to increase flexibility to changes in software requirements. Despite the widespread proliferation of agile practices, implementation often fails due to lack of adequate project management support, decreased motivation, or fear of increased interaction. Consequently, few organizations adopt agile processes effectively, and tailoring is often required to integrate the agile methodology in large-scale environments. This paper provides an overview of the challenges in implementing an innovative large-scale tailored realization of the agile methodology termed the Agile Factory Model (AFM), with the aim of comparing and contrasting issues of specific importance to organizations undertaking large-scale agile development. The conclusions demonstrate that agile practices can be effectively translated to a globally distributed development environment.
Keywords: agile, agile factory model, globally distributed development, large-scale agile
Procedia PDF Downloads 294
3480 Direct Translation vs. Pivot Language Translation for Persian-Spanish Low-Resourced Statistical Machine Translation System
Authors: Benyamin Ahmadnia, Javier Serrano
Abstract:
In this paper we compare two different approaches for translating from Persian to Spanish, a language pair with a scarce parallel corpus. The first approach involves direct transfer using a statistical machine translation system, which is available for this language pair. The second approach involves translation through English as a pivot language, for which more translation resources and more advanced translation systems are available. The results show that using English as a pivot language outperforms direct translation from Persian to Spanish. Our best result is the pivot system, which scores 1.12 BLEU points higher than direct translation.
Keywords: statistical machine translation, direct translation approach, pivot language translation approach, parallel corpus
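Since the comparison is scored in BLEU points, here is a compact, self-contained sentence-level BLEU implementation (uniform 4-gram weights with brevity penalty); the example sentences are invented, and production evaluations normally use corpus-level BLEU with smoothing.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty, scaled to 0-100."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        total = max(1, sum(hyp_ngrams.values()))
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(1, len(hyp)))
    return 100 * bp * math.exp(sum(log_precisions) / max_n)

reference = "the weather is nice in madrid today"
print(bleu("the weather in madrid is nice today", reference))  # hypothetical pivot output
print(bleu("weather nice madrid today", reference))            # hypothetical direct output
```

The reported 1.12-point gap would come from averaging such scores over a held-out test corpus for the pivot and direct systems.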
Procedia PDF Downloads 487
3479 Computing the Similarity and the Diversity in the Species Based on Cronobacter Genome
Authors: E. Al Daoud
Abstract:
The purpose of computing the similarity and the diversity across species is to trace the process of evolution, to find the relationships between species, and to discover the unique, special, common, and universal proteins. The proteins of the whole genomes of 40 species are compared with the Cronobacter genome, which is used as the reference genome. More than 3 billion pairwise alignments are performed using blastp. Several findings are introduced in this study; for example, we found 172 proteins in the Cronobacter genome that have insignificant hits in other species, 116 significant proteins in all tested species with very high score values, and 129 proteins common in plants but with insignificant hits in mammals, birds, fishes, and insects.
Keywords: genome, species, blastp, conserved genes, Cronobacter
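A single pairwise proteome comparison of the kind scaled up here can be scripted around the NCBI BLAST+ command-line tools; in the sketch below the FASTA file names are placeholders, and blastp must be installed and on PATH.

```python
import subprocess

def blastp_hits(query_faa, subject_faa, evalue=1e-5):
    """Run NCBI BLAST+ blastp between two protein FASTA files and parse
    the tabular (outfmt 6) output into a list of hit tuples."""
    cmd = [
        "blastp",
        "-query", query_faa,
        "-subject", subject_faa,
        "-evalue", str(evalue),
        "-outfmt", "6 qseqid sseqid pident bitscore evalue",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    hits = []
    for line in out.strip().splitlines():
        q, s, pident, bitscore, ev = line.split("\t")
        hits.append((q, s, float(pident), float(bitscore), float(ev)))
    return hits

# Proteins of the reference genome with no significant hit in another
# species' proteome would be those absent from hit_queries.
hits = blastp_hits("cronobacter.faa", "other_species.faa")
hit_queries = {h[0] for h in hits}
```

Repeating this over all 40 species and intersecting the per-species hit sets is what yields categories such as "significant in all tested species" or "common in plants only".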
Procedia PDF Downloads 496
3478 Memory and Narratives Rereading before and after One Week
Authors: Abigail M. Csik, Gabriel A. Radvansky
Abstract:
As people read through event-based narratives, they construct an event model that captures information about the characters, goals, location, time, and causality. For many reasons, memory for such narratives is represented at different levels, namely the surface form, textbase, and event model levels. Rereading has been shown to decrease surface form memory while, at the same time, increasing textbase and event model memories. More generally, distributed practice has consistently shown memory benefits over massed practice for different types of materials, including texts. However, little research has investigated distributed practice of narratives at different inter-study intervals and the effects on these three levels of memory. Recent work in our lab has indicated that there may be dramatic changes in patterns of forgetting around one week, which may affect the three levels of memory. The present experiment aimed to determine the effects of rereading on the three levels of memory as a factor of whether the texts were reread before versus after one week. Participants (N = 42) read a set of stories, reread them either before or after one week (with an inter-study interval of three days, seven days, or fourteen days), and then took a recognition test, from which the three levels of representation were derived. Signal detection results from this study reveal differential patterns at the three levels as a factor of whether the narratives were reread prior to one week or after one week. In particular, an ANOVA revealed that surface form memory was lower (p = .08) while textbase (p = .02) and event model memory (p = .04) were greater when narratives were reread 14 days later compared to when narratives were reread 3 days later. These results have implications for what type of memory benefits from distributed practice at various inter-study intervals.
Keywords: memory, event cognition, distributed practice, consolidation
Procedia PDF Downloads 225
3477 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the Mars surface has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and reliable surface model of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. In this project, we employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. A set of tie points was digitized from both datasets. These points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs didn't improve the results significantly. Using the 3D-to-2D transformation parameters provided two to three meters accuracy. The best results were reported using the DLT transformation model; however, increasing the number of GCPs didn't have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards. However, well-distributed sets of GCPs are key to providing such accuracy. The model is simple to apply and doesn't need substantial computations.
Keywords: Mars, photogrammetry, MOLA, HiRISE
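To illustrate the best-performing model, the sketch below estimates the 11 DLT parameters from ground control points by linear least squares and then projects object-space coordinates into image space; the coordinates would come from the digitized MOLA/HiRISE tie points, and the function names are our own.

```python
import numpy as np

def estimate_dlt(gcp_xyz, gcp_uv):
    """Least-squares estimate of the 11 DLT parameters from >= 6 GCPs.

    u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1)
    v = (L5*X + L6*Y + L7*Z + L8) / (L9*X + L10*Y + L11*Z + 1)
    """
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(gcp_xyz, gcp_uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs += [u, v]
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L

def apply_dlt(L, xyz):
    """Project MOLA-derived 3D coordinates into HiRISE image space."""
    X, Y, Z = xyz
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den)

# With >= 6 GCPs: L = estimate_dlt(gcps_xyz, gcps_uv); the check-point RMSE
# is then the root mean square of (apply_dlt(L, p) - observed_uv) residuals.
```

The two linearized rows per GCP are why at least six points are needed: eleven unknowns require eleven or more equations.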
Procedia PDF Downloads 57
3476 Software Transactional Memory in a Dynamic Programming Language at Virtual Machine Level
Authors: Szu-Kai Hsu, Po-Ching Lin
Abstract:
As more and more multi-core processors emerge, the traditional sequential programming paradigm no longer suffices. Yet only a few modern dynamic programming languages can leverage this advantage. Ruby, for example, despite its wide adoption, only includes threads as a simple parallel primitive. The global virtual machine lock of the official Ruby runtime makes it impossible to exploit full parallelism. Though various alternative Ruby implementations do eliminate the global virtual machine lock, they only provide developers with a dated locking mechanism for data synchronization. However, the traditional locking mechanism is error-prone by nature. Software transactional memory is one of the promising alternatives. This paper introduces a new virtual machine, GobiesVM, that provides a native software-transactional-memory-based solution for dynamic programming languages to exploit parallelism. We also propose a simplified variation of the Transactional Locking II algorithm. The empirical results of our experiments show that support for STM at the virtual machine level enables developers to write straightforward code without compromising parallelism or sacrificing thread safety. Existing source code requires only minimal or even no modification, which allows developers to easily switch their legacy codebase to a parallel environment. The performance evaluations of GobiesVM also indicate that the difference between sequential and parallel execution is significant.
Keywords: global interpreter lock, ruby, software transactional memory, virtual machine
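As a rough illustration of the flavor of Transactional Locking II (which GobiesVM simplifies), here is a deliberately simplified word-based STM in Python: a global version clock, per-variable versioned locks, buffered writes, and commit-time read-set validation. It omits many details of the real algorithm (e.g., lock-ownership checks during validation and contention management) and is not the paper's implementation.

```python
import threading

_clock = 0
_clock_lock = threading.Lock()

def _advance_clock():
    """Atomically increment and return the global version clock."""
    global _clock
    with _clock_lock:
        _clock += 1
        return _clock

class TVar:
    """A transactional variable: a value guarded by a version stamp and a lock."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

class _Retry(Exception):
    pass

def atomically(tx_body):
    """Run tx_body(read, write) as a transaction, retrying on conflict."""
    while True:
        rv = _clock                          # snapshot: read-version
        read_set, write_set = [], {}

        def read(tvar):
            if tvar in write_set:            # read-your-own-writes
                return write_set[tvar]
            value, version = tvar.value, tvar.version
            if version > rv or tvar.lock.locked():
                raise _Retry()               # inconsistent snapshot
            read_set.append(tvar)
            return value

        def write(tvar, value):
            write_set[tvar] = value

        try:
            result = tx_body(read, write)
            acquired = []
            try:
                for tvar in write_set:       # lock the write set
                    if not tvar.lock.acquire(timeout=0.01):
                        raise _Retry()
                    acquired.append(tvar)
                wv = _advance_clock()        # write-version
                for tvar in read_set:        # validate the read set
                    if tvar.version > rv:
                        raise _Retry()
                for tvar, value in write_set.items():
                    tvar.value, tvar.version = value, wv
                return result
            finally:
                for tvar in acquired:
                    tvar.lock.release()
        except _Retry:
            continue                         # conflict: re-execute transaction

# Usage: an atomic transfer that no other thread can observe half-done.
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 30), write(b, read(b) + 30)))
print(a.value, b.value)   # 70 30
```

The appeal for a dynamic-language VM is visible in the usage line: the transaction body is ordinary code, with no explicit locks for the developer to order or forget.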
Procedia PDF Downloads 285
3475 Cloud Monitoring and Performance Optimization Ensuring High Availability
Authors: Inayat Ur Rehman, Georgia Sakellari
Abstract:
Cloud computing has evolved into a vital technology for businesses, offering scalability, flexibility, and cost-effectiveness. However, maintaining high availability and optimal performance in the cloud is crucial for reliable services. This paper explores the significance of cloud monitoring and performance optimization in sustaining the high availability of cloud-based systems. It discusses diverse monitoring tools, techniques, and best practices for continually assessing the health and performance of cloud resources. The paper also delves into performance optimization strategies, including resource allocation, load balancing, and auto-scaling, to ensure efficient resource utilization and responsiveness. Addressing potential challenges in cloud monitoring and optimization, the paper offers insights into data security and privacy considerations. Through this thorough analysis, the paper aims to underscore the importance of cloud monitoring and performance optimization for ensuring a seamless and highly available cloud computing environment.
Keywords: cloud computing, cloud monitoring, performance optimization, high availability, scalability, resource allocation, load balancing, auto-scaling, data security, data privacy
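As a toy illustration of the auto-scaling strategies discussed, the sketch below makes threshold-based scale-out/scale-in decisions from monitored CPU utilization; the thresholds and instance bounds are arbitrary, and managed platforms expose this as configurable policies (e.g., target tracking) rather than hand-written loops.

```python
def autoscale(current_instances, cpu_utilization,
              scale_out_at=0.75, scale_in_at=0.30,
              min_instances=2, max_instances=20):
    """Threshold-based auto-scaling decision (a generic sketch only)."""
    if cpu_utilization > scale_out_at:
        return min(max_instances, current_instances + 1)
    if cpu_utilization < scale_in_at:
        return max(min_instances, current_instances - 1)
    return current_instances

# Simulated utilization samples arriving from a monitoring feed.
instances = 2
for util in [0.42, 0.81, 0.88, 0.64, 0.22, 0.18]:
    instances = autoscale(instances, util)
    print(f"util={util:.2f} -> {instances} instances")
```

The keep-alive floor (min_instances) is what preserves high availability during scale-in, while the ceiling caps cost during demand spikes.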
Procedia PDF Downloads 59
3474 Key Concepts of 5th Generation Mobile Technology
Authors: Magri Hicham, Noreddine Abghour, Mohamed Ouzzif
Abstract:
The 5th generation of mobile networks is a term used in various research papers and projects to identify the next major phase of mobile telecommunications standards. 5G wireless networks will support higher peak data rates and lower latency and will provide the best connections with QoS guarantees. In this article, we discuss various promising technologies for 5G wireless communication systems, such as IPv6 support, the World Wide Wireless Web (WWWW), Dynamic Ad-hoc Wireless Networks (DAWN), Beam Division Multiple Access (BDMA), cloud computing, and cognitive radio technology.
Keywords: WWWW, BDMA, DAWN, 5G, 4G, IPv6, cloud computing
Procedia PDF Downloads 514
3473 An Efficient Architecture for Dynamic Customization and Provisioning of Virtual Appliance in Cloud Environment
Authors: Rajendar Kandan, Mohammad Zakaria Alli, Hong Ong
Abstract:
Cloud computing is a business model that provides easier management of computing resources. Cloud users can request virtual machines and install and configure additional software if needed. However, a user can also request a virtual appliance, which provides a better solution for deploying an application in much less time, as it is a ready-built image of an operating system with the necessary software installed and configured. Large numbers of virtual appliances are available in different image formats. A user can download available appliances from a public marketplace and start using them. However, the information published about virtual appliances differs from provider to provider, leading to difficulty in choosing the required virtual appliance, as each is composed of a specific OS with standard software versions. Moreover, even if the user chooses an appliance from a given provider, the user doesn't have any flexibility to choose their own set of software with the required OS and application. In this paper, we propose a reference architecture for dynamically customizing virtual appliances and provisioning them in an easier manner. We also add our experience in integrating the proposed architecture with a public marketplace and Mi-Cloud, a cloud management software.
Keywords: cloud computing, marketplace, virtualization, virtual appliance
Procedia PDF Downloads 293
3472 Computing Continuous Skyline Queries without Discriminating between Static and Dynamic Attributes
Authors: Ibrahim Gomaa, Hoda M. O. Mokhtar
Abstract:
Although most existing skyline query algorithms have focused on querying static points in static databases, with the expanding number of sensors, wireless communications, and mobile applications, the demand for continuous skyline queries has increased. Unlike traditional skyline queries, which only consider static attributes, continuous skyline queries include dynamic attributes as well as static ones. However, as skyline query computation is based on checking the domination of skyline points over all dimensions, considering both the static and dynamic attributes without separation is required. In this paper, we present an efficient algorithm for computing continuous skyline queries without discriminating between static and dynamic attributes. In brief, our algorithm proceeds as follows: First, it excludes the points that will not be in the initial skyline result; this pruning phase reduces the required number of comparisons. Second, the association between the spatial positions of data points is examined; this phase gives an idea of where changes in the result might occur and consequently enables us to efficiently update the skyline result (continuous update) rather than computing the skyline from scratch. Finally, an experimental evaluation is provided, which demonstrates the accuracy, performance, and efficiency of our algorithm over other existing approaches.
Keywords: continuous query processing, dynamic database, moving object, skyline queries
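For readers unfamiliar with skyline computation, this minimal sketch shows the dominance test and a naive O(n²) skyline over tuples that mix static and dynamic attributes without separating them; the paper's contribution (pruning plus incremental updates) is built on top of exactly this dominance relation.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (assuming smaller values are better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline over combined static + dynamic attributes;
    the baseline that pruning and continuous updates improve upon."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Each tuple mixes a static attribute (e.g., price) with a dynamic one
# (e.g., current distance to a moving query point).
points = [(3, 7.2), (5, 1.0), (4, 4.1), (9, 0.8), (3, 8.0)]
print(skyline(points))   # (3, 8.0) is dominated by (3, 7.2)
```

A continuous query re-evaluates this result as the dynamic attribute changes; the paper's algorithm avoids recomputing from scratch by tracking where dominance relationships can flip.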
Procedia PDF Downloads 210
3471 Impact of Coccidia on Mortality and Weight Growth in Japanese Quail Coturnix japonica (Aves, Phasianidae) in Algeria
Authors: Amina Smai, Fairouz Haddadj, Habiba Saadi-Idouhar, Meriem Aissi, Safia Zenia, Salaheddine Doumandji
Abstract:
Coccidiosis is a very common intestinal parasitic disease caused by a worldwide-distributed protozoan of the genus Eimeria. This disease is very common in young birds beyond the second week of life, especially in ground-based rearing. The study was carried out in a hunting center in Zeralda, located in the northeast of Algiers. The objective of our work is to study the evolution of coccidiosis in quails from 1 to 35 days old by collecting their droppings daily. These were analyzed in the laboratory using the flotation method and the McMaster method to count coccidia. Weight changes were recorded, as well as mortality, in parallel with certain zootechnical parameters such as density. The species of coccidia recovered is Eimeria coturnicis. The results showed an average mortality rate of 13.33% due to the presence of coccidia, with a significant regression (p=0.031). The weight of the quails increases with the age of the animal, with rapid growth from the 3rd week onwards. Indeed, the statistical analysis reveals that the evolution of the parasite count affected neither the evolution of the weight (p=0.70) nor the average daily gain (R=0.52).
Keywords: coccidiosis, Coturnix japonica, daily average gain, weight
Procedia PDF Downloads 182
3470 Brain Computer Interface Implementation for Affective Computing Sensing: Classifiers Comparison
Authors: Ramón Aparicio-García, Gustavo Juárez Gracia, Jesús Álvarez Cedillo
Abstract:
A research line of computer science involves the study of Human-Computer Interaction (HCI), which seeks to recognize and interpret user intent through the storage and subsequent analysis of the electrical signals of the brain, in order to use them in the control of electronic devices. On the other hand, affective computing research applies human emotions to the HCI process, helping to reduce user frustration. This paper shows the results obtained during the hardware and software development of a Brain-Computer Interface (BCI) capable of recognizing human emotions through the association of brain electrical activity patterns. The hardware involves the sensing stage and analog-to-digital conversion. The interface software involves algorithms for pre-processing of the signal in time- and frequency-domain analysis and the classification of patterns associated with electrical brain activity. The methods used for the analysis and classification of the signal have been tested separately using a publicly accessible database, together with a comparison among classifiers in order to identify the best-performing one.
Keywords: affective computing, interface, brain, intelligent interaction
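A generic version of the classifier comparison can be set up with scikit-learn; the band-power features below are synthetic stand-ins, and the classifier choices are ours, since the abstract does not list the ones tested.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for EEG band-power features (e.g., alpha/beta/theta/
# gamma per channel); a real pipeline extracts these from filtered epochs.
rng = np.random.default_rng(42)
n_trials, n_features = 200, 16
X = rng.normal(size=(n_trials, n_features))
y = (X[:, :4].mean(axis=1) > 0).astype(int)   # two emotion classes
X += rng.normal(scale=0.5, size=X.shape)      # add measurement noise

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:10s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Cross-validated accuracy of this kind is the usual basis for declaring one classifier the best performer on a given emotion-recognition dataset.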
Procedia PDF Downloads 388
3469 Novel Coprocessor for DNA Sequence Alignment in Resequencing Applications
Authors: Atef Ibrahim, Hamed Elsimary, Abdullah Aljumah, Fayez Gebali
Abstract:
This paper presents a novel semi-systolic array architecture for an optimized parallel sequence alignment algorithm. This architecture has the advantage that it can be modified to be reused for multiple-pass processing in order to increase the number of processing elements that can be packed into a single FPGA and to increase the number of sequences that can be aligned in parallel in a single FPGA. This resolves the potential problem of many FPGA resources being left unused in the previously published conventional hardware design for large values of short-read length. FPGA implementation results show that, for large values of short-read length (M>128), the proposed design has a slightly higher speed-up and FPGA utilization than the conventional one.
Keywords: bioinformatics, genome sequence alignment, re-sequencing applications, systolic array
Procedia PDF Downloads 531
3468 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
We present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method have a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, COS method, Fourier method
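To show the core mechanism, here is a minimal COS-method sketch that recovers a density from its characteristic function, verified against a standard normal; in the paper the same machinery is applied to the (conditional) portfolio-loss distribution in the factor-copula model, which is not reproduced here.

```python
import numpy as np

def cos_density(char_fn, a, b, N, x):
    """Recover a probability density on [a, b] from its characteristic
    function via the COS method (Fang & Oosterlee, 2008)."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = (2.0 / (b - a)) * np.real(char_fn(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                        # the first cosine term is halved
    return np.sum(F[:, None] * np.cos(np.outer(u, x - a)), axis=0)

# Sanity check on a standard normal: phi(u) = exp(-u^2 / 2).
a, b = -10.0, 10.0                     # truncation range of the support
x = np.linspace(-3, 3, 7)
approx = cos_density(lambda u: np.exp(-0.5 * u**2), a, b, N=128, x=x)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))  # near machine precision here
```

The exponential error convergence visible in this toy check is the property the paper proves and exploits for portfolio-loss distributions, where the characteristic function is available semi-analytically conditional on the common factors.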
Procedia PDF Downloads 166
3467 The Application of Distributed Optical Strain Sensing to Measure Rock Bolt Deformation Subject to Bedding Shear
Authors: Thomas P. Roper, Brad Forbes, Jurij Karlovšek
Abstract:
Shear displacement along bedding defects is a well-recognised behaviour when tunnelling and mining in stratified rock. This deformation can affect the durability and integrity of installed rock bolts. In-situ monitoring of rock bolt deformation under bedding shear cannot be accurately derived from traditional strain gauge bolts, as the sensors are too large and spaced too far apart to accurately assess concentrated displacement along discrete defects. A possible solution is the use of fibre optic technologies developed for precision monitoring. Distributed Optical Sensor (DOS) embedded rock bolts were installed in a tunnel project with the aim of measuring the bolt deformation profile under significant shear displacements. This technology successfully measured the 3D strain distribution along the bolts when subjected to bedding shear and resolved the axial and lateral strain constituents in order to determine the deformational geometry of the bolts. The results compare well with the current visual method for monitoring shear displacement using borescope holes, indicating that the method is suitable.
Keywords: distributed optical strain sensing, rock bolt, bedding shear, sandstone tunnel
Procedia PDF Downloads 161
3466 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units
Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani
Abstract:
There are many computationally demanding applications in science and engineering that need efficient algorithms implemented on high-performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computations on the GPU, although the Eulerian formulation shows significant speed-up too.
Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation
Procedia PDF Downloads 416
3465 High Performance Field Programmable Gate Array-Based Stochastic Low-Density Parity-Check Decoder Design for IEEE 802.3an Standard
Authors: Ghania Zerari, Abderrezak Guessoum, Rachid Beguenane
Abstract:
This paper introduces a high-performance architecture for a fully parallel stochastic Low-Density Parity-Check (LDPC) decoder implemented on a field-programmable gate array (FPGA). The new approach is designed to decrease the decoding latency and to reduce FPGA logic utilisation. To accomplish the targeted logic utilisation reduction, the routing of the proposed sub-variable node (VN) internal memory is designed to utilise one slice of distributed RAM. Furthermore, VN initialization, using the channel input probability, is performed to enhance decoder convergence without extra resources and without integrating output saturated counters. The Xilinx FPGA implementation of an IEEE 802.3an standard LDPC code shows that the proposed decoding approach attains high performance along with a reduction in FPGA logic utilisation.
Keywords: low-density parity-check (LDPC) decoder, stochastic decoding, field programmable gate array (FPGA), IEEE 802.3an standard
Procedia PDF Downloads 297
3464 Robot Spatial Reasoning via 3D Models
Authors: John Allard, Alex Rich, Iris Aguilar, Zachary Dodds
Abstract:
With this paper we present several experiences deploying novel, low-cost resources for computing with 3D spatial models. Certainly, computing with 3D models undergirds some of our field's most important contributions to the human experience; most often, those models are contrived artifacts. This work extends that tradition by focusing on novel resources that deliver uncontrived models of a system's current surroundings. Atop this new capability, we present several projects investigating the student accessibility of computational tools for reasoning about the 3D space around us. We conclude that, with current scaffolding, real-world 3D models are now an accessible and viable foundation for creative computational work.
Keywords: 3D vision, Matterport model, real-world 3D models, mathematical and computational methods
Procedia PDF Downloads 536
3463 Modeling of Virtual Power Plant
Authors: Muhammad Fanseem E. M., Rama Satya Satish Kumar, Indrajeet Bhausaheb Bhavar, Deepak M.
Abstract:
Keeping the right balance of electricity between the supply and demand sides of the grid is one of the most important objectives of electrical grid operation. Power generation and demand forecasting are the core of power management and generation scheduling. Conventional power systems were built around large, centralized generating units, and a certain level of balance was possible since generation kept up with power demand. However, integrating renewable energy sources into power networks has proven to be a difficult challenge due to their intermittent nature. The power imbalance caused by rising demand and peak loads negatively affects power quality and dependability. Demand-side management and demand response were among the solutions, keeping generation the same but altering, rescheduling, or completely shedding the load or demand. However, shedding or rescheduling the load is not an efficient approach. Here lies the significance of virtual power plants. A virtual power plant organically integrates distributed generation, dispatchable load, and distributed energy storage by using complementary control approaches and communication technologies. This eventually increases the utilization rate and financial advantages of distributed energy resources. Most of the literature on virtual power plant models ignores technical limitations, with modeling done from a financial or commercial viewpoint. Therefore, this paper aims to address the modeling intricacies of VPPs and their technical limitations, shedding light on a holistic understanding of this innovative power management approach.
Keywords: cost optimization, distributed energy resources, dynamic modeling, model quality tests, power system modeling
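As a toy illustration of how a VPP might coordinate its resources, the sketch below performs a greedy merit-order dispatch of hypothetical distributed energy resources against a demand target; real VPP scheduling adds network, ramping, and storage constraints, all of which are ignored here.

```python
def dispatch(resources, demand_kw):
    """Greedy merit-order dispatch of a VPP's distributed energy resources:
    the cheapest available capacity is committed first until demand is met."""
    plan, remaining = {}, demand_kw
    for name, capacity_kw, cost_per_kwh in sorted(resources, key=lambda r: r[2]):
        committed = min(capacity_kw, remaining)
        if committed > 0:
            plan[name] = committed
            remaining -= committed
    return plan, remaining   # remaining > 0 would mean a supply shortfall

# Hypothetical resource pool: (name, available capacity kW, marginal cost).
pool = [
    ("solar_pv",   120, 0.00),
    ("battery",     80, 0.05),
    ("chp_unit",   200, 0.11),
    ("diesel_gen", 150, 0.27),
]
plan, shortfall = dispatch(pool, demand_kw=350)
print(plan, shortfall)
```

The technical limitations the paper emphasizes enter exactly where this sketch is silent: capacities and costs are not static, and the feasibility of each commitment depends on the underlying network and device dynamics.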
Procedia PDF Downloads 62