Search results for: approximate computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1302

672 Case-Based Reasoning Approach for Process Planning of Internal Thread Cold Extrusion

Authors: D. Zhang, H. Y. Du, G. W. Li, J. Zeng, D. W. Zuo, Y. P. You

Abstract:

To address the difficult issue of process selection, case-based reasoning technology is applied to a computer-aided process planning system for cold form tapping of internal threads, on the basis of similarity between processes. A model is established based on an analysis of process planning. The case representation and the similarity computing method are given. A confidence degree is used to evaluate the retrieved cases, and a rule-based reuse strategy is presented. The scheme is illustrated and verified by a practical application, which shows that the design results obtained with the proposed method are effective.
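
As a rough illustration of the retrieval step (not the paper's actual model), the sketch below scores stored cases against a query by a weighted nearest-neighbour similarity; the attributes, weights, and value ranges are hypothetical.

```python
# Minimal sketch of case retrieval by weighted similarity (hypothetical attributes/weights).
def attribute_similarity(a, b, value_range):
    """Similarity of two numeric attribute values, normalised to [0, 1]."""
    return 1.0 - abs(a - b) / value_range

def case_similarity(query, case, weights, ranges):
    """Weighted average of per-attribute similarities."""
    total_w = sum(weights.values())
    return sum(w * attribute_similarity(query[k], case[k], ranges[k])
               for k, w in weights.items()) / total_w

# Hypothetical thread-extrusion cases described by diameter, pitch and material hardness.
ranges  = {"diameter": 20.0, "pitch": 3.0, "hardness": 200.0}
weights = {"diameter": 0.5, "pitch": 0.3, "hardness": 0.2}
case_base = [
    {"diameter": 8.0,  "pitch": 1.25, "hardness": 180.0, "plan": "plan A"},
    {"diameter": 12.0, "pitch": 1.75, "hardness": 220.0, "plan": "plan B"},
]
query = {"diameter": 10.0, "pitch": 1.5, "hardness": 200.0}

best = max(case_base, key=lambda c: case_similarity(query, c, weights, ranges))
print(best["plan"], case_similarity(query, best, weights, ranges))
```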

Keywords: case-based reasoning, internal thread, cold extrusion, process planning

Procedia PDF Downloads 495
671 A Machine Learning Approach for the Leakage Classification in the Hydraulic Final Test

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

The widespread use of machine learning applications in production is significantly accelerated by improved computing power and increasing data availability. Predictive quality enables the assurance of product quality by using machine learning models as a basis for decisions on test results. The use of real Bosch production data based on geometric gauge blocks from machining, mating data from assembly and hydraulic measurement data from final testing of directional valves is a promising approach to classifying the quality characteristics of workpieces.
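
As a hedged illustration of the general approach, the following sketch trains a supervised classifier on synthetic "machining / assembly / final-test" features; the feature set and the random-forest choice are assumptions for illustration, not the models or data used in the Bosch study.

```python
# Illustrative supervised classification of leakage from process features
# (synthetic data; feature set and model choice are assumptions, not the paper's).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(10.0, 0.05, n),   # gauge-block geometry measurement
    rng.normal(0.2, 0.02, n),    # assembly mating-force proxy
    rng.normal(1.0, 0.1, n),     # hydraulic test pressure drop
])
y = (X[:, 2] + rng.normal(0, 0.05, n) > 1.1).astype(int)  # 1 = leaky

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```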

Keywords: machine learning, classification, predictive quality, hydraulics, supervised learning

Procedia PDF Downloads 196
670 Adobe Attenuation Coefficient Determination and Its Comparison with Other Shielding Materials for Energies Found in Common X-Ray Procedures

Authors: Camarena Rodriguez C. S., Portocarrero Bonifaz A., Palma Esparza R., Romero Carlos N. A.

Abstract:

Adobe is a construction material that fulfills the same function as a conventional brick. Widely used since ancient times, it is present in an appreciable percentage of buildings in Latin America. Adobe is a mixture of clay and sand. The interest in studying the properties of this material arises from its presence in the infrastructure of hospital radiological services located in places with low economic resources, where it serves to attenuate radiation. Materials such as lead and concrete are the most used for shielding and are widely studied in the literature. The present study determines the mass attenuation coefficient of adobe and estimates the minimum thicknesses required for the primary and secondary barriers shielding radiological facilities where conventional and dental X-rays are performed. In the experimental procedure, an X-ray source emitted direct radiation towards different thicknesses of an adobe barrier, and a detector was placed on the other side. For this purpose, a UNFORS Xi solid-state detector was used, which collected information on the difference in radiation intensity. The exposure parameters started at 45 kV, and the tube voltage was then varied in increments of 5 kV, reaching a maximum of 125 kV. The X-ray tube was positioned at a distance of 0.5 m from the surface of the adobe bricks, and the radiation beam was collimated to an area of 0.15 m x 0.15 m. Finally, mathematical methods were applied to determine the mass attenuation coefficient for different energy ranges. In conclusion, the mass attenuation coefficient of adobe was determined, and the approximate thicknesses of the most common adobe barriers in hospital buildings were calculated for later application in radiological protection.
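
The attenuation coefficient follows from the Beer-Lambert law I(x) = I₀·exp(−μx); a minimal sketch of the fit, with placeholder thickness/intensity values and an assumed adobe density rather than the measured data, might look like this:

```python
# Sketch: extract linear and mass attenuation coefficients from I(x) = I0*exp(-mu*x)
# (thickness/intensity values and the adobe density are placeholders, not measured data).
import numpy as np

thickness_cm = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # adobe barrier thicknesses
intensity    = np.array([100.0, 61.0, 37.0, 22.5, 13.7])    # detector readings (a.u.)

# Linear fit of ln(I) vs x: slope = -mu  (Beer-Lambert law)
slope, intercept = np.polyfit(thickness_cm, np.log(intensity), 1)
mu = -slope                      # linear attenuation coefficient [1/cm]
rho = 1.6                        # assumed adobe density [g/cm^3]
print(f"mu = {mu:.3f} 1/cm, mu/rho = {mu/rho:.3f} cm^2/g")
```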

Keywords: Adobe, attenuation coefficient, radiological protection, shielding, x-rays

Procedia PDF Downloads 149
669 Unconventional Calculus Spreadsheet Functions

Authors: Chahid K. Ghaddar

Abstract:

The spreadsheet engine is exploited via a non-conventional mechanism to enable novel worksheet solver functions for computational calculus. The solver functions bypass inherent restrictions on built-in math and user-defined functions by taking variable formulas as a new type of argument, while retaining purity and recursion properties. The enabling mechanism permits the integration of numerical algorithms into worksheet functions for solving virtually any computational problem that can be modelled by formulas and variables. Several examples are presented for computing integrals, derivatives, and systems of differential-algebraic equations. Incorporating the worksheet solver functions into the ubiquitous spreadsheet extends the utility of the latter as a powerful tool for computational mathematics.
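
Purely as an analogy to the "formula as argument" mechanism described above (the paper's implementation is spreadsheet-specific), a solver function that accepts a formula expression and a variable name and integrates it numerically could be sketched as follows; the function name is illustrative.

```python
# Analogy only: a "solver function" that takes a formula expression and a variable name,
# in the spirit of passing worksheet formulas to a calculus solver (names are illustrative).
from scipy.integrate import quad
import math

def integrate_formula(formula, var, lower, upper):
    """Numerically integrate a formula given as a string in one variable."""
    f = lambda x: eval(formula, {"__builtins__": {}}, {var: x, "exp": math.exp, "sin": math.sin})
    value, abs_err = quad(f, lower, upper)
    return value

print(integrate_formula("x * exp(-x)", "x", 0.0, 10.0))  # ~1.0
```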

Keywords: calculus, differential algebraic equations, solvers, spreadsheet

Procedia PDF Downloads 344
668 Views from Shores Past: Palaeogeographic Reconstructions as an Aid for Interpreting the Movement of Early Modern Humans on and between the Islands of Wallacea

Authors: S. Kealy, J. Louys, S. O’Connor

Abstract:

The island archipelago that stretches between the continents of Sunda (Southeast Asia) and Sahul (Australia - New Guinea), comprising much of modern-day Indonesia as well as Timor-Leste, represents the biogeographic region of Wallacea. The islands of Wallacea are significant archaeologically as they have never been connected to the mainlands of either Sunda or Sahul, and thus the colonization of these islands, and subsequently of Australia and New Guinea, by early modern humans would have necessitated some form of water crossing. Accurate palaeogeographic reconstructions of the Wallacean Archipelago for this time are important not only for modeling likely routes of colonization but also for reconstructing likely landscapes and hence the resources available to the first colonists. Here we present five digital reconstructions of the coastal outlines of Wallacea and Sahul (Australia and New Guinea) for the periods 65, 60, 55, 50, and 45,000 years ago, using the latest bathymetric chart and a sea-level model adjusted to account for the average uplift rate known from Wallacea. These data were also used to reconstruct island areal extent as well as topography for each time period. These reconstructions allowed us to determine the distance from the coast and the relative elevation of the earliest archaeological sites for each island where such records exist. This enabled us to approximate how much effort the exploitation of coastal resources would have taken for early colonists, and how important such resources were. These reconstructions also allowed us to estimate visibility for each island in the archipelago, and to model how intervisible each island was during the period of likely human colonization. We demonstrate how these models provide archaeologists with an important basis for visualising this ancient landscape and interpreting how it was originally viewed, traversed and exploited by its earliest modern human inhabitants.
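
As a hedged sketch of the intervisibility criterion (not the authors' GIS workflow), two islands can be treated as geometrically intervisible when their horizon distances, d ≈ √(2Rh), overlap; the elevations and separation below are placeholders.

```python
# Sketch: intervisibility of two island high points via the horizon-distance rule
# d ≈ sqrt(2*R*h); elevations and separation below are placeholders, not the paper's data.
import math

R_EARTH_KM = 6371.0

def horizon_distance_km(elevation_m):
    """Distance to the sea-level horizon for a viewpoint at the given elevation."""
    return math.sqrt(2.0 * R_EARTH_KM * elevation_m / 1000.0)

def intervisible(elev_a_m, elev_b_m, separation_km):
    """Two points are (geometrically) intervisible if their horizon circles overlap."""
    return horizon_distance_km(elev_a_m) + horizon_distance_km(elev_b_m) >= separation_km

print(intervisible(800.0, 500.0, 180.0))   # e.g. two volcanic peaks ~180 km apart
```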

Keywords: Wallacea, palaeogeographic reconstructions, islands, intervisibility

Procedia PDF Downloads 192
667 Organic Matter Removal in Urban and Agroindustry Wastewater by Chemical Precipitation Process

Authors: Karina Santos Silvério, Fátima Carvalho, Maria Adelaide Almeida

Abstract:

The impacts caused by anthropogenic actions on the water environment have been one of the main challenges of modern society. Population growth, added to water scarcity and climate change, points to the need to increase the resilience of production systems and to improve the management of the wastewater generated in the different processes. In this context, the study developed under the NETA project (New Strategies in Wastewater Treatment) aimed to evaluate the efficiency of the chemical precipitation process (CPP), using hydrated lime (Ca(OH)₂) as a reagent, on wastewater from the agroindustry sector, namely swine and slaughterhouse wastewater, as well as urban wastewater, in order to make the production means 100% circular, with a direct positive impact on the environment. The purpose of CPP is to innovate in the field of effluent treatment technologies, as it allows rapid application and is economically viable. In summary, the study was divided into four main stages: 1) application of the reagent in a single step, raising the pH to 12.5; 2) obtaining sludge and treated effluent; 3) natural neutralization of the effluent through carbonation using atmospheric CO₂; and 4) characterization and evaluation of the feasibility of the chemical precipitation technique in the treatment of different wastewaters, by determining the chemical oxygen demand (COD) and other supporting physical-chemical parameters. The results showed an average removal efficiency above 80% for all effluents, with the swine effluent reaching 90% removal, followed by the urban effluent with 88% and the slaughterhouse effluent with 81% on average. A significant improvement was also obtained with regard to color and odor removal after carbonation to pH 8.00.

Keywords: agroindustry wastewater, urban wastewater, natural carbonatation, chemical precipitation technique

Procedia PDF Downloads 66
666 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the trade-off between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
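
For intuition only, the sketch below shows a straggler-tolerant coded matrix multiplication in the spirit of MDS/polynomial codes, where any k of n worker results suffice to recover W = XY; it is not the paper's PSGPD/SGPD construction and ignores the privacy aspect.

```python
# Sketch of straggler-tolerant coded matrix multiplication: X is split into k row blocks,
# encoded into n coded blocks, and X@Y is recoverable from ANY k worker results
# (illustration only, not the paper's PSGPD/SGPD scheme).
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 5                                  # need k results out of n workers
X = rng.standard_normal((6, 4))              # row count divisible by k
Y = rng.standard_normal((4, 2))
blocks = np.split(X, k, axis=0)              # X_0, X_1, X_2

alphas = np.arange(1, n + 1, dtype=float)    # distinct evaluation points
G = np.vander(alphas, k, increasing=True)    # n x k Vandermonde generator

# Worker i computes (sum_j G[i, j] * X_j) @ Y
worker_results = [sum(G[i, j] * blocks[j] for j in range(k)) @ Y for i in range(n)]

# Suppose workers 1 and 3 straggle: decode from any k of the remaining results.
survivors = [0, 2, 4]
A = G[survivors, :]                                         # k x k, invertible (Vandermonde)
stacked = np.stack([worker_results[i] for i in survivors])  # k x (rows/k) x cols
decoded_blocks = np.einsum('ij,jrc->irc', np.linalg.inv(A), stacked)
XY = np.vstack(list(decoded_blocks))

print(np.allclose(XY, X @ Y))   # True
```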

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 108
665 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model for making a correct diagnosis of epileptic seizures and an accurate prediction of their type. In this study we consider how the Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, as this will enable patients (and caregivers) to take appropriate precautions. We use this approach because we believe that effective connectivity begins to change near seizures, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six channels of EEG from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained by the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is more suitable for predicting epileptic seizures, and in general the best results are obtained in the gamma and beta sub-bands. The research in this paper is significantly helpful for clinical applications, especially for the exploitation of online portable devices.
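
As a simplified illustration of the GC side of the analysis (synthetic signals, not the Freiburg EEG data, and without the sub-band decomposition), pairwise Granger causality between two channels can be estimated as follows:

```python
# Sketch: pairwise Granger causality between two channels with statsmodels
# (synthetic signals; channel pairing, lag order and sub-band filtering are simplified here).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):                       # y is driven by past values of x
    y[t] = 0.6 * y[t-1] + 0.5 * x[t-2] + 0.1 * rng.standard_normal()

data = np.column_stack([y, x])              # test: does column 2 (x) Granger-cause column 1 (y)?
results = grangercausalitytests(data, maxlag=5)
p_value = results[5][0]['ssr_ftest'][1]     # F-test p-value at lag 5
print(f"p = {p_value:.4g}")                 # small p -> x Granger-causes y
```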

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 449
664 Design of Cloud Service Brokerage System Intermediating Integrated Services in Multiple Cloud Environment

Authors: Dongjae Kang, Sokho Son, Jinmee Kim

Abstract:

Cloud service brokering is a new service paradigm that provides interoperability and portability of applications across multiple cloud providers. In this paper, we design a cloud service brokerage system ("any broker") supporting integrated service provisioning and SLA-based service life-cycle management. For the system design, we introduce the system concept and the overall architecture, the details of the main components, and use cases of the primary operations in the system. These features ease the concerns of cloud service providers and customers, and support an open cloud service market that increases cloud service profit and promotes the cloud service ecosystem in the cloud computing area.

Keywords: cloud service brokerage, multiple clouds, integrated service provisioning, SLA, network service

Procedia PDF Downloads 474
663 Comparative Study of Scheduling Algorithms for LTE Networks

Authors: Samia Dardouri, Ridha Bouallegue

Abstract:

Scheduling is the process of dynamically allocating physical resources to User Equipment (UE) based on scheduling algorithms implemented at the LTE base station. Various algorithms have been proposed by network researchers, as the implementation of the scheduling algorithm is an open issue in the Long Term Evolution (LTE) standard. This paper studies and compares the performance of the PF, MLWDF, and EXP/PF scheduling algorithms. The evaluation considers a single-cell-with-interference scenario for different flows, such as best effort, video, and VoIP, in pedestrian and vehicular environments, using the LTE-Sim network simulator. The comparative study is conducted in terms of system throughput, fairness index, delay, packet loss ratio (PLR), and total cell spectral efficiency.
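
For reference, the per-user metrics behind PF and M-LWDF can be sketched as below; the rates, head-of-line delays, and QoS targets are illustrative numbers, not the LTE-Sim configuration used in the paper.

```python
# Sketch of the per-user scheduling metrics compared in the paper (PF and M-LWDF);
# the rates, delays and QoS targets below are illustrative, not simulation inputs.
import math

def pf_metric(inst_rate, avg_rate):
    """Proportional Fair: favour users whose current channel is good relative to their history."""
    return inst_rate / avg_rate

def mlwdf_metric(inst_rate, avg_rate, hol_delay, delay_target, max_loss_prob):
    """M-LWDF: PF metric weighted by head-of-line delay and the QoS delay/loss constraint."""
    a = -math.log(max_loss_prob) / delay_target
    return a * hol_delay * pf_metric(inst_rate, avg_rate)

# One resource block, three users: the scheduler picks the largest metric.
users = [
    {"id": 1, "inst": 2.0e6, "avg": 1.0e6, "hol": 0.02},   # video flow
    {"id": 2, "inst": 1.5e6, "avg": 0.5e6, "hol": 0.05},   # VoIP flow
    {"id": 3, "inst": 3.0e6, "avg": 2.5e6, "hol": 0.00},   # best-effort flow
]
scores = {u["id"]: mlwdf_metric(u["inst"], u["avg"], u["hol"], 0.1, 0.05) for u in users}
print(max(scores, key=scores.get), scores)
```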

Keywords: LTE, multimedia flows, scheduling algorithms, mobile computing

Procedia PDF Downloads 370
662 Development of a New Wearable Device for Automatic Guidance Service

Authors: Dawei Cai

Abstract:

In this paper, we present a new wearable device that provides an automatic guidance service for visitors. By combining the position information from NFC with the orientation information from a 6-axis acceleration and terrestrial magnetism sensor, the direction of the wearer's head can be calculated. We developed an algorithm to calculate the device orientation based on the data from the acceleration and terrestrial magnetism sensor. If a visitor wants an explanation of an exhibit in front of him, all he has to do is lift up his mobile device. The identification program automatically identifies the status based on the information from NFC and the MEMS sensors and starts playing the explanation content for him. This service may be especially convenient for elderly people, people with disabilities, or children.
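
A minimal sketch of the orientation step, assuming a common tilt-compensated compass formulation (axis conventions and signs depend on the actual sensor mounting, so this is not the paper's exact algorithm):

```python
# Sketch of tilt-compensated heading from a 6-axis accelerometer + magnetometer reading
# (axis conventions and signs depend on the sensor mounting; values below are illustrative).
import math

def heading_degrees(ax, ay, az, mx, my, mz):
    """Estimate compass heading after compensating for device tilt (roll/pitch)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic field vector back to the horizontal plane
    bx = (mx * math.cos(pitch) + my * math.sin(pitch) * math.sin(roll)
          + mz * math.sin(pitch) * math.cos(roll))
    by = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-by, bx)) % 360.0

print(heading_degrees(0.0, 0.0, 9.81, 20.0, 5.0, -40.0))  # device held flat, arbitrary field
```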

Keywords: wearable device, ubiquitous computing, guide system, MEMS sensor, NFC

Procedia PDF Downloads 411
661 A Low-Area Fully-Reconfigurable Hardware Design of Fast Fourier Transform System for 3GPP-LTE Standard

Authors: Xin-Yu Shih, Yue-Qu Liu, Hong-Ru Chou

Abstract:

This paper presents a low-area and fully-reconfigurable Fast Fourier Transform (FFT) hardware design for the 3GPP-LTE communication standard. It fully supports 32 different FFT sizes, up to 2048 FFT points. In addition, a special processing element is developed to make reconfigurable computing possible, while a first-in first-out (FIFO) scheduling scheme design technique is proposed for hardware-friendly FIFO resource arrangement. In a synthesized chip realization in TSMC 40 nm CMOS technology, the hardware circuit occupies a core area of only 0.2325 mm² and dissipates 233.5 mW at a maximum operating frequency of 250 MHz.

Keywords: reconfigurable, fast Fourier transform (FFT), single-path delay feedback (SDF), 3GPP-LTE

Procedia PDF Downloads 266
660 Split Monotone Inclusion and Fixed Point Problems in Real Hilbert Spaces

Authors: Francis O. Nwawuru

Abstract:

The convergence analysis of split monotone inclusion problems and fixed point problems of certain nonlinear mappings is investigated in the setting of real Hilbert spaces. An inertial extrapolation term in the spirit of Polyak is incorporated to speed up the rate of convergence. Under standard assumptions, strong convergence of the proposed algorithm is established without computing the resolvent operator or involving the Yosida approximation method. The stepsize involved in the algorithm does not depend on the spectral radius of the linear operator. Furthermore, applications of the proposed algorithm to solving some related optimization problems are also considered. Our result complements and extends numerous results in the literature.
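
The abstract does not reproduce the iteration itself; a generic Polyak-type inertial step for such schemes has the following shape, with θₙ, λₙ, A, and T as schematic placeholders rather than the paper's exact operators and stepsizes.

```latex
% Schematic Polyak-type inertial step (not the paper's exact algorithm):
\begin{aligned}
  w_n     &= x_n + \theta_n\,(x_n - x_{n-1}) \quad && \text{(inertial extrapolation)}\\
  x_{n+1} &= T\bigl(w_n - \lambda_n A w_n\bigr)    && \text{(forward step composed with a mapping } T\text{)}
\end{aligned}
```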

Keywords: fixed point, Hilbert space, monotone mapping, resolvent operators

Procedia PDF Downloads 38
659 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models, and both academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. The study of process capability indices (PCIs) is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, and when observations are autocorrelated the classical control charts exhibit nonrandom patterns and a lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed when data are both independent and autocorrelated. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
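
For concreteness, the usual point estimators Cp = (USL − LSL)/(6s) and Cpk = min((USL − x̄)/(3s), (x̄ − LSL)/(3s)) can be computed on AR(1) data as sketched below; the specification limits, φ, and sample size are illustrative, and how the confidence intervals must widen under autocorrelation is precisely what the paper addresses.

```python
# Sketch: point estimates of Cp and Cpk from AR(1) (autocorrelated) data
# (specification limits, phi and sample size below are illustrative choices).
import numpy as np

rng = np.random.default_rng(0)
n, phi, mu, sigma_e = 200, 0.5, 10.0, 0.1
x = np.empty(n)
x[0] = mu
for t in range(1, n):                       # AR(1): x_t = mu + phi*(x_{t-1} - mu) + e_t
    x[t] = mu + phi * (x[t-1] - mu) + rng.normal(0.0, sigma_e)

LSL, USL = 9.5, 10.5
xbar, s = x.mean(), x.std(ddof=1)
Cp  = (USL - LSL) / (6.0 * s)
Cpk = min((USL - xbar) / (3.0 * s), (xbar - LSL) / (3.0 * s))
print(f"Cp = {Cp:.3f}, Cpk = {Cpk:.3f}")    # interval estimates must account for phi
```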

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 373
658 Analyzing the Impact of DCF and PCF on WLAN Network Standards 802.11a, 802.11b, and 802.11g

Authors: Amandeep Singh Dhaliwal

Abstract:

Networking solutions, particularly wireless local area networks, have revolutionized technological advancement. Wireless Local Area Networks (WLANs) have gained a lot of popularity as they provide location-independent network access between computing devices. A number of access methods are used in wireless networks, among which DCF and PCF are the fundamental ones. This paper emphasizes the impact of the DCF and PCF access mechanisms on the performance of the IEEE 802.11a, 802.11b, and 802.11g standards. Performance is evaluated among these three standards using the above-mentioned access mechanisms on the basis of various parameters, viz. throughput, delay, and load. The analysis revealed superior throughput performance with low delays for the 802.11g standard as compared to the 802.11a/b standards using both DCF and PCF access methods.

Keywords: DCF, IEEE, PCF, WLAN

Procedia PDF Downloads 411
657 A Common Automated Programming Platform for Knowledge Based Software Engineering

Authors: Ivan Stanev, Maria Koleva

Abstract:

A common platform for automated programming (CPAP) is defined in detail. Two versions of CPAP are described: a cloud-based version (including the set of components for classic programming and the set of components for combined programming) and a KBASE-based version (including the set of components for automated programming and the set of components for ontology programming). Four KBASE products (a module for automated programming of robots, an intelligent product manual, an intelligent document display, and an intelligent form generator) are analyzed, and CPAP's contributions to automated programming are presented.

Keywords: automated programming, cloud computing, knowledge based software engineering, service oriented architecture

Procedia PDF Downloads 330
656 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna

Authors: Gurkirandeep Kaur, Rana Pratap Yadav

Abstract:

This experimental study aims to examine the radiation characteristics of different plasma structures of a surface-wave-driven plasma antenna with an automated measuring system. In this study, a 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by a surface wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 W and gas pressure between 0.01 and 0.05 mb. The study reveals that a single-structured plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube, achieved by altering plasma properties such as working pressure, operating frequency, input RF power, and discharge tube dimensions, i.e., length, radius, and thickness. It is also reported that plasma length, electron density, and conductivity are functions of the operating plasma parameters and are controlled by changing the working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automation-based radiation measuring system has been fabricated and is presented in detail. This automated system combines a controller, DC servo motors, a vector network analyzer, and a computing device to evaluate the radiation intensity, directivity, gain, and efficiency of the plasma antenna. In this system, the controller is connected to multiple motors that move aluminum shafts in both the elevation and azimuthal planes, whereas radiation from the plasma monopole antenna is measured by a Vector Network Analyzer (VNA), which is further wired to the computing device to display the radiation in polar-plot form. The radiation characteristics of both continuous and array plasma monopole antennas have been studied for various working plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna. The radiation from the plasma monopole antenna is significantly influenced by the plasma properties, which provides a wide range of radiation patterns in which desired radiation parameters like beam-width, direction of radiation, radiation intensity, and antenna efficiency can be achieved with a single monopole. Due to this wide range of selectivity in the radiation pattern, it can meet the demand for wider bandwidth and high data speed in communication systems. Moreover, the developed system provides an efficient and cost-effective solution for measuring the far-field radiation pattern of any kind of antenna system.

Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave

Procedia PDF Downloads 108
655 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulties of explaining the models for regulatory purposes.
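
As a hedged sketch of the WoE idea (synthetic counts, not the Dun and Bradstreet portfolios), the classic bin-level WoE and a hypothetical variant that backs WoE out of an ML model's predicted default probabilities could look like this:

```python
# Sketch: Weight of Evidence per score bin, and the "matching" idea where a bin's WoE is
# backed out from an ML model's average predicted bad rate instead of raw good/bad counts.
import numpy as np

def woe_from_counts(goods, bads):
    """Classic WoE: log of the bin's share of goods over its share of bads."""
    dist_good = goods / goods.sum()
    dist_bad = bads / bads.sum()
    return np.log(dist_good / dist_bad)

def woe_from_ml_scores(avg_pd_per_bin, portfolio_pd):
    """Hypothetical WoE estimate from an ML model's mean predicted bad probability per bin."""
    odds_bin = (1.0 - avg_pd_per_bin) / avg_pd_per_bin
    odds_all = (1.0 - portfolio_pd) / portfolio_pd
    return np.log(odds_bin / odds_all)

goods = np.array([400.0, 300.0, 200.0, 100.0])
bads  = np.array([10.0,  20.0,  40.0,  80.0])
print(woe_from_counts(goods, bads))
print(woe_from_ml_scores(np.array([0.02, 0.05, 0.12, 0.30]), portfolio_pd=0.13))
```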

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 122
654 Molecular Dynamics Simulation on Nanoelectromechanical Graphene Nanoflake Shuttle Device

Authors: Eunae Lee, Oh-Kuen Kwon, Ki-Sub Kim, Jeong Won Kang

Abstract:

We investigated, via molecular dynamics simulations, the dynamic properties of a graphene-nanoribbon (GNR) memory encapsulating a graphene-nanoflake (GNF) shuttle, which has the potential to be applicable as a non-volatile random access memory. This work explicitly demonstrates that a GNR encapsulating a GNF shuttle can be applied to non-volatile memory. The potential well originates from the increase of the attractive van der Waals (vdW) energy between the GNRs as the GNF approaches the edges of the GNRs, so the bistable positions are located near the edges of the GNRs. Such a nanoelectromechanical non-volatile memory based on graphene is also applicable to the development of switches, sensors, and quantum computing.

Keywords: graphene nanoribbon, graphene nanoflake, shuttle memory, molecular dynamics

Procedia PDF Downloads 443
653 Alexa (Machine Learning) in Artificial Intelligence

Authors: Loulwah Bokhari, Jori Nazer, Hala Sultan

Abstract:

Nowadays, artificial intelligence (AI) is used as a foundation for many activities in modern computing applications at home, in vehicles, and in businesses. Many modern machines are built to carry out a specific activity or purpose. This is where the Amazon Alexa application comes in, as it is used as a virtual assistant. The purpose of this paper is to explore the use of Amazon Alexa among people and how it has improved and simplified daily tasks for many of them. We asked our participants several questions regarding Amazon Alexa: whether they had recently used or heard of it, which of the different tasks it provides they used, and whether it successfully satisfied their needs. Overall, we found that participants who had recently used Alexa found it helpful in their daily tasks.

Keywords: artificial intelligence, Echo system, machine learning, feature for feature match

Procedia PDF Downloads 106
652 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors make blood supply-chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. The SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
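
For intuition, a brute-force 1-median selection over candidate sites (placeholder coordinates and demand weights, not the Ontario data) looks like this:

```python
# Sketch of the 1-median idea used for the blood network: pick the candidate facility that
# minimises total demand-weighted Euclidean distance (coordinates/demands are placeholders).
import math

def one_median(candidates, demand_nodes):
    """Return the candidate site with minimum total weighted distance to all demand nodes."""
    costs = {site: sum(w * math.dist(site, p) for p, w in demand_nodes) for site in candidates}
    best = min(costs, key=costs.get)
    return best, costs[best]

# Hypothetical city coordinates (km grid) and daily platelet demand weights.
demand_nodes = [((0, 0), 30), ((40, 10), 12), ((15, 35), 20), ((60, 50), 8)]
candidates = [(0, 0), (20, 15), (40, 10)]
best_site, best_cost = one_median(candidates, demand_nodes)
print(best_site, round(best_cost, 1))
```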

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 494
651 A Contribution to Human Activities Recognition Using Expert System Techniques

Authors: Malika Yaici, Soraya Aloui, Sara Semchaoui

Abstract:

This paper deals with human activity recognition from sensor data, an active research area in which the main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed; the recognition is performed using the objects, object states, and gestures, taking into account the context (the location of the objects and of the person performing the activity, and the duration of the elementary actions and of the activity). The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision.

Keywords: human activity recognition, ubiquitous computing, context-awareness, expert system

Procedia PDF Downloads 90
650 Misleading Node Detection and Response Mechanism in Mobile Ad-Hoc Network

Authors: Earleen Jane Fuentes, Regeene Melarese Lim, Franklin Benjamin Tapia, Alexis Pantola

Abstract:

A Mobile Ad-hoc Network (MANET) is an infrastructure-less network of mobile devices, also known as nodes. These nodes heavily rely on each other's resources, such as memory, computing power, and energy. Thus, some nodes may become selective in forwarding packets in order to conserve their resources. These nodes are called misleading nodes. Several reputation-based techniques (e.g., CORE, CONFIDANT, LARS, SORI, OCEAN) and acknowledgment-based techniques (e.g., TWOACK, S-TWOACK, EAACK) have been proposed to detect such nodes, but they do not appropriately punish misleading nodes. Hence, this paper addresses the limitations of these techniques using a system called MINDRA.

Keywords: acknowledgment-based techniques, mobile ad-hoc network, selfish nodes, reputation-based techniques

Procedia PDF Downloads 369
649 The Determination of the Phosphorous Solubility in the Iron by the Function of the Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component in steels because it changes the mechanical properties and can modify the structure. Phosphorus can form the Fe₃P compound, which segregates at the ferrite grain boundaries at the nano- or microscale. This intermetallic compound decreases the mechanical properties; for example, it causes blue brittleness, i.e., embrittlement produced by the segregated particles at 200-300 °C. This work describes the phosphide solubility as affected by the other components. We performed calculations for the Ni, Mo, Cu, S, V, C, Si, Mn, and Cr elements with the Thermo-Calc software and predict the effects by approximate functions. The binary Fe-P system has a solubility line described by the equation ln w₀ = -3.439 - 1.903/T, where w₀ is the maximum dissolved phosphorus concentration in weight percent and T is the temperature in kelvin. The equation shows that phosphorus becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese, and chromium change the maximum dissolved concentration; these functions depend on the concentrations of the added elements, and the solubility is lower when these elements are present in the steel. Copper, sulphur, and carbon have no effect on the phosphorus solubility. We predict that in all cases the maximum solubility increases as the temperature rises. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals. In the eutectoid areas the ferrite, the iron phosphide, and the metal(III) phosphide are in equilibrium. In this modelling we predicted which elements help to avoid phosphide segregation and which do not. These data are important when we make or choose steels in which phosphide segregation must be prevented.

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 614
648 Knowledge and Skills Requirements for Software Developer Students

Authors: J. Liebenberg, M. Huisman, E. Mentz

Abstract:

It is widely acknowledged that there is a shortage of software developers, not only in South Africa, but also worldwide. Despite reports on a gap between industry needs and software education, the gap has mostly been explored in quantitative studies. This paper reports on the qualitative data of a mixed method study of the perceptions of professional software developers regarding what topics they learned from their formal education and the importance of these topics to their actual work. The analysis suggests that there is a gap between industry’s needs and software development education and the following recommendations are made: 1) Real-life projects must be included in students’ education; 2) Soft skills and business skills must be included in curricula; 3) Universities must keep the curriculum up to date; 4) Software development education must be made accessible to a diverse range of students.

Keywords: software development education, software industry, IT workforce, computing curricula

Procedia PDF Downloads 451
647 Game-Based Learning in a Higher Education Course: A Case Study with Minecraft Education Edition

Authors: Salvador Antelmo Casanova Valencia

Abstract:

This study documents the use of the Minecraft Education Edition application to explore immersive game-based learning environments. We analyze the contributions of fourth-year university students who are pursuing a degree in Administrative Computing at the Universidad Michoacana de San Nicolas de Hidalgo. In this study, descriptive data and statistical inference are detailed for a quasi-experimental design using the Wilcoxon test, with the instruments providing data validation. Game-based learning in immersive environments necessarily implies greater student participation and commitment, resulting in significant improvements in study and motivation and promoting cooperation and autonomous learning.

Keywords: game-based learning, gamification, higher education, Minecraft

Procedia PDF Downloads 153
646 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are in some way incomplete or incorrect due to censoring, and such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence interval widths, and smaller root mean squared errors than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm under the progressive type-II censoring scheme.
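
As a simplified illustration of the Newton-Raphson side (complete, uncensored data and a one-parameter Rayleigh scale, so deliberately simpler than the paper's two-parameter, progressively censored likelihood), the update σ ← σ − ℓ'(σ)/ℓ''(σ) has the same shape as the iteration used in the study.

```python
# Simplified illustration of Newton-Raphson for a Rayleigh scale MLE with complete
# (uncensored) data; the paper's censored, two-parameter likelihood is more involved,
# but the update sigma <- sigma - l'(sigma)/l''(sigma) is the same idea.
import numpy as np

rng = np.random.default_rng(0)
x = rng.rayleigh(scale=2.0, size=500)
n, s2 = len(x), np.sum(x**2)

def dloglik(sigma):              # first derivative of the Rayleigh log-likelihood in sigma
    return -2*n/sigma + s2/sigma**3

def d2loglik(sigma):             # second derivative
    return 2*n/sigma**2 - 3*s2/sigma**4

sigma = x.mean()                 # starting value
for _ in range(20):
    step = dloglik(sigma) / d2loglik(sigma)
    sigma -= step
    if abs(step) < 1e-10:
        break

print(sigma, np.sqrt(s2 / (2*n)))   # NR estimate vs the closed-form MLE (they agree)
```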

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 151
645 The Right to Data Portability and Its Influence on the Development of Digital Services

Authors: Roman Bieda

Abstract:

The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller, in a structured, commonly used and machine-readable format, and to transmit these data to another data controller. The right to data portability, by facilitating the transfer of personal data between IT environments (e.g., applications), will also facilitate changing the provider of services (e.g., changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.

Keywords: data portability, digital market, GDPR, personal data

Procedia PDF Downloads 461
644 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model

Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech

Abstract:

Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and requires complementary examination. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmography (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity of characteristics, which poses problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method using videonystagmography (VNG) is proposed, based on an estimation of pupil movements in the case of uncontrolled motion, in order to obtain efficient and reliable diagnosis results. First, an estimation of the pupil displacement vectors using the Hough Transform (HT) is performed to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using Fisher criterion evaluation for discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features using a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and provides an accurate diagnosis for medical devices.
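
A minimal sketch of the feature-selection-plus-SVM pipeline on synthetic data (not the VNG features), assuming the two-class Fisher score (μ₁ − μ₂)²/(σ₁² + σ₂²):

```python
# Sketch of the feature-selection + classification pipeline: rank features by the Fisher
# criterion, keep the top ones, then classify with an SVM (synthetic data, not VNG features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features = 60, 12
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
X1 = rng.normal(0.0, 1.0, (n_per_class, n_features))
X1[:, :3] += 1.5                      # only the first 3 features are informative
X = np.vstack([X0, X1])
y = np.array([0]*n_per_class + [1]*n_per_class)

def fisher_scores(X, y):
    """(mean1 - mean2)^2 / (var1 + var2) per feature, for a two-class problem."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1)**2 / (v0 + v1)

top = np.argsort(fisher_scores(X, y))[::-1][:3]        # keep the 3 best-ranked features
acc = cross_val_score(SVC(kernel='rbf', C=1.0), X[:, top], y, cv=5).mean()
print(top, f"CV accuracy = {acc:.2f}")
```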

Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM

Procedia PDF Downloads 129
643 Searching k-Nearest Neighbors to be Appropriate under Gaming Environments

Authors: Jae Moon Lee

Abstract:

In general, algorithms for finding continuous k-nearest neighbors have been researched for location-based services, which periodically monitor moving objects such as vehicles and mobile phones. Those studies assume an environment in which the number of query points is much smaller than the number of moving objects and the query points are fixed rather than moving. In gaming environments, this problem arises when computing the next movement of each agent while considering its neighbors, as in flocking, crowd, and robot simulations. In this case, every moving object becomes a query point, so the number of query points equals the number of moving objects and the query points themselves are moving. In this paper, we analyze how the existing algorithms, designed for location-based services, perform under gaming environments.
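
As a hedged sketch of the gaming setting described above, where every agent is simultaneously a data point and a query point, a per-frame k-d tree rebuild and all-agents k-NN query might look like this (scipy's cKDTree and a toy flocking update, not any of the surveyed algorithms):

```python
# Sketch of the gaming-environment setting: every moving agent is both a data point and a
# query point, so neighbours for all agents are recomputed each frame (here with a k-d tree).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 100.0, size=(500, 2))    # 500 agents on a 100x100 map
velocities = rng.normal(0.0, 1.0, size=(500, 2))

k = 3
tree = cKDTree(positions)                              # rebuilt every frame as agents move
dists, idx = tree.query(positions, k=k + 1)            # k+1 because each agent finds itself
neighbours = idx[:, 1:]                                # drop the self-match in column 0

# Toy flocking-style update: steer each agent slightly towards its neighbours' mean heading.
mean_heading = velocities[neighbours].mean(axis=1)
velocities = 0.9 * velocities + 0.1 * mean_heading
positions += velocities * 0.1                          # advance one simulation step
print(neighbours[:3])
```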

Keywords: flocking behavior, heterogeneous agents, similarity, simulation

Procedia PDF Downloads 285