Search results for: approximate bayesian computation
524 On the Application and Comparison of Two Geostatistics Methods in the Parameterisation Step to Calibrate Groundwater Model: Grid-Based Pilot Point and Head-Zonation Based Pilot Point Methods
Authors: Dua K. S. Y. Klaas, Monzur A. Imteaz, Ika Sudiayem, Elkan M. E. Klaas, Eldav C. M. Klaas
Abstract:
Properly selecting the most suitable and effective geostatistics method in the parameterization step of groundwater modeling is critical to attain a satisfactory model. In this paper, two geostatistics methods, i.e., the Grid-Based Pilot Point (GB-PP) and Head-Zonation Based Pilot Point (HZB-PP) methods, were applied in an eogenetic karst catchment and compared using model performance and computation time as the criteria. Overall, the results show that appropriate selection of method is substantial in the parameterization of physically-based groundwater models, as it influences both accuracy and simulation time. It was found that the GB-PP method performed better than the HZB-PP method. However, given its model performance, the HZB-PP method remains promising for further application in groundwater modeling.
Keywords: groundwater model, geostatistics, pilot point, parameterization step
523 Adopting Cloud-Based Techniques to Reduce Energy Consumption: Toward a Greener Cloud
Authors: Sandesh Achar
Abstract:
The cloud computing industry has set new goals for better service delivery and deployment, so that anyone can access services such as computation, application, and storage anytime. Cloud computing promises new possibilities for approaching sustainable solutions to deploy and advance services in this distributed environment. This work explores energy-efficient approaches and how cloud-based architecture can reduce energy consumption levels among enterprises leveraging cloud computing services. Adopting cloud-based networking, databases, and server machines provides a comprehensive means of achieving the potential gains in energy efficiency that cloud computing offers. In energy-efficient cloud computing, virtualization is one aspect that can integrate several technologies to achieve consolidation and better resource utilization. Moreover, the Green Cloud Architecture for cloud data centers is discussed in terms of cost, performance, and energy consumption, and appropriate solutions for various application areas are provided.
Keywords: greener cloud, cloud computing, energy efficiency, energy consumption, metadata tags, green cloud advisor
522 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems
Authors: Rodolfo Lorbieski, Silvia Modesto Nassar
Abstract:
Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining the outputs of a pair of meta-classifiers (level 1) from the tree-based and Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles that form the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, across three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
Keywords: stacking, multi-layers, ensemble, multi-class
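As an illustration of the three-level idea described above, here is a minimal sketch using scikit-learn's nested StackingClassifier; the specific estimators, parameters, and the iris dataset are stand-in assumptions, not the paper's setup:

```python
# Three-level stacking: level-0 base classifiers feed level-1 meta-classifiers,
# whose outputs are combined by a final level-2 decision-maker.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Level 1: each meta-classifier is itself a stack of level-0 classifiers
# drawn from a single family (tree-based or Bayesian).
level1_tree = StackingClassifier(
    estimators=[("dt1", DecisionTreeClassifier(max_depth=3)),
                ("dt2", DecisionTreeClassifier(max_depth=None))],
    final_estimator=RandomForestClassifier(n_estimators=50))
level1_bayes = StackingClassifier(
    estimators=[("nb1", GaussianNB()),
                ("nb2", GaussianNB(var_smoothing=1e-8))],
    final_estimator=GaussianNB())

# Level 2: the final decision-maker combines the two level-1 ensembles.
level2 = StackingClassifier(
    estimators=[("trees", level1_tree), ("bayes", level1_bayes)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)
print("mean CV accuracy:", cross_val_score(level2, X, y, cv=5).mean())
```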
521 A Study of Population Growth Models and Future Population of India
Authors: Sheena K. J., Jyoti Badge, Sayed Mohammed Zeeshan
Abstract:
This is a comparative study of exponential and logistic population growth models in India. India is the second most populous country in the world, just behind China, and is projected to take first place by next year. The Indian population has grown at a remarkably higher rate than that of other countries over the past 20 years. Many scientists and demographers have formulated various models of population growth in order to study and predict the future population. Some of these models are the Fibonacci population growth model, the exponential growth model, the logistic growth model, the Lotka-Volterra model, etc. These models have been effective in the past, to an extent, in predicting the population. However, a detailed comparative study between the population models is essential to arrive at a more accurate one. This research study therefore analyzes and compares the two population models under consideration, the exponential and logistic growth models, thereby identifying the more effective one. Using the census data of 2011, the approximate populations for 2016 to 2031 were calculated for 20 Indian states using both models and compared with the actual population. On comparing the results of both models, it was found that the logistic population model is more accurate than the exponential model, and using this model, the future population can be predicted in a more effective way. This gives researchers an insight into the effective models of population and how effective these population models are in predicting the future population.
Keywords: population growth, population models, exponential model, logistic model, Fibonacci model, Lotka-Volterra model, future population prediction, demographers
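To make the comparison concrete, here is a minimal sketch of the two models; the growth rate, carrying capacity, and baseline below are assumed illustrative values, not the paper's fitted parameters:

```python
# Exponential vs. logistic projections from a 2011 census baseline.
import numpy as np

P0 = 1.21e9        # India's 2011 census population (approx.)
r = 0.0125         # assumed annual growth rate
K = 2.5e9          # assumed carrying capacity for the logistic model

def exponential(t):            # P(t) = P0 * exp(r t)
    return P0 * np.exp(r * t)

def logistic(t):               # P(t) = K / (1 + ((K - P0)/P0) * exp(-r t))
    return K / (1 + (K - P0) / P0 * np.exp(-r * t))

for year in (2016, 2021, 2026, 2031):
    t = year - 2011
    print(year, f"exp={exponential(t):.4e}", f"log={logistic(t):.4e}")
```

The logistic curve flattens as the population approaches K, which is why it tends to track mature populations better than unbounded exponential growth.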
520 Analysis of Two Methods to Estimate Stochastic Demand in the Vehicle Routing Problem
Authors: Fatemeh Torfi
Abstract:
Estimation of stochastic demand in physical distribution in general, and efficient transport route management in particular, is emerging as a crucial factor in the urban planning domain. It is particularly important in municipalities such as Tehran, where sound demand management calls for a realistic analysis of the routing system. The methodology involved critically investigating a fuzzy least-squares linear regression approach (FLLR) to estimate the stochastic demands in the vehicle routing problem (VRP), bearing in mind the customers' preference order. An FLLR method is proposed for solving the VRP with stochastic demands. The approximate-distance fuzzy least-squares (ADFL) estimator is applied to original data taken from a case study. The SSR values of the ADFL estimator and the real demand are obtained and then compared to the SSR values of the nominal demand and the real demand. Empirical results showed that the proposed methods can be viable in solving problems under circumstances of vague and imprecise performance ratings. The results further showed that the ADFL is a realistic and efficient estimator for facing the stochastic demand challenges in vehicle routing system management and solving relevant problems.
Keywords: fuzzy least-squares, stochastic, location, routing problems
519 Application of Residual Correction Method on Hyperbolic Thermoelastic Response of Hollow Spherical Medium in Rapid Transient Heat Conduction
Authors: Po-Jen Su, Huann-Ming Chou
Abstract:
In this article, we use the residual correction method to deal with transient thermoelastic problems in a hollow spherical region when the continuum medium possesses spherically isotropic thermoelastic properties. Based on linear thermoelastic theory, the equations of hyperbolic heat conduction and thermoelastic motion were combined to establish a thermoelastic dynamic model that considers the deformation acceleration effect and the non-Fourier effect under transient thermal shock. The approximate solutions of temperature and displacement distributions are obtained using the residual correction method based on the maximum principle, in combination with the finite difference method, making it easier and faster to obtain upper and lower approximations of exact solutions. The proposed method is found to be an effective numerical method with satisfactory accuracy. Moreover, the results show that the effect of transient thermal shock induced by deformation acceleration is enhanced by non-Fourier heat conduction, with increased peak stress. The influence on the stress increases with the thermal relaxation time.
Keywords: maximum principle, non-Fourier heat conduction, residual correction method, thermo-elastic response
518 Digital Homeostasis: Tangible Computing as a Multi-Sensory Installation
Authors: Andrea Macruz
Abstract:
This paper explores computation as a process for design by examining how computers can become more than an operative strategy in a designer's toolkit. It documents this, building upon concepts of neuroscience and Antonio Damasio's Homeostasis Theory, which is the control of bodily states through feedback intended to keep conditions favorable for life. To do this, it follows a methodology through algorithmic drawing and discusses the outcomes of three multi-sensory design installations, which culminated from a course in an academic setting. It explains both the studio process that took place to create the installations and the computational process that was developed, related to the fields of algorithmic design and tangible computing. It discusses how designers can use computational range to achieve homeostasis related to sensory data in a multi-sensory installation. The outcomes show clearly how people and computers interact with different sensory modalities and affordances. They propose using computers as meta-physical stabilizers rather than tools.
Keywords: algorithmic drawing, Antonio Damasio, emotion, homeostasis, multi-sensory installation, neuroscience
517 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all systems published to date when evaluated against the exact same SEER data (period of 1973-2002). It gives an 87.39% weighted average F-score, compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
Keywords: classifier ensemble, breast cancer survivability, data mining, SEER
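A minimal sketch of the described architecture follows; the dataset, feature groups, and the substitution of a second Naïve Bayes for the Bayesian network are illustrative assumptions, not the authors' SEER pipeline:

```python
# Three underlying classifiers on different feature subsets, with a Naive
# Bayes combiner trained on their predicted class probabilities.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Assumed stand-in feature groups: three disjoint column subsets.
groups = [slice(0, 10), slice(10, 20), slice(20, 30)]
bases = [DecisionTreeClassifier(max_depth=5), GaussianNB(), GaussianNB()]

# Train each underlying classifier on its own feature group and collect
# its confidence scores (class probabilities).
probas_tr, probas_te = [], []
for clf, g in zip(bases, groups):
    clf.fit(X_tr[:, g], y_tr)
    probas_tr.append(clf.predict_proba(X_tr[:, g]))
    probas_te.append(clf.predict_proba(X_te[:, g]))

# Final Naive Bayes decision-maker over the stacked confidence scores.
meta = GaussianNB().fit(np.hstack(probas_tr), y_tr)
pred = meta.predict(np.hstack(probas_te))
print("weighted F1:", f1_score(y_te, pred, average="weighted"))
```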
516 Probabilistic and Stochastic Analysis of a Retaining Wall for C-Φ Soil Backfill
Authors: André Luís Brasil Cavalcante, Juan Felix Rodriguez Rebolledo, Lucas Parreira de Faria Borges
Abstract:
A methodology for the probabilistic analysis of active earth pressure on a retaining wall for c-Φ soil backfill is described in this paper. The Rosenblueth point estimate method is used to measure the failure probability of a gravity retaining wall. The basic principle of this methodology is to use two point estimates, i.e., the standard deviation and the mean value, to examine a variable in the safety analysis. The simplicity of this framework assures its wide application. The calculation requires 2ⁿ repetitions during the analysis, since the system is governed by n variables. In this study, a probabilistic model based on the Rosenblueth approach for computing the overturning probability of failure of a retaining wall is presented. The obtained results have shown the advantages of this kind of model in comparison with the deterministic solution. In a relatively easy way, the uncertainty in the wall and fill parameters is taken into account, and some practical results can be obtained for the retaining structure design.
Keywords: retaining wall, active earth pressure, backfill, probabilistic analysis
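The following minimal sketch illustrates the 2ⁿ-point Rosenblueth procedure on a hypothetical two-variable performance function; the variable names, means, and standard deviations are assumptions, not the paper's wall model:

```python
# Rosenblueth's two-point estimate method: evaluate the model at mean +/- one
# standard deviation of each of the n variables, i.e. 2**n combinations.
import itertools
import numpy as np

def safety_factor(phi, c):         # hypothetical performance function
    return 0.08 * phi + 0.02 * c   # stands in for the overturning model

means = {"phi": 30.0, "c": 10.0}   # assumed means (deg, kPa)
stds = {"phi": 3.0, "c": 2.0}      # assumed standard deviations

samples = []
for signs in itertools.product((-1, 1), repeat=2):   # 2**n = 4 points
    args = {k: means[k] + s * stds[k] for k, s in zip(means, signs)}
    samples.append(safety_factor(**args))

# Equal-weight point estimates of the mean and spread of the safety factor.
print(f"E[FS]={np.mean(samples):.3f}, sd[FS]={np.std(samples):.3f}")
```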
515 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data
Authors: Benjamin Leiby, Darryl Ahner
Abstract:
This study demonstrates an alternative stochastic imputation approach for large datasets when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values at well over the common threshold of 20% missingness. The methodology capitalizes on correlation while using model residuals to provide the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. Static tolerances common in most packages are replaced with tailorable tolerances that exploit residuals to fit each data element. The methodology evaluation includes observing computation time, model fit, and the comparison of known values to replaced values created through imputation. Overall, the country conflict dataset illustrates promise with modeling first-order interactions while presenting a need for further refinement that mimics predictive mean matching.
Keywords: correlation, country conflict, imputation, stochastic regression
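For reference, a minimal sketch of the underlying stochastic regression imputation idea, on synthetic data rather than the country conflict dataset:

```python
# Stochastic regression imputation: fit a model on complete rows, then fill
# gaps with the prediction plus noise resampled from the model residuals,
# so imputed values carry the model's uncertainty.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)
y[rng.random(500) < 0.3] = np.nan              # ~30% missingness

obs = ~np.isnan(y)
slope, intercept = np.polyfit(x[obs], y[obs], 1)
residuals = y[obs] - (slope * x[obs] + intercept)

# Impute: deterministic prediction + a randomly resampled residual.
miss = np.isnan(y)
y[miss] = (slope * x[miss] + intercept
           + rng.choice(residuals, size=miss.sum()))
print("imputed", miss.sum(), "values")
```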
514 A Cohort Study of Early Cardiologist Consultation by Telemedicine on the Critical Non-STEMI Inpatients
Authors: Wisit Wichitkosoom
Abstract:
Objectives: To find out more about the effect of early cardiologist consultation, using a simple technology, on the diagnosis and early proper management of patients with Non-STEMI at the emergency departments of district hospitals without a cardiologist on site before transfer. Methods: A cohort study was performed in Udonthani general hospital in Udonthani province. From 1 October 2012 to 30 September 2013, 892 patients were diagnosed with Non-STEMI. The patients, with a mean age of 46.8 years, had been transferred because of a Non-STEMI diagnosis over a 12-week study period. Transferred patients, in addition to receiving proper care, were offered a cardiologist consultation; the average transfer time to Udonthani hospital was 1.5 hours. The main outcome measure was length of hospital stay; mortality at 3 months, inpatient investigation, and the transfer rate to a higher-facility hospital were also studied. Results: Hospital stay was significantly shorter for those who did not consult a cardiologist (hazard ratio 1.19; approximate 95% CI 1.001 to 1.251; p = 0.039). 136 cases were transferred to a higher-facility hospital. No statistically significant difference in overall mortality was found between the groups (p = 0.068). Conclusions: Early cardiologist consultation can reduce the length of hospital stay for patients with cardiovascular conditions outside of a cardiac center. This basic technology can be applied for patient safety.
Keywords: critical, telemedicine, safety, non STEMI
513 Location Management in Wireless Sensor Networks with Mobility
Authors: Amrita Anil Agashe, Sumant Tapas, Ajay Verma, Yogesh Sonavane, Sourabh Yeravar
Abstract:
Due to advancements in MEMS technology, wireless sensor networks have today gained a lot of importance. Their wide range of applications includes environmental and habitat monitoring, object localization, target tracking, security surveillance, etc. Wireless sensor networks consist of tiny sensor devices called motes. The constrained computation power, battery power, storage capacity and communication bandwidth of the tiny motes pose challenging problems in the design and deployment of such systems. In this paper, we propose a ubiquitous framework for a Real-Time Tracking, Sensing and Management System using IITH motes. We also explain the algorithm that we have developed for location management in wireless sensor networks with respect to mobility. Our framework and algorithm can be used to detect emergency events and safety threats and to provide warning signals for handling the emergency.
Keywords: mobility management, motes, multihop, wireless sensor networks
512 Analytical Downlink Effective SINR Evaluation in LTE Networks
Authors: Marwane Ben Hcine, Ridha Bouallegue
Abstract:
The aim of this work is to provide an original analytical framework for downlink effective SINR evaluation in LTE networks. The classical single-carrier SINR performance evaluation is extended to multi-carrier systems operating over frequency-selective channels. Extension is achieved by expressing the link outage probability in terms of the statistics of the effective SINR. For effective SINR computation, the exponential effective SINR mapping (EESM) method is used in this work. A closed-form expression for the link outage probability is achieved assuming a log skew normal approximation for the single-carrier case. We then rely on the lognormal approximation to express the exponential effective SINR distribution as a function of the mean and standard deviation of the SINR of a generic subcarrier. The achieved formulas are easily computable and can be obtained for a user equipment (UE) located at any distance from its serving eNodeB. Simulations show that the proposed framework provides results with accuracy within 0.5 dB.
Keywords: LTE, OFDMA, effective SINR, log skew normal approximation
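For reference, a minimal sketch of the standard EESM computation named in the abstract; the sample SINR values and β below are illustrative (β is calibrated per modulation and coding scheme):

```python
# Exponential effective SINR mapping (EESM): compress per-subcarrier SINRs
# into one effective SINR, SINR_eff = -beta * ln(mean(exp(-SINR_k / beta))),
# computed in the linear domain.
import numpy as np

def eesm(sinr_db, beta):
    """Map per-subcarrier SINRs (dB) to a single effective SINR (dB)."""
    sinr_lin = 10 ** (np.asarray(sinr_db) / 10)
    eff_lin = -beta * np.log(np.mean(np.exp(-sinr_lin / beta)))
    return 10 * np.log10(eff_lin)

print(eesm([8.0, 10.0, 12.0, 3.0], beta=5.0))  # weak subcarriers dominate
```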
511 Non-Interactive XOR Quantum Oblivious Transfer: Optimal Protocols and Their Experimental Implementations
Authors: Lara Stroh, Nikola Horová, Robert Stárek, Ittoop V. Puthoor, Michal Mičuda, Miloslav Dušek, Erika Andersson
Abstract:
Oblivious transfer (OT) is an important cryptographic primitive. Any multi-party computation can be realised with OT as a building block. XOR oblivious transfer (XOT) is a variant where the sender Alice has two bits, and a receiver, Bob, obtains either the first bit, the second bit, or their XOR. Bob should not learn anything more than this, and Alice should not learn what Bob has learned. Perfect quantum OT with information-theoretic security is known to be impossible. We determine the smallest possible cheating probabilities for unrestricted dishonest parties in non-interactive quantum XOT protocols using symmetric pure states and present an optimal protocol which outperforms classical protocols. We also "reverse" this protocol so that Bob becomes the sender of a quantum state and Alice the receiver who measures it while still implementing oblivious transfer from Alice to Bob. Cheating probabilities for both parties stay the same as for the unreversed protocol. We optically implemented both the unreversed and the reversed protocols and cheating strategies, noting that the reversed protocol is easier to implement.
Keywords: oblivious transfer, quantum protocol, cryptography, XOR
510 The Failure and Energy Mechanism of Rock-Like Material with Single Flaw
Authors: Yu Chen
Abstract:
This paper investigates the influence of a flaw on the failure process of rock-like material under uniaxial compression. In the laboratory, uniaxial compression tests of intact specimens and a series of specimens containing a single flaw were conducted. The inclination angles of the flaws include 0°, 15°, 30°, 45°, 60°, 75° and 90°. Based on the laboratory tests, the corresponding numerical simulation models were built and loaded in PFC2D. After analysing the crack initiation and failure modes, deformation field, and energy mechanism for both the laboratory tests and the numerical simulation, it can be concluded that the influence of a flaw on the failure process is determined by its inclination. The characteristic stresses basically increase with rising flaw angle. Tensile cracks develop from gentle flaws (α ≤ 30°), and shear cracks develop from the other flaws. The propagation of cracks changes during the failure process, and the failure mode of a specimen corresponds to the orientation of its flaw. A flaw has a significant influence on the transverse deformation field at the middle of the specimen, except for the 75° and 90° flaw samples. The input energy, strain energy and dissipation energy of the specimens show approximately increasing trends with rising flaw angle, and large differences are present in the energy distribution.
Keywords: failure pattern, particle deformation field, energy mechanism, PFC
509 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data
Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin
Abstract:
The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of a time series, based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detecting a gradual change in mean. The algorithm is then applied and tested on randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test
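For context, a minimal sketch of the standard one-sided CUSUM detector that the MCUSUM is compared against; the allowance k, threshold h, and the synthetic drift are illustrative choices:

```python
# One-sided CUSUM for an upward mean shift: accumulate exceedances over the
# target mean minus an allowance k, and alarm when the sum crosses h.
import numpy as np

def cusum(x, target, k, h):
    """Return index of first alarm, or -1. k = allowance, h = threshold."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target - k))
        if s > h:
            return i
    return -1

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200),                       # in control
                       rng.normal(0, 1, 200) + np.linspace(0, 2, 200)])  # gradual drift
print("alarm at index:", cusum(data, target=0.0, k=0.5, h=5.0))
```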
508 General Time-Dependent Sequenced Route Queries in Road Networks
Authors: Mohammad Hossein Ahmadi, Vahid Haghighatdoost
Abstract:
Spatial databases have been an active area of research over the years. In this paper, we study how to answer general time-dependent sequenced route queries. Given the origin and destination of a user over a time-dependent road network graph, an ordered list of categories of interest and a departure time interval, our goal is to find the minimum travel time path, along with the best departure time, that minimizes the total travel time from the source location to the given destination, passing through a sequence of points of interest belonging to each of the specified categories of interest. The challenge of this problem is the added complexity over optimal sequenced route queries: first, the road network is time dependent, and second, the user defines a departure time interval instead of one single departure time instance. For processing general time-dependent sequenced route queries, we propose two solutions, the Discrete-Time and Continuous-Time Sequenced Route approaches, finding approximate and exact solutions, respectively. Our proposed approaches traverse the road network based on the A*-search paradigm, equipped with an efficient heuristic function for shrinking the search space. Extensive experiments are conducted to verify the efficiency of our proposed approaches.
Keywords: trip planning, time dependent, sequenced route query, road networks
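As background, here is a minimal sketch of the core primitive behind such queries: shortest-path search on a graph whose edge travel times depend on the departure time. This is a plain time-dependent Dijkstra on a toy graph, not the paper's sequenced-category A* with its heuristic:

```python
# Time-dependent shortest path: each edge carries a travel-time function of
# the departure time at its tail node (FIFO assumption).
import heapq

# graph[u] = list of (v, travel_time_fn) with travel_time_fn(departure_time)
graph = {
    "A": [("B", lambda t: 5 + (2 if 8 <= t < 10 else 0))],  # rush-hour penalty
    "B": [("C", lambda t: 3)],
    "C": [],
}

def td_dijkstra(source, target, depart):
    best = {source: depart}
    heap = [(depart, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t - depart                   # total travel time
        if t > best.get(u, float("inf")):
            continue
        for v, tt in graph[u]:
            arrival = t + tt(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None

print("A->C leaving at t=8: ", td_dijkstra("A", "C", 8))   # hits rush hour
print("A->C leaving at t=11:", td_dijkstra("A", "C", 11))  # avoids it
```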
507 Information Communication Technology Based Road Traffic Accidents’ Identification, and Related Smart Solution Utilizing Big Data
Authors: Ghulam Haider Haidaree, Nsenda Lukumwena
Abstract:
Today the world of research enjoys abundant data, available in virtually any field: technology, science, business, politics, etc. This is commonly referred to as big data. It offers a great deal of precision and accuracy, supportive of an in-depth look at any decision-making process. When and if well used, big data affords its users the opportunity to produce substantially well-supported and good results. This paper leans extensively on big data to investigate possible smart solutions to urban mobility and related issues, namely road traffic accidents and their casualties and fatalities, based on multiple factors, including age, gender, location of occurrence, etc. Multiple technologies were used in combination to produce an Information Communication Technology (ICT) based solution with embedded technology. Those technologies principally include Geographic Information Systems (GIS), the Orange data mining software, and Bayesian statistics, to name a few. The study uses the Leeds 2016 accident data to illustrate the thinking process and extracts from it a model that can be tested, evaluated, and replicated. The authors optimistically believe that the proposed model will significantly and smartly help to flatten the curve of road traffic accidents in fast-growing population densities, which considerably increase motor-based mobility.
Keywords: accident factors, geographic information system, information communication technology, mobility
506 Advanced Mechatronic Design of Robot Manipulator Using Hardware-In-The-Loop Simulation
Authors: Reza Karami, Ali Akbar Ebrahimi
Abstract:
This paper discusses concurrent engineering of robot manipulators, based on the Holistic Concurrent Design (HCD) methodology and by using a hardware-in-the-loop simulation platform. The methodology allows for considering numerous design variables with different natures concurrently. It redefines the ultimate goal of design based on the notion of satisfaction, resulting in the simplification of the multi-objective constrained optimization process. It also formalizes the effect of designer’s subjective attitude in the process. To enhance modeling efficiency for both computation and accuracy, a hardware-in-the-loop simulation platform is used, which involves physical joint modules and the control unit in addition to the software modules. This platform is implemented in the HCD design architecture to reliably evaluate the design attributes and performance super criterion during the design process. The resulting overall architecture is applied to redesigning kinematic, dynamic and control parameters of an industrial robot manipulator.
Keywords: concurrent engineering, hardware-in-the-loop simulation, robot manipulator, multidisciplinary systems, mechatronics
505 HPPDFIM-HD: Transaction Distortion and Connected Perturbation Approach for Hierarchical Privacy Preserving Distributed Frequent Itemset Mining over Horizontally-Partitioned Dataset
Authors: Fuad Ali Mohammed Al-Yarimi
Abstract:
Many algorithms have been proposed to provide privacy preservation in data mining. These protocols are based on two main approaches: the perturbation approach and the cryptographic approach. The first is based on perturbation of the valuable information, while the second uses cryptographic techniques. The perturbation approach is much more efficient but with reduced accuracy, while the cryptographic approach can provide solutions with perfect accuracy. However, the cryptographic approach is a much slower method and requires considerable computation and communication overhead. In this paper, a new scalable protocol is proposed which combines the advantages of perturbation and distortion with the cryptographic approach to perform privacy-preserving distributed frequent itemset mining on horizontally distributed data. Both the privacy and performance characteristics of the proposed protocol are studied empirically.
Keywords: anonymity data, data mining, distributed frequent itemset mining, gaussian perturbation, perturbation approach, privacy preserving data mining
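A minimal sketch of the Gaussian perturbation idea from the keywords, on synthetic values rather than the HPPDFIM-HD protocol itself:

```python
# Gaussian perturbation: add zero-mean noise to sensitive values so that
# aggregates (e.g. the mean) survive while individual records are masked.
import numpy as np

rng = np.random.default_rng(42)
true_values = rng.integers(1, 100, size=10_000).astype(float)  # sensitive data
sigma = 15.0                                                   # assumed noise scale

perturbed = true_values + rng.normal(0.0, sigma, size=true_values.size)
print("true mean:", round(true_values.mean(), 2),
      "| perturbed mean:", round(perturbed.mean(), 2))
```

Larger sigma gives stronger masking of individual records at the cost of noisier aggregates, which is the efficiency-versus-accuracy trade-off the abstract describes.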
504 Optimality of Shapley Value Mechanism under Sybil Strategies
Authors: Bruno Mazorra Roig
Abstract:
In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies, where agents can create fake identities to manipulate outcomes, has not yet been studied. In this paper, we delve into the intricacies of different cost-sharing mechanisms proposed in the literature, highlighting their non-Sybil-resistant nature. Furthermore, we prove that under mild conditions, a Sybil-proof cost-sharing mechanism for public excludable goods is at least (n/2 + 1)-approximate. This finding reveals an exponential increase in the worst-case social cost in environments where agents are restricted from using Sybil strategies. We introduce the concept of Sybil Welfare Invariant mechanisms, where a mechanism maintains its worst-case welfare under Sybil strategies for every set of prior beliefs with full support, even when the mechanism is not Sybil-proof. Finally, we prove that the Shapley value mechanism for public excludable goods holds this property, and so deduce that the worst-case social cost of this mechanism is the nth harmonic number H_n under the equilibrium of the game with Sybil strategies, matching the worst-case social cost bound for cost-sharing mechanisms. This finding carries important implications for decentralized autonomous organizations (DAOs), indicating that they are capable of funding public excludable goods efficiently, even when the total number of agents is unknown.
Keywords: game theory, mechanism design, cost sharing, false-name proofness
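A minimal sketch of the Shapley value (Moulin-style) mechanism for a single public excludable good, with assumed valuations; the iterative equal-share offer shown here is the standard textbook form of this mechanism, not code from the paper:

```python
# Shapley value mechanism for one public excludable good of cost C: offer
# each remaining agent an equal share C/k, drop agents whose value is below
# their share, and repeat until every remaining agent accepts.
def shapley_mechanism(values, C):
    served = list(range(len(values)))
    while served:
        share = C / len(served)
        keep = [i for i in served if values[i] >= share]
        if keep == served:                 # everyone accepts the equal split
            return served, share
        served = keep
    return [], None

values = [0.2, 0.6, 1.5, 3.0]              # assumed private valuations
winners, price = shapley_mechanism(values, C=2.0)
print("served agents:", winners, "price each:", price)
```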
503 Design of High Sensitivity Transceiver for WSN
Authors: A. Anitha, M. Aishwariya
Abstract:
The realization of truly ubiquitous wireless sensor networks (WSN) demands ultra-low-power wireless communication capability. Because the radio transceiver in a wireless sensor node consumes more power than the computation part, it is necessary to reduce its power consumption. Hence, a low-power transceiver is designed and implemented in a 120 nm CMOS technology for wireless sensor nodes. The power consumption of the transceiver is reduced while the sensitivity is still maintained. The transceiver combines blocks including a differential oscillator, mixer, envelope detector, power amplifiers, and LNA. RF signal modulation and demodulation are carried out by the On-Off keying method at 2.4 GHz, known as the ISM band. The transmitter demonstrates an output power of 2.075 mW while consuming a supply voltage in the range of 1.2 V-5.0 V. A comparison of the LNA and power amplifier is done to obtain an amplifier that produces a high gain of 1.608 dB at the receiver, suitable for producing the desired sensitivity. A multistage RF amplifier is used to improve the gain at the receiver side. The power dissipation of the circuit is in the range of 0.183-0.323 mW. The receiver achieves a sensitivity of about -95 dBm with a data rate of 1 Mbps.
Keywords: CMOS, envelope detector, ISM band, LNA, low power electronics, PA, wireless transceiver
502 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits
Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.
Abstract:
With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up. But even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as present works on Quantum Machine Learning (QML) promise less memory consumption and reduced model parameters. But it is challenging to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work aims to explore Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning, with experience replay and the target network.
Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme
501 Network Connectivity Knowledge Graph Using Dwave Quantum Hybrid Solvers
Authors: Nivedha Rajaram
Abstract:
Hybrid quantum solvers have recently been given prime focus by industrial applications in the computational problem-solving domain. D’Wave quantum computers are one such paragon of systems built using the quantum annealing mechanism. Discrete Quadratic Models are a hybrid quantum computing model class supplied by the D’Wave Ocean SDK, a real-time software platform for hybrid quantum solvers. These hybrid quantum solvers can be employed to solve classic problems. One such problem that we consider in this paper is finding a network connectivity knowledge hub in a huge network of systems. Using this quantum solver, we try to find the prime system hub, which acts as a supreme connection point for the set of connected computers in a large network. This paper establishes an innovative problem approach to generate a connectivity system hub plot for a set of systems using the D’Wave Ocean SDK hybrid quantum solvers.
Keywords: quantum computing, hybrid quantum solver, DWave annealing, network knowledge graph
500 Solution of Singularly Perturbed Differential Difference Equations Using Liouville Green Transformation
Authors: Y. N. Reddy
Abstract:
The class of differential-difference equations which has characteristics of both classes, i.e., delay/advance and singularly perturbed behaviour, is known as singularly perturbed differential-difference equations. The expressions 'positive shift' and 'negative shift' are also used for 'advance' and 'delay', respectively. In general, an ordinary differential equation in which the highest order derivative is multiplied by a small positive parameter and which contains at least one delay/advance term is known as a singularly perturbed differential-difference equation. Singularly perturbed differential-difference equations arise in the modelling of various practical phenomena in bioscience, engineering and control theory: specifically in variational problems, in describing the human pupil-light reflex, in a variety of models for physiological processes or diseases, and in first exit time problems in modelling the expected time for the generation of action potentials in nerve cells by random synaptic inputs in dendrites. In this paper, we envisage the use of the Liouville Green Transformation to find the solution of singularly perturbed differential-difference equations. First, using Taylor series, the given singularly perturbed differential-difference equation is approximated by an asymptotically equivalent singular perturbation problem. Then the Liouville Green Transformation is applied to get the solution. Several model examples are solved, and the results are compared with other methods. It is observed that the present method gives better approximate solutions.
Keywords: difference equations, differential equations, singular perturbations, boundary layer
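As a sketch of the Taylor-series step, a small delay δ in a generic second-order problem can be expanded so that the delay term disappears; the equation form below is an illustration, not one of the paper's model examples:

```latex
% Generic illustration: for a problem such as
%   \varepsilon y''(x) + a(x)\, y'(x) + b(x)\, y(x - \delta) = f(x),
% expand the delayed argument for small \delta,
y(x - \delta) \approx y(x) - \delta\, y'(x) + \frac{\delta^2}{2}\, y''(x),
% which yields an asymptotically equivalent singular perturbation problem
% with no shift, to which the Liouville Green Transformation can be applied.
```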
499 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn Python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating the overheating and air pollution metrics modelled by EnergyPlus.
Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries
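A minimal sketch of metamodel formation on a synthetic stand-in for EnergyPlus outputs; it uses scikit-learn's MLPRegressor and SciPy's RBFInterpolator rather than PyBrain, so the libraries differ from the paper's:

```python
# Metamodelling: sample inputs, train a neural network and an RBF model on
# "simulator" outputs, and compare held-out error.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 3))   # e.g. insulation, glazing, shading
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=400)  # stand-in simulator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                  random_state=0).fit(X_tr, y_tr)
rbf = RBFInterpolator(X_tr, y_tr, smoothing=1e-3)

print("NN  RMSE:", np.sqrt(np.mean((nn.predict(X_te) - y_te) ** 2)))
print("RBF RMSE:", np.sqrt(np.mean((rbf(X_te) - y_te) ** 2)))
```

Once trained, either metamodel answers in microseconds what a full building physics simulation would take minutes to compute, which is what makes large sensitivity analyses tractable.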
498 Measuring Text-Based Semantics Relatedness Using WordNet
Authors: Madiha Khan, Sidrah Ramzan, Seemab Khan, Shahzad Hassan, Kamran Saeed
Abstract:
Measuring semantic similarity between texts means calculating the semantic relatedness between texts using various techniques. Our web application (Measuring Relatedness of Concepts - MRC) allows the user to input two text corpora and get the semantic similarity percentage between them using WordNet. Our application goes through five stages for the computation of semantic relatedness. Those stages are: Preprocessing (extracts keywords from content), Feature Extraction (classification of words into parts of speech), Synonym Extraction (retrieves synonyms for each keyword), Measuring Similarity (using keywords and synonyms, similarity is measured) and Visualization (graphical representation of the similarity measure). Hence the user can measure similarity on the basis of features as well. The end result is a percentage score and the word(s) which form the basis of similarity between both texts, with the use of different tools on the same platform. For future work, we look forward to a Web-as-a-live-corpus application that provides a simpler and more user-friendly tool to compare documents and extract useful information.
Keywords: Graphviz representation, semantic relatedness, similarity measurement, WordNet similarity
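For reference, a minimal sketch of WordNet-based relatedness using NLTK's Wu-Palmer measure; this is one common WordNet similarity, not necessarily the exact measure MRC uses:

```python
# Word-level relatedness via WordNet: take the best Wu-Palmer similarity
# across all synset pairs of the two words.
# Requires: nltk.download('wordnet') on first use.
from nltk.corpus import wordnet as wn

def max_wup(word_a, word_b):
    """Best Wu-Palmer similarity across all synset pairs, or 0.0."""
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(word_a)
              for s2 in wn.synsets(word_b)]
    return max(scores, default=0.0)

print(max_wup("car", "automobile"))   # ~1.0, shared synset
print(max_wup("car", "banana"))       # much lower
```

Averaging such word-level scores over the extracted keywords of two corpora yields a text-level similarity percentage of the kind the application reports.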
497 Comparative Study of Wear and Friction Behavior of Tricalcium Phosphate-Fluorapatite Bioceramic
Authors: Rym Taktak, Achwek Elghazel, Jamel Bouaziz
Abstract:
In the present work, we explored the tribological behavior of tricalcium phosphate-fluorapatite (β-Tcp-Fap) bioceramic, which has attracted considerable attention for orthopedic and dental applications. The approximate Fap-βTcp compositions were, respectively, {13.26 wt%, 86.74 wt%}, {19.9 wt%, 80.1 wt%}, {26.52 wt%, 73.48 wt%}, {33.16 wt%, 66.84 wt%} and {40 wt%, 60 wt%}. The effects of fluorapatite additives on friction and wear behavior were studied and discussed. The wear test was conducted using a pin-on-disk tribometer at room temperature under dry conditions, using a constant sliding speed of 0.063 m/s and three loads of 3, 5 and 8 N. The wear rate and friction coefficient of β-Tcp with different additive amounts were compared. Alumina ball specimens were used as the pin, and flat-surface β-Tcp-Fap specimens as the antagonist counterface. The results show a huge difference between the wear rate of the β-TCP samples and the other β-TCP-Fap composites for all applied normal forces. This result shows the beneficial effect of fluorapatite on the tribological behavior of β-TCP. Moreover, we note that the β-Tcp-26% Fap specimens exhibit, under dry conditions, a lower friction coefficient and a smaller wear rate than the other biocomposites. Thereby, the friction and wear behavior is influenced by the addition of fluorapatite, the applied normal force, and the sliding velocity. To extend the understanding of the wear process, the surface topography of the β-Tcp-26% Fap specimens and the wear track obtained during the wear tests were studied using a surface profilometer, optical microscopy, and scanning electron microscopy.
Keywords: alumina, bioceramic, friction and wear test, tricalcium phosphate
496 Improving the Security of Internet of Things Using Encryption Algorithms
Authors: Amirhossein Safi
Abstract:
The Internet of Things (IOT) is a kind of advanced information technology which has drawn society's attention. Sensors and stimulators are usually recognized as the smart devices of our environment. At the same time, IOT security brings up new issues. Internet connection and the possibility of interaction with smart devices cause those devices to become more involved in human life. Therefore, safety is a fundamental requirement in designing IOT. IOT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the span of IOT, the security of conveyed data is an essential factor for system security. The hybrid encryption technique is a new model that can be used in IOT. This type of encryption provides strong security with low computation. In this paper, we have proposed a hybrid encryption algorithm designed to reduce safety risks while enhancing encryption speed and lowering computational complexity. The purpose of this hybrid algorithm is information integrity, confidentiality, and non-repudiation in data exchange for IOT. Finally, the suggested encryption algorithm was simulated with MATLAB software, and its speed and safety efficiency were evaluated in comparison with a conventional encryption algorithm.
Keywords: internet of things, security, hybrid algorithm, privacy
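A minimal sketch of the general hybrid-encryption pattern (a symmetric session key wrapped with an asymmetric key), using the Python 'cryptography' package; this illustrates the concept, not the paper's proposed algorithm:

```python
# Hybrid encryption: a random AES session key encrypts the payload quickly,
# and RSA encrypts only the small session key.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"sensor reading: 21.7 C"
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)

ciphertext = AESGCM(session_key).encrypt(nonce, message, None)  # fast symmetric part
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)             # slow asymmetric part

# Receiver side: unwrap the session key, then decrypt the payload.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))
```

The split keeps the expensive asymmetric operation to a few hundred bytes regardless of payload size, which is why hybrid schemes suit constrained IOT devices.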
495 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table
Authors: David A. Swanson, Lucky M. Tedrow
Abstract:
Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to this research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which also equals the mean age at death in a life table) and the variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that a simple linear model provides reasonably accurate estimates of the variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to provide estimates of the variance in age at death for six countries, three of which have high e0 values and three of which have lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of the variance in age at death in a period life table can be calculated using this approach, which can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population
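A minimal sketch of fitting the simple linear model described, with assumed illustrative values rather than the World Bank or HMD life table data:

```python
# Predict variance in age at death from life expectancy at birth (e0) with a
# simple linear fit; variance typically falls as e0 rises, so a negative
# slope is expected.
import numpy as np

e0 = np.array([60.9, 65.0, 70.0, 75.0, 80.0, 85.59])        # life expectancy at birth
var = np.array([420.0, 350.0, 280.0, 215.0, 155.0, 100.0])  # assumed variances

slope, intercept = np.polyfit(e0, var, 1)    # V ~ intercept + slope * e0
print(f"V ~ {intercept:.1f} + ({slope:.2f}) * e0")
print("predicted variance at e0 = 78:", intercept + slope * 78)
```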