Search results for: approximate computing
792 Feature Extraction and Impact Analysis for Solid Mechanics Using Supervised Finite Element Analysis
Authors: Edward Schwalb, Matthias Dehmer, Michael Schlenkrich, Farzaneh Taslimi, Ketron Mitchell-Wynne, Horen Kuecuekyan
Abstract:
We present a generalized feature extraction approach for supporting Machine Learning (ML) algorithms which perform tasks similar to Finite-Element Analysis (FEA). We report results for estimating the Head Injury Categorization (HIC) of vehicle engine compartments across various impact scenarios. Our experiments demonstrate that models learned using features derived with a simple discretization approach provide a reasonable approximation of a full simulation. We observe that Decision Trees could be as effective as Neural Networks for the HIC task. The simplicity and performance of the learned Decision Trees could offer a multiple-order-of-magnitude improvement in speed and cost over full simulation in exchange for a reasonable approximation. When used as a complement to full simulation, the approach enables rapid approximate feedback to engineering teams before submission for full analysis. The approach produces mesh-independent features and is further agnostic of the assembly structure.
Keywords: mechanical design validation, FEA, supervised decision tree, convolutional neural network
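As a hedged illustration of the surrogate-modelling idea described above, the sketch below trains a decision-tree regressor on features from a coarse discretization of an impact scenario. The data, feature construction and HIC-like target are synthetic placeholders, not the authors' pipeline or dataset.

```python
# Minimal sketch (not the authors' pipeline): train a decision-tree surrogate
# on features derived from a coarse spatial discretization of an impact scenario.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_scenarios, n_cells = 500, 64          # hypothetical: 64 grid cells per engine compartment
X = rng.random((n_scenarios, n_cells))  # e.g., per-cell stiffness/clearance summaries
y = X[:, :8].sum(axis=1) * 100 + rng.normal(0, 5, n_scenarios)  # synthetic HIC-like target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, tree.predict(X_test)))
```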
Procedia PDF Downloads 139
791 Increasing Employee Productivity and Work Well-Being by Employing Affective Decision Support and a Knowledge-Based System
Authors: Loreta Kaklauskiene, Arturas Kaklauskas
Abstract:
This effective system for employee productivity and work well-being aims to maximise the work performance of personnel and boost well-being in offices. Affective computing, decision support, and knowledge-based systems were used in our research. The basis of this effective system is our European Patent application (No: EP 4 020 134 A1) and two Lithuanian patents (LT 6841, LT 6866). Our study examines ways to support efficient employee productivity and well-being by employing a mass-customised, personalised office environment. Efficient employee performance and well-being are managed by changing mass-customised office environment factors such as air pollution levels, humidity, temperature, data, information, knowledge, activities, lighting colours and intensity, scents, media, games, videos, music, and vibrations. These aspects of management generate a customised, adaptive environment for users, taking into account their emotional, affective, and physiological (MAP) states, which are measured and fed into the system. This research aims to develop an innovative method and system which would analyse, customise and manage a personalised office environment according to a specific user's MAP states in a cohesive manner. Various values of work spaces (e.g., employee utilitarian, hedonic, perceived values) are also established throughout this process, based on the measurements that describe MAP states and other aspects related to the office environment. The main contribution of our research is the development of a real-time mass-customised office environment to boost employee performance and well-being. Acknowledgment: This work was supported by Project No. 2020-1-LT01-KA203-078100 "Minimizing the influence of coronavirus in a built environment" (MICROBE) from the European Union's Erasmus+ programme.
Keywords: effective decision support and a knowledge-based system, human resource management, employee productivity and work well-being, affective computing
Procedia PDF Downloads 110
790 Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory
Authors: Mafarja Majdi, Salwani Abdullah, Najmeh S. Jaddi
Abstract:
One of the global combinatorial optimization problems in machine learning is feature selection. It is concerned with removing irrelevant, noisy, and redundant data while keeping the original meaning of the data. Attribute reduction in rough set theory is an important feature selection method. Since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs) which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy controlled great deluge algorithm to identify a good balance between local search and genetic search. In order to verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
Keywords: rough set theory, attribute reduction, fuzzy logic, memetic algorithms, record to record algorithm, great deluge algorithm
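A minimal sketch of the great deluge acceptance rule applied to attribute (feature-subset) reduction is given below. It illustrates the acceptance mechanism only, not the authors' fuzzy-controlled memetic algorithm; the `evaluate` function (smaller is better, e.g., subset size penalised by classification error on the reduct) is an assumed placeholder.

```python
import random

def great_deluge(all_attrs, evaluate, iterations=1000, decay=0.995):
    # Start from the full attribute set and try to shrink it.
    current = set(all_attrs)
    best, best_cost = set(current), evaluate(current)
    level = best_cost                               # initial "water level"
    for _ in range(iterations):
        candidate = set(current)
        attr = random.choice(list(all_attrs))       # toggle one attribute in/out of the subset
        candidate.symmetric_difference_update({attr})
        if not candidate:                           # never allow an empty reduct
            continue
        cost = evaluate(candidate)
        if cost <= level:                           # accept anything not worse than the level
            current = candidate
            if cost < best_cost:
                best, best_cost = set(candidate), cost
        level *= decay                              # lower the level to tighten acceptance over time
    return best, best_cost

# usage (placeholder evaluator): best_subset, cost = great_deluge(range(20), my_evaluate)
```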
Procedia PDF Downloads 454
789 Design and Development of an Application for the Evaluation of Personal Injury and Disability in Occupational and Forensic Medicine
Authors: Daniel Suárez, Jesús Tomas, Sandra Sendra, Sandra Viciano-Tudela, Luis Felipe Calle, Javier Urios, Jaime Lloret
Abstract:
Our study aims to develop a mobile phone tool for the assessment of bodily damage or the determination of the degree of disability. This is a field of action of legal medicine and insurance with obvious economic implications. Those people who have suffered an accident or bodily harm demand a quantification of it. The assessment of bodily harm or disability by the expert medical professional is not exempt from complexity. Sometimes it is difficult to quantify pain; other times, the doctor faces simulators or exaggerators, and on many occasions, it is difficult to remember the extensive tables of scales whose details are complex to remember and apply. We present a tool, as a mobile application, that allows entering the sociodemographic data of the patient as well as the characteristics of the accident suffered by the person. With these preliminary data and the bodily damage entered, an approximate calculation of the compensation that the injured party should receive can be made. One of the results of this study is that it allows calculating joint mobility angles without the need to use a goniometer.
Keywords: mobile tool, body damage, personal injury and disability, telemedicine
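As a hedged illustration of the angle computation mentioned at the end of the abstract, the helper below estimates a joint angle from three landmark points; it is a hypothetical sketch, not the app's actual implementation.

```python
# Estimate the angle at a joint from three landmark points (2D or 3D coordinates).
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

print(joint_angle((0, 1), (0, 0), (1, 0)))   # 90.0 for a right angle
```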
Procedia PDF Downloads 89
788 Evaluation of Cast-in-Situ Pile Condition Using Pile Integrity Test
Authors: Mohammad I. Hossain, Omar F. Hamim
Abstract:
This paper presents a case study on a pile integrity test for assessing the integrity of piles as well as the physical dimensions (e.g., cross-sectional area, length), continuity, and consistency of the pile materials. The recent boom in the socio-economic condition of Bangladesh has given rise to the building of high-rise commercial and residential infrastructures. The advantage of the pile integrity test lies in the fact that it is possible to get an approximate indication regarding the quality of the sub-structure before commencing the construction of the super-structure. This paper aims at providing a classification of cast-in-situ piles based on characteristic reflectograms obtained using the Sonic Integrity Testing program for the sub-soil condition of Narayanganj, Bangladesh. The piles have been classified as 'Pile Type-1', 'Pile Type-2', 'Pile Type-3', 'Pile Type-4', 'Pile Type-5' or 'Pile Type-6' from visual observations of the reflections from the stress waves generated by striking the pile head with a handheld hammer. With respect to construction quality and integrity, piles have been further classified into three distinct categories, i.e., satisfactory, may be satisfactory, and unsatisfactory.
Keywords: cast-in-situ piles, characteristic reflectograms, pile integrity test, sonic integrity testing program
Procedia PDF Downloads 117
787 Bilateral Thalamic Hypodense Lesions in Computed Tomography
Authors: Angelis P. Barlampas
Abstract:
Purpose of Learning Objective: This case depicts the need for cooperation between the emergency department and the radiologist to achieve the best diagnostic result for the patient. The clinical picture must correlate well with the radiology report, and when it does not, this is not necessarily someone's fault. Careful interpretation and good knowledge of the limitations, advantages and disadvantages of each imaging procedure are essential for the final diagnostic goal. Methods or Background: A patient was brought to the emergency department by his relatives. He was suddenly confused, and his mental status was altered. He had no history of mental illness and was otherwise healthy. A computed tomography scan without contrast was done, but it was unremarkable. Because of high clinical suspicion of probable neurologic disease, he was admitted to the hospital. Results or Findings: Another CT was done after 48 hours. It showed a hypodense region in both thalamic areas. Taking into account that the first CT was normal but the initial clinical picture of the patient was alerting of something wrong, the repeat CT exam is highly suggestive of a probable diagnosis of bilateral thalamic infarctions. Differential diagnosis: primary bilateral thalamic glioma, Wernicke encephalopathy, osmotic myelinolysis, Fabry disease, Wilson disease, Leigh disease, West Nile encephalitis, Creutzfeldt-Jakob disease, top of the basilar syndrome, deep venous thrombosis, mild to moderate cerebral hypotension, posterior reversible encephalopathy syndrome, neurofibromatosis type 1. Conclusion: As with any imaging procedure, CT has its limitations. An acute ischemic attack cannot be depicted on CT immediately; a period of 24 to 48 hours has to elapse before any abnormality can be seen. So, despite the fact that there are no obvious findings of an ischemic episode, like paresis or hemiparesis, one must be careful not to attribute the patient's clinical signs to other conditions, such as toxic effects, metabolic disorders, psychiatric symptoms, etc. Further investigation with MRI or at least a repeated CT must be done.
Keywords: CNS, CT, thalamus, emergency department
Procedia PDF Downloads 121
786 A New Fuzzy Fractional Order Model of Transmission of Covid-19 With Quarantine Class
Authors: Asma Hanif, A. I. K. Butt, Shabir Ahmad, Rahim Ud Din, Mustafa Inc
Abstract:
This paper is devoted to a study of the fuzzy fractional mathematical model reviewing the transmission dynamics of the infectious disease Covid-19. The proposed dynamical model consists of susceptible, exposed, symptomatic, asymptomatic, quarantine, hospitalized and recovered compartments. In this study, we deal with the fuzzy fractional model defined in Caputo's sense. We show the positivity of the state variables, i.e., that all the state variables representing the different compartments of the model are positive. Using Gronwall's inequality, we show that the solution of the model is bounded. Using the notion of the next-generation matrix, we find the basic reproduction number of the model. We demonstrate the local and global stability of the equilibrium point by using the concept of Castillo-Chavez and Lyapunov theory with the LaSalle invariance principle, respectively. We present results that reveal the existence and uniqueness of the solution of the considered model through the fixed point theorems of Schauder and Banach. Using the fuzzy hybrid Laplace method, we acquire the approximate solution of the proposed model. The results are graphically presented via MATLAB-17.
Keywords: Caputo fractional derivative, existence and uniqueness, Gronwall inequality, Lyapunov theory
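For reference, the Caputo fractional derivative of order $0<\alpha<1$ that underlies such models is commonly defined as

$$ {}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} f'(s)\, ds, $$

where $\Gamma$ is the gamma function; the paper's specific fuzzy formulation builds on this standard definition.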
Procedia PDF Downloads 105
785 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performances, especially in terms of accuracy, fall short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. The similar fragments of beginning strokes are grouped together to create the Beginning cluster, and similarly, the ending strokes are grouped to create the Ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers using the Beginning and Ending codebooks separately. Finally, the Ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
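A hedged sketch of the codebook representation and comparison step is given below: fragments (assumed to be fixed-length descriptor vectors from beginning or ending strokes) are clustered into a codebook, each writing is summarized as a histogram of codebook-pattern occurrences, and two writings are compared with a chi-square distance. This is an illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(fragments, n_codes=64):
    # cluster similar fragments; the cluster centres form the codebook
    return KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit(fragments)

def writing_histogram(codebook, fragments):
    # probability of occurrence of each codebook pattern in one writing sample
    counts = np.bincount(codebook.predict(fragments), minlength=codebook.n_clusters)
    return counts / counts.sum()

def chi_square_distance(p, q, eps=1e-9):
    # smaller distance -> more likely the same writer
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```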
Procedia PDF Downloads 512
784 Application of Decline Curve Analysis to Depleted Wells in a Cluster and then Predicting the Performance of Currently Flowing Wells
Authors: Satish Kumar Pappu
Abstract:
The most common questions which are frequently asked in the oil and gas industry are how much the current production rate from a particular well is and what the approximate predicted life of that well is. These questions can be answered through the forecasting of important realistic data like flowing tubing head pressures (FTHP) and production decline curves, which are used to predict the future performance of a well in a reservoir. With the advent of directional drilling, cluster well drilling has gained much importance and has, in fact, revolutionized the whole world of the oil and gas industry. An oil or gas reservoir can generally be described as a collection of several overlying, producing and potentially producing sands into which a number of wells are drilled depending upon the in-place volume and several other important factors, both technical and economical in nature; in some sands only one well is drilled, and in some, more than one. The aim of this study is to derive important information from the data collected over a period of time at regular intervals on a depleted well in a reservoir sand and apply this information to predict the performance of other wells in that reservoir sand. Depleted wells are the most common observations when an oil or gas field is being visited, which makes the application of this study more realistic in nature.
Keywords: decline curve analysis, estimation of future gas reserves, reservoir sands, reservoir risk profile
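One common decline-curve-analysis formulation, shown below as a hedged sketch with made-up data, is the Arps hyperbolic relation q(t) = qi / (1 + b·Di·t)^(1/b); the study does not state that this exact model was used, so treat the function, parameters and data as illustrative assumptions.

```python
# Sketch of an Arps hyperbolic decline fit (illustrative data, not the study's wells).
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    # qi: initial rate, di: initial decline rate, b: hyperbolic exponent
    return qi / np.power(1.0 + b * di * t, 1.0 / b)

t = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)              # months
q = np.array([1000, 803, 666, 566, 490, 383, 312], dtype=float)  # production rate
(qi, di, b), _ = curve_fit(arps_hyperbolic, t, q, p0=[1000, 0.08, 0.8],
                           bounds=([1, 1e-4, 0.05], [5000, 1.0, 2.0]))
print(f"qi={qi:.0f}, Di={di:.3f}/month, b={b:.2f}")
```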
Procedia PDF Downloads 437
783 Validating Thermal Performance of Existing Wall Assemblies Using In-Situ Measurements
Authors: Shibei Huang
Abstract:
In deep energy retrofits, the thermal performance of existing building envelopes is often difficult to determine with a high level of accuracy. For older buildings, the records of existing assemblies are often incomplete or inaccurate. To obtain greater baseline performance accuracy for energy models, in-field measurement tools can be used to gather data on the thermal performance of the existing assemblies. For a known assembly, these field measurements assist in validating the U-factor estimates. If the field-measured U-factor consistently varies from the calculated prediction, those measurements prompt further study. For an unknown assembly, successful field measurements can provide an approximate U-factor evaluation, validate assumptions, or identify anomalies requiring further investigation. Using case studies, this presentation will focus on non-destructive methods utilizing a set of various field tools to validate the baseline U-factors for a range of existing buildings with various wall assemblies. The lessons learned cover what can be achieved, the limitations of these approaches and tools, and ideas for improving the validity of measurements. Key factors include the weather conditions, the interior conditions, the thermal mass of the measured assemblies, and the thermal profiles of the assemblies in question.
Keywords: existing building, sensor, thermal analysis, retrofit
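One common way to reduce such in-situ readings to a U-factor, stated here as an assumption since the presentation does not name its exact method, is the averaging approach, in which the measured heat flux and the interior-exterior temperature difference are accumulated over the monitoring period:

$$ U \approx \frac{\sum_i q_i}{\sum_i \left(T_{\mathrm{in},i} - T_{\mathrm{out},i}\right)}, $$

where $q_i$ is the heat flux through the assembly at sample $i$ and the temperatures are the corresponding interior and exterior readings.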
Procedia PDF Downloads 63
782 Mixed Integer Programming for Multi-Tier Rebate with Discontinuous Cost Function
Authors: Y. Long, L. Liu, K. V. Branin
Abstract:
One challenge faced by procurement decision-makers during the acquisition process is how to compare similar products from different suppliers and allocate orders among different products or services. This work focuses on allocating orders among multiple suppliers considering rebate. The objective function is to minimize the total acquisition cost, including purchasing cost and rebate benefit. Rebate benefit is complex and difficult to estimate at the ordering step. Rebate rules vary for different suppliers and usually change over time. In this work, we developed a system to collect and standardize the rebate policies and developed two-stage optimization models for order allocation. Rebate policies with multiple tiers are considered in the modeling. The discontinuous cost function of rebate benefit is formulated for different scenarios. A piecewise linear function is used to approximate the discontinuous cost function of rebate benefit, and a Mixed Integer Programming (MIP) model is built for the order allocation problem with multi-tier rebate. A case study is presented, and it shows that our optimization model can reduce the total acquisition cost by considering rebate rules.
Keywords: discontinuous cost function, mixed integer programming, optimization, procurement, rebate
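A hedged sketch of a tier-selection MIP for an all-units rebate schedule is shown below using PuLP; the tiers, prices and demand are made-up example data, and this is an illustrative single-supplier formulation rather than the paper's two-stage model.

```python
import pulp

demand = 900
unit_price = 10.0
tiers = [  # (lower bound, upper bound, rebate rate) -- rate applies to all units in the chosen tier
    (0,    499,  0.00),
    (500,  999,  0.05),
    (1000, 2000, 0.10),
]

prob = pulp.LpProblem("multi_tier_rebate", pulp.LpMinimize)
z = [pulp.LpVariable(f"tier_{k}", cat="Binary") for k in range(len(tiers))]
x = [pulp.LpVariable(f"qty_{k}", lowBound=0) for k in range(len(tiers))]

prob += pulp.lpSum(unit_price * (1 - r) * x[k] for k, (_, _, r) in enumerate(tiers))  # net cost
prob += pulp.lpSum(z) == 1                      # exactly one tier is active
for k, (lo, hi, _) in enumerate(tiers):
    prob += x[k] >= lo * z[k]                   # quantity stays inside the active tier
    prob += x[k] <= hi * z[k]
prob += pulp.lpSum(x) == demand                 # meet the required order quantity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("chosen tier:", [k for k in range(len(tiers)) if z[k].value() == 1])
print("total cost :", pulp.value(prob.objective))
```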
Procedia PDF Downloads 259
781 On Adaptive and Auto-Configurable Apps
Authors: Prisa Damrongsiri, Kittinan Pongpianskul, Mario Kubek, Herwig Unger
Abstract:
Apps are today the most important means of adapting mobile phones and computers to fulfill the special needs of their users. Location- and context-sensitive programs are hereby the key to supporting the interaction of the user with his/her environment and also to avoiding an overload of dispensable information. The contribution shows how a trusted, secure and truly bi-directional communication and interaction among users and their environment can be established and used, e.g., in the field of home automation.
Keywords: apps, context-sensitive, location-sensitive, self-configuration, mobile computing, smart home
Procedia PDF Downloads 396
780 The Role of Mass Sport Guidance in the Health Service Industry of China
Authors: Qiu Jian-Rong, Li Qing-Hui, Zhan Dong, Zhang Lei
Abstract:
Facing the demands of economic restructuring and the risk of socio-economic stagnation due to the ageing of the population, the health service industry will play a very important role in the future structure of industry. During this process, the orientation of Chinese sports medicine, its joining with preventive medicine, and its integration with data banks and cloud computing will be involved.
Keywords: China, the health service industry, mass sport, data bank
Procedia PDF Downloads 628
779 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using the variational Bayesian HMM with a Dirichlet process mixture (DPM) of the Gaussian-Wishart emission model (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of hidden variables and model parameters for the proposed model based on training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process of extracting appropriate spatial-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we have conducted experiments that evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
Procedia PDF Downloads 353
778 Modeling Anisotropic Damage Algorithms of Metallic Structures
Authors: Bahar Ayhan
Abstract:
The present paper is concerned with the numerical modeling of the inelastic behavior of anisotropically damaged ductile materials, which is based on a generalized macroscopic theory within the framework of continuum damage mechanics. The kinematic decomposition of the strain rates into elastic, plastic and damage parts is the basis for accomplishing the structure of the continuum theory. The evolution of the damage strain rate tensor is detailed with the consideration of anisotropic effects. Helmholtz free energy functions are constructed separately for the elastic and inelastic behaviors in order to be able to address the plastic and damage processes. Additionally, the constitutive structure, which is based on the standard dissipative material approach, is elaborated with the stress tensor, a yield criterion for plasticity and a fracture criterion for damage, besides the potential functions of each inelastic phenomenon. The finite element method is used to approximate the linearized variational problem. Stress and strain outcomes are solved by using a numerical integration algorithm based on an operator split methodology with separate plastic and damage (multiplier) variables. Numerical simulations are proposed in order to demonstrate the efficiency of the formulation by comparing the examples in the literature.
Keywords: anisotropic damage, finite element method, plasticity, coupling
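The kinematic decomposition referred to above can be written, in rate form, as

$$ \dot{\boldsymbol{\varepsilon}} = \dot{\boldsymbol{\varepsilon}}^{e} + \dot{\boldsymbol{\varepsilon}}^{p} + \dot{\boldsymbol{\varepsilon}}^{d}, $$

where the superscripts denote the elastic, plastic and damage parts of the strain rate; the specific evolution equations for the damage part are developed in the paper.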
Procedia PDF Downloads 206
777 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud
Authors: N. Nalini, Bhanu Prakash Gopularam
Abstract:
The term data security refers to the degree of resistance or protection given to information from unintended or unauthorized access. The core principles of information security are confidentiality, integrity and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS and PaaS services. With cloud adoption, confidential enterprise data are moved from organization premises to an untrusted public network, and due to this, the attack surface has increased manifold. Several cloud computing platforms like OpenStack, Eucalyptus and Amazon EC2 offer users the ability to build and configure public, hybrid and private clouds. While traditional encryption based on PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (also referred to as ID-PKC) overcomes this problem by using publicly identifiable information for generating the keys and works well with decentralized systems. The users can exchange information securely without having to manage any trust information. Another advantage is that access control (role-based access control policy) information can be embedded into data, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for public key infrastructure for auto services. In this paper, we explain the OpenStack security architecture and evaluate the PKI infrastructure piece for data confidentiality. We provide a method to integrate ID-PKC schemes for securing data while in transit and stored, and explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1636.3 standard and secure communication to other cloud services.
Keywords: data confidentiality, identity based cryptography, secure communication, OpenStack Keystone, token scoping
Procedia PDF Downloads 384
776 Developing a Framework for Open Source Software Adoption in a Higher Education Institution in Uganda. A case of Kyambogo University
Authors: Kafeero Frank
Abstract:
This study aimed at developing a framework for open source software adoption in an institution of higher learning in Uganda, with the case of KIU as a study area. There were mainly four research questions, based on individual staff interaction with the open source software forum, perceived FOSS characteristics, organizational characteristics and external characteristics as factors that affect open source software adoption. The researcher used a causal-correlational research design to study the effects of these variables on open source software adoption. A quantitative approach was used in this study with a self-administered questionnaire on a purposively and randomly sampled group of university ICT staff. The resultant data were analyzed using means, correlation coefficients and multivariate multiple regression analysis as statistical tools. The study reveals that individual staff interaction with the open source software forum and perceived FOSS characteristics were the primary factors that significantly affect FOSS adoption, while organizational and external factors were secondary, with no significant effect but a significant correlation to open source software adoption. It was concluded that for effective open source software adoption to occur, there must be more effort on the primary factors, with subsequent reinforcement of the secondary factors to fulfill the primary factors and the adoption of open source software. Lastly, recommendations were made in line with the conclusions for coming up with a Kyambogo University framework for open source software adoption in institutions of higher learning. Areas of further research recommended include: stakeholders' analysis of open source software adoption in Uganda; challenges and the way forward; evaluation of the Kyambogo University framework for open source software adoption in institutions of higher learning; framework development for cloud computing adoption in Ugandan universities; and a framework for FOSS development in the Ugandan IT industry.
Keywords: open source software, organisational characteristics, external characteristics, cloud computing adoption
Procedia PDF Downloads 72
775 A Study of Population Growth Models and Future Population of India
Authors: Sheena K. J., Jyoti Badge, Sayed Mohammed Zeeshan
Abstract:
This is a comparative study of exponential and logistic population growth models in India. India is the second most populous country in the world, just behind China, and is going to be in first place by next year. The Indian population has grown at a remarkably higher rate than that of other countries over the past 20 years. Many scientists and demographers have formulated various models of population growth in order to study and predict the future population. Some of the models are the Fibonacci population growth model, the exponential growth model, the logistic growth model, the Lotka-Volterra model, etc. These models have been effective in the past, to an extent, in predicting the population. However, it is essential to have a detailed comparative study between the population models to come out with a more accurate one. Having said that, this research study helps to analyze and compare the two population models under consideration - exponential and logistic growth models - thereby identifying the most effective one. Using the census data of 2011, the approximate populations for 2016 to 2031 are calculated for 20 Indian states using both models and compared against the actual population. On comparing the results of both models, it is found that the logistic population model is more accurate than the exponential model, and using this model, we can predict the future population in a more effective way. This will give an insight to researchers about the effective models of population and how effective these population models are in predicting the future population.
Keywords: population growth, population models, exponential model, logistic model, fibonacci model, lotka-volterra model, future population prediction, demographers
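A minimal sketch of the two models under comparison is given below; the parameter values are illustrative placeholders, not the study's fitted values for the Indian states.

```python
import numpy as np

def exponential(t, p0, r):
    return p0 * np.exp(r * t)                # unconstrained growth

def logistic(t, p0, r, k):
    a = (k - p0) / p0                        # chosen so that P(0) = p0
    return k / (1 + a * np.exp(-r * t))      # growth saturating at carrying capacity k

t = np.arange(0, 21, 5)                      # years after the 2011 census
p0, r, k = 1.21e9, 0.012, 1.8e9              # hypothetical base population, growth rate, capacity
for ti in t:
    print(ti, f"exp={exponential(ti, p0, r):.3e}", f"logistic={logistic(ti, p0, r, k):.3e}")
```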
Procedia PDF Downloads 124
774 Analysis of Two Methods to Estimate Stochastic Demand in the Vehicle Routing Problem
Authors: Fatemeh Torfi
Abstract:
Estimation of stochastic demand in physical distribution in general, and efficient transport route management in particular, is emerging as a crucial factor in the urban planning domain. It is particularly important in some municipalities, such as Tehran, where sound demand management calls for a realistic analysis of the routing system. The methodology involved critically investigating a fuzzy least-squares linear regression (FLLR) approach to estimate the stochastic demands in the vehicle routing problem (VRP), bearing in mind the customers' preference order. An FLLR method is proposed for solving the VRP with stochastic demands. An approximate-distance fuzzy least-squares (ADFL) estimator is applied to original data taken from a case study. The SSR values of the ADFL estimator and real demand are obtained and then compared to the SSR values of the nominal demand and real demand. Empirical results showed that the proposed methods can be viable in solving problems under circumstances of vague and imprecise performance ratings. The results further proved that the ADFL was a realistic and efficient estimator for facing the stochastic demand challenges in vehicle routing system management and solving relevant problems.
Keywords: fuzzy least-squares, stochastic, location, routing problems
Procedia PDF Downloads 434
773 Application of Residual Correction Method on Hyperbolic Thermoelastic Response of Hollow Spherical Medium in Rapid Transient Heat Conduction
Authors: Po-Jen Su, Huann-Ming Chou
Abstract:
In this article, we use the residual correction method to deal with transient thermoelastic problems in a hollow spherical region when the continuum medium possesses spherically isotropic thermoelastic properties. Based on linear thermoelastic theory, the equations of hyperbolic heat conduction and thermoelastic motion were combined to establish a thermoelastic dynamic model with consideration of the deformation acceleration effect and the non-Fourier effect under the condition of transient thermal shock. The approximate solutions of the temperature and displacement distributions are obtained using the residual correction method based on the maximum principle in combination with the finite difference method, making it easier and faster to obtain upper and lower approximations of the exact solutions. The proposed method is found to be an effective numerical method with satisfactory accuracy. Moreover, the results show that the effect of the transient thermal shock induced by deformation acceleration is enhanced by non-Fourier heat conduction, with increased peak stress. The influence on the stress increases with the thermal relaxation time.
Keywords: maximum principle, non-Fourier heat conduction, residual correction method, thermo-elastic response
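For context, the hyperbolic (non-Fourier) heat conduction referred to above is commonly written, in its simplest form, as

$$ \tau\,\frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha\,\nabla^2 T, $$

where $\tau$ is the thermal relaxation time and $\alpha$ the thermal diffusivity; the article's coupled thermoelastic model additionally includes the deformation (acceleration) terms.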
Procedia PDF Downloads 425
772 A Survey on Constraint Solving Approaches Using Parallel Architectures
Authors: Nebras Gharbi, Itebeddine Ghorbel
Abstract:
In recent years, with the advancement of multicore computing, the constraint programming community has tried to benefit from the capacity of new machines and make the best use of them through several parallel schemes for constraint solving. In this paper, we propose a survey of the different approaches proposed to solve Constraint Satisfaction Problems using parallel architectures. These approaches use a parallel architecture in different ways: the problem itself could be solved differently by several solvers or could be split over solvers.
Keywords: constraint programming, parallel programming, constraint satisfaction problem, speed-up
Procedia PDF Downloads 319
771 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we could estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree). The method performed on this dataset is decision trees. A decision tree is a non-parametric classification method which uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
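A minimal sketch of the pruning-by-cross-validation idea is given below; the study itself used R, so this scikit-learn version with a synthetic feature matrix and labels is only an equivalent illustration, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 6))                       # e.g., node features such as density, transitivity
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # synthetic "divergent node" label

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"ccp_alpha": [0.0, 0.001, 0.01, 0.05]},  # cost-complexity pruning strength
    cv=5,                                                # k-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)
print("best pruning alpha:", search.best_params_, "cv accuracy:", round(search.best_score_, 3))
```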
Procedia PDF Downloads 157
770 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models, e.g., k-omega, k-epsilon and a Reynolds Stress Model (RSM), as well as LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code SATURNE and typically using 32768 or more processors in parallel. Visualisations were performed by means of PARAVIEW. Different turbulence models, including all six flow situations, could be successfully analysed and validated against analytical considerations and by comparison to other databases. It showed that an RSM represents an appropriate choice with respect to modeling high-Reynolds number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 382
769 A Cohort Study of Early Cardiologist Consultation by Telemedicine on the Critical Non-STEMI Inpatients
Authors: Wisit Wichitkosoom
Abstract:
Objectives: To find out the effect of early cardiologist consultation, using a simple technology, on the diagnosis and early proper management of patients with Non-STEMI at the emergency departments of district hospitals without a cardiologist on site before transfer. Methods: A cohort study was performed at Udonthani general hospital in Udonthani province from 1 October 2012 to 30 September 2013, with 892 patients diagnosed with Non-STEMI. The patients, with a mean age of 46.8 years, had been transferred because of a Non-STEMI diagnosis over a 12-week study period. Patients who were transferred, in addition to receiving proper care, were offered a cardiologist consultation, with an average transfer time to Udonthani hospital of 1.5 hours. The main outcome measure was length of hospital stay; mortality at 3 months, inpatient investigation, and the transfer rate to a higher-facility hospital were also studied. Results: Hospital stay was significantly shorter for those who did not consult a cardiologist (hazard ratio 1.19; approximate 95% CI 1.001 to 1.251; p = 0.039). In total, 136 cases were transferred to a higher-facility hospital. There was no statistically significant difference in overall mortality between the groups (p = 0.068). Conclusions: Early cardiologist consultation can reduce the length of hospital stay for patients with cardiovascular conditions outside of a cardiac center. This basic technology can be applied for patient safety.
Keywords: critical, telemedicine, safety, non STEMI
Procedia PDF Downloads 418
768 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods are developed based on past landslide failure points to produce landslide susceptibility maps, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include Amhara Saint Town's western part, the northern part, and St. Gebreal Church villages, with mean susceptibility values greater than 0.5. However, rainfall, distance to road, and slope were typically among the top leading factors for most villages. The primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that various places should take different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
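A hedged sketch of the model-comparison step is shown below using synthetic stand-in data with fourteen features; the models, metrics and cross-validation setup mirror the abstract, but the data and hyperparameters are placeholders, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=14, random_state=0)  # 14 conditioning factors
models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    "NB": GaussianNB(),
}
for name, model in models.items():
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: F1={f1:.2f}, AUC={auc:.2f}")
```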
Procedia PDF Downloads 82
767 The Failure and Energy Mechanism of Rock-Like Material with Single Flaw
Authors: Yu Chen
Abstract:
This paper investigates the influence of a flaw on the failure process of rock-like material under uniaxial compression. In the laboratory, uniaxial compression tests of intact specimens and a series of specimens containing a single flaw were conducted. The inclination angles of the flaws include 0°, 15°, 30°, 45°, 60°, 75° and 90°. Based on the laboratory tests, the corresponding numerical simulation models were built and loaded in PFC2D. After analysing the crack initiation and failure modes, the deformation field, and the energy mechanism for both the laboratory tests and the numerical simulation, it can be concluded that the influence of a flaw on the failure process is determined by its inclination. The characteristic stresses generally increase with rising flaw angle. Tensile cracks develop from gentle flaws (α ≤ 30°) and shear cracks develop from the other flaws. The propagation of cracks changes during the failure process, and the failure mode of a specimen corresponds to the orientation of the flaw. A flaw has a significant influence on the transverse deformation field at the middle of the specimen, except for the 75° and 90° flaw samples. The input energy, strain energy and dissipation energy of the specimens show approximately increasing trends with rising flaw angle, and large differences are present in the energy distribution.
Keywords: failure pattern, particle deformation field, energy mechanism, PFC
Procedia PDF Downloads 213
766 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data
Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin
Abstract:
The main objective in a change detection problem is to develop algorithms for the efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of time series data based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart designed for abrupt shift detection using Monte Carlo simulations. In terms of the expected time for detection, the MCUSUM procedure is found to have better performance than a standard CUSUM chart for the detection of a gradual change in mean. The algorithm is then applied and tested on randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test
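For reference, the standard one-sided CUSUM recursion that the proposed MCUSUM modifies is sketched below on synthetic data with a gradual linear drift; the allowance k and threshold h are illustrative choices, not the paper's design values.

```python
import numpy as np

def cusum_upper(x, target_mean, k=0.5, h=5.0):
    """Return the alarm index (or -1) and the path S_t = max(0, S_{t-1} + x_t - mu0 - k)."""
    s, path = 0.0, []
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - target_mean) - k)   # k: allowance, roughly half the shift of interest
        path.append(s)
        if s > h:                                   # h: decision threshold
            return t, np.array(path)
    return -1, np.array(path)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(0, 1, 100) + 0.02 * np.arange(100)])  # gradual linear drift in mean
alarm, _ = cusum_upper(x, target_mean=0.0)
print("first alarm at index:", alarm)
```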
Procedia PDF Downloads 298
765 General Time-Dependent Sequenced Route Queries in Road Networks
Authors: Mohammad Hossein Ahmadi, Vahid Haghighatdoost
Abstract:
Spatial databases have been an active area of research over the years. In this paper, we study how to answer general time-dependent sequenced route queries. Given the origin and destination of a user over a time-dependent road network graph, an ordered list of categories of interest and a departure time interval, our goal is to find the minimum travel time path, along with the best departure time, that minimizes the total travel time from the source location to the given destination, passing through a sequence of points of interest belonging to each of the specified categories of interest. The challenge of this problem is the added complexity over optimal sequenced route queries: first, the road network is time-dependent, and second, the user defines a departure time interval instead of one single departure time instance. For processing general time-dependent sequenced route queries, we propose two solutions, the Discrete-Time and Continuous-Time Sequenced Route approaches, finding approximate and exact solutions, respectively. Our proposed approaches traverse the road network based on the A*-search paradigm equipped with an efficient heuristic function for shrinking the search space. Extensive experiments are conducted to verify the efficiency of our proposed approaches.
Keywords: trip planning, time dependent, sequenced route query, road networks
Procedia PDF Downloads 321
764 Optimality of Shapley Value Mechanism under Sybil Strategies
Authors: Bruno Mazorra Roig
Abstract:
In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies, where agents can create fake identities to manipulate outcomes, has not yet been studied. In this paper, we delve into the intricacies of different cost-sharing mechanisms proposed in the literature, highlighting their non-Sybil-resistant nature. Furthermore, we prove that under mild conditions, a Sybil-proof cost-sharing mechanism for public excludable goods is at least (n/2 + 1)-approximate. This finding reveals an exponential increase in the worst-case social cost in environments where agents are restricted from using Sybil strategies. We introduce the concept of Sybil Welfare Invariant mechanisms, where a mechanism maintains its worst-case welfare under Sybil strategies for every set of prior beliefs with full support, even when the mechanism is not Sybil-proof. Finally, we prove that the Shapley value mechanism for public excludable goods holds this property, and so deduce that the worst-case social cost of this mechanism is the nth harmonic number Hn under the equilibrium of the game with Sybil strategies, matching the worst-case social cost bound for cost-sharing mechanisms. This finding carries important implications for decentralized autonomous organizations (DAOs), indicating that they are capable of funding public excludable goods efficiently, even when the total number of agents is unknown.
Keywords: game theory, mechanism design, cost sharing, false-name proofness
Procedia PDF Downloads 64
763 Solution of Singularly Perturbed Differential Difference Equations Using Liouville Green Transformation
Authors: Y. N. Reddy
Abstract:
The class of differential-difference equations which has characteristics of both classes, i.e., delay/advance and singularly perturbed behaviour, is known as singularly perturbed differential-difference equations. The expressions 'positive shift' and 'negative shift' are also used for 'advance' and 'delay', respectively. In general, an ordinary differential equation in which the highest order derivative is multiplied by a small positive parameter and which contains at least one delay/advance is known as a singularly perturbed differential-difference equation. Singularly perturbed differential-difference equations arise in the modelling of various practical phenomena in bioscience, engineering and control theory, specifically in variational problems, in describing the human pupil-light reflex, in a variety of models for physiological processes or diseases, and in first exit time problems in modelling the determination of the expected time for the generation of action potentials in nerve cells by random synaptic inputs in dendrites. In this paper, we envisage the use of the Liouville-Green transformation to find the solution of singularly perturbed differential-difference equations. First, using a Taylor series, the given singularly perturbed differential-difference equation is approximated by an asymptotically equivalent singular perturbation problem. Then the Liouville-Green transformation is applied to get the solution. Several model examples are solved, and the results are compared with other methods. It is observed that the present method gives better approximate solutions.
Keywords: difference equations, differential equations, singular perturbations, boundary layer
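The Taylor-series step mentioned above replaces the shifted term by a local expansion; for a small delay $\delta$, for example,

$$ y(x-\delta) \approx y(x) - \delta\, y'(x) + \frac{\delta^{2}}{2}\, y''(x), $$

which converts the differential-difference equation into an asymptotically equivalent singularly perturbed ordinary differential equation to which the Liouville-Green transformation can then be applied.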
Procedia PDF Downloads 199