Search results for: cluster model approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26783

25943 Constrained RGBD SLAM with a Prior Knowledge of the Environment

Authors: Kathia Melbouci, Sylvie Naudet Collette, Vincent Gay-Bellile, Omar Ait-Aider, Michel Dhome

Abstract:

In this paper, we address the problem of real-time localization and mapping in an indoor environment, assisted by a partial prior 3D model and using an RGBD sensor. The proposed solution relies on a feature-based RGBD SLAM algorithm to localize the camera and update the 3D map of the scene. To improve the accuracy and robustness of the localization, we propose to combine, in a local bundle adjustment process, geometric information provided by a prior coarse 3D model of the scene (e.g. generated from the 2D floor plan of the building) with RGBD data from a Kinect camera. The proposed approach is evaluated on a public benchmark dataset as well as on a real scene acquired with a Kinect sensor.
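As a toy illustration of adding prior-model constraints to a local bundle adjustment (assuming SciPy is available; the camera intrinsics, points, weights, and plane prior are invented for the sketch and are not the authors' formulation):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "local bundle adjustment" with a planar prior. All names and values are
# illustrative, not the paper's actual formulation.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])  # pinhole intrinsics

def project(points, K):
    """Project 3D points (N,3) in the camera frame to pixel coordinates (N,2)."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

rng = np.random.default_rng(0)
true_pts = np.column_stack([rng.uniform(-1, 1, 8), rng.uniform(-1, 1, 8), np.full(8, 4.0)])
obs = project(true_pts, K) + rng.normal(0, 0.5, (8, 2))  # noisy 2D observations

# Prior knowledge: all 8 points lie on a wall plane n.x = d with n=(0,0,1), d=4
n, d, w_plane = np.array([0., 0., 1.]), 4.0, 10.0

def residuals(x):
    pts = x.reshape(-1, 3)
    r_proj = (project(pts, K) - obs).ravel()   # reprojection residuals
    r_plane = w_plane * (pts @ n - d)          # weighted point-to-plane residuals
    return np.concatenate([r_proj, r_plane])

x0 = (true_pts + rng.normal(0, 0.3, true_pts.shape)).ravel()  # perturbed initial guess
sol = least_squares(residuals, x0)
print(np.abs(sol.x.reshape(-1, 3)[:, 2] - 4.0).max())  # depths pulled onto the prior plane
```

Note how the plane term makes the otherwise unobservable depth (a single camera cannot fix scale along a viewing ray) well constrained, which is exactly the benefit of the prior model described above.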

Keywords: SLAM, global localization, 3D sensor, bundle adjustment, 3D model

Procedia PDF Downloads 387
25942 Standard Resource Parameter Based Trust Model in Cloud Computing

Authors: Shyamlal Kumawat

Abstract:

Cloud computing is shifting the way IT capital is utilized. It dynamically delivers convenient, on-demand access to shared pools of software, platform, and hardware resources as a service over the internet, a model made possible by sophisticated automation, provisioning, and virtualization technologies. Users want the ability to access these services, including infrastructure resources, how and when they choose. To accommodate this shift in the consumption model, technology has to deal with the security, compatibility, and trust issues associated with delivering that convenience to application business owners, developers, and users. Among these issues, trust has attracted extensive attention in cloud computing as a means to enhance security. This paper proposes a standard-resource-parameter-based trust model in cloud computing for selecting appropriate cloud service providers. The direct trust of cloud entities is computed on the basis of past interaction evidence and sustained by their present performance. Various SLA parameters between consumer and provider are considered in the trust computation and compliance process. Simulations are performed using the CloudSim framework, and experimental results show that the proposed model is effective and extensible.
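Since the abstract does not give the trust formula, the following hypothetical sketch shows one way a direct-trust score could be computed from weighted, time-decayed SLA compliance evidence; the weights, decay factor, and parameter names are all assumptions, not the paper's model:

```python
from dataclasses import dataclass

# Hypothetical direct-trust sketch: weights, decay, and SLA parameter names
# below are illustrative assumptions only.
@dataclass
class Interaction:
    uptime: float      # fraction of SLA-promised availability delivered (0..1)
    latency: float     # fraction of requests within the SLA latency bound (0..1)
    throughput: float  # delivered/promised throughput ratio, capped at 1.0

WEIGHTS = {"uptime": 0.5, "latency": 0.3, "throughput": 0.2}  # assumed SLA weights
DECAY = 0.8  # older evidence counts less (assumed exponential aging)

def direct_trust(history):
    """Weighted SLA compliance over past interactions (oldest first), newest weighted highest."""
    score, norm = 0.0, 0.0
    for age, ev in enumerate(reversed(history)):  # age 0 = most recent interaction
        compliance = sum(WEIGHTS[k] * getattr(ev, k) for k in WEIGHTS)
        score += (DECAY ** age) * compliance
        norm += DECAY ** age
    return score / norm if norm else 0.0

provider = [Interaction(0.99, 0.95, 1.0), Interaction(0.90, 0.80, 0.7)]
print(round(direct_trust(provider), 3))
```

The resulting score lies in [0, 1] and could then be compared across providers during selection, as the abstract describes.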

Keywords: cloud, IaaS, SaaS, PaaS

Procedia PDF Downloads 318
25941 The Impact of Autonomous Driving on Cities of the Future: A Literature Review

Authors: Maximilian A. Richter

Abstract:

Public authorities need to understand the role and impacts of autonomous vehicles (AVs) on the mobility system. At present, however, research shows that the projected impacts of AVs on cities vary widely. As a consequence, it is difficult to make recommendations to policymakers on how they should prepare for the future when so much remains unknown about this technology. The study aims to provide an overview of the literature on how autonomous vehicles will affect the cities and traffic of the future. To this purpose, the most important studies are first selected and their results summarized. Further on, it is clarified which advantages AVs offer cities and how they can help address cities' current problems and challenges. To achieve the research aim and objectives, this paper adopts a literature review approach. In a first step, the most important studies are extracted; this is limited to peer-reviewed studies published in high-ranked journals such as the Journal of Transportation: Part A. In step 2, the most important key performance indicators (KPIs), such as traffic volume or energy consumption, are selected from the literature. Because different terms are used in the literature for similar statements/KPIs, these must first be clustered. For each cluster, the reported changes from the respective studies are compiled, along with their survey methodologies. In step 3, a sensitivity analysis is made per cluster, analyzing how the different studies arrive at their findings and on which assumptions, scenarios, and methods these calculations are based. From the results of the sensitivity analysis, the success factors for the implementation of autonomous vehicles are derived, and statements are made about the conditions under which AVs can be successful.

Keywords: autonomous vehicles, city of the future, literature review, traffic simulations

Procedia PDF Downloads 90
25940 Formal Verification for Ethereum Smart Contract Using Coq

Authors: Xia Yang, Zheng Yang, Haiyong Sun, Yan Fang, Jingyu Liu, Jia Song

Abstract:

A smart contract in Ethereum is a unique program deployed on the Ethereum Virtual Machine (EVM) to help manage cryptocurrency. The security of smart contracts is critical to Ethereum's operation and highly sensitive. In this paper, we present a formal model for smart contracts, using the separated term-obligation (STO) strategy to formalize and verify them. We use the IBM smart sponsor contract (SSC) as an example to elaborate the details of the formalization process. We also propose a formal smart sponsor contract model (FSSCM) and verify the SSC's security properties with the interactive theorem prover Coq. Using our formal model and verification method, we found the 'Unchecked-Send' vulnerability in the SSC. Finally, we demonstrate how other smart contracts can be formalized and verified with this approach, and our work indicates that formal verification can effectively verify the correctness and security of smart contracts.

Keywords: smart contract, formal verification, Ethereum, Coq

Procedia PDF Downloads 658
25939 An Approach to Analyze Testing of Nano On-Chip Networks

Authors: Farnaz Fotovvatikhah, Javad Akbari

Abstract:

The test time of a test architecture is an important factor that depends on the architecture's delay and its test patterns. Here, a new network-on-chip-based architecture for storing test results is presented. In addition, a simple analytical model is proposed to calculate link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors were implemented and simulated, and their test times calculated. BIST and Ext were also compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated and the test structure optimized for high-speed testing.

Keywords: test, nano on-chip network, JTAG, modelling

Procedia PDF Downloads 465
25938 Phonological Characteristics of Severe to Profound Hearing Impaired Children

Authors: Akbar Darouie, Mamak Joulaie

Abstract:

This study was performed in view of the importance of phonological skills development and its influence on other aspects of language. The objective was to determine selected phonological indexes in children with hearing impairment and compare them with hearing children. A convenience sample was selected from a rehabilitation center and a kindergarten in Karaj, Iran. Participants consisted of 12 hearing-impaired and 12 hearing children (age range: 5 years and 6 months to 6 years and 6 months). The hearing-impaired children had severe to profound hearing loss; three of them had cochlear implants and the others wore hearing aids. Conversational speech of these children was recorded, and the first 50 utterances were selected for analysis. Percentage of consonants correct (PCC) and percentage of vowels correct (PVC), initial and final consonant omission errors, consonant cluster omission errors, and syllabic structure variety were compared between the two groups. Data were analyzed with t-tests (SPSS version 16). Comparison of PCC and PVC averages in the two groups showed a significant difference (p < 0.01). There were also significant differences between the groups in final consonant omission errors (p < 0.001), initial consonant omission errors (p < 0.01), and consonant cluster omission errors (p < 0.001). In addition, some changes in syllabic structures were seen in children with hearing impairment compared with the typical group. This study demonstrates some phonological differences in the Farsi language between the two groups of children; therefore, it seems this issue should be noted in clinical practice.

Keywords: hearing impairment, phonology, vowel, consonant

Procedia PDF Downloads 225
25937 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering

Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli

Abstract:

Clustering is a versatile instrument in the analysis of data collections, providing insights into the underlying structures of the dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by involving the concept of partial memberships (values in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised, such as FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, and PCM. Each of these algorithms has its own advantages and drawbacks, so none of them performs superiorly on all datasets. In this paper we experimentally compare the FCM, GK, and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly, we theoretically discuss the advantages and drawbacks of each of these algorithms and describe the hybrid clustering model, which exploits their advantages while diminishing their drawbacks. Secondly, we experimentally compare the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
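As a concrete illustration of the baseline algorithm, here is a minimal fuzzy c-means sketch in plain NumPy (the data and parameters are illustrative, not those of the paper's experiments):

```python
import numpy as np

# Minimal fuzzy c-means (FCM): alternate between the standard center and
# membership updates until convergence.
def fcm(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)            # standard FCM membership update
    return centers, U

# Two well-separated blobs; FCM should place one center near each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(5, 0.2, (20, 2))])
centers, U = fcm(X, c=2)
print(np.sort(centers[:, 0]).round(1))
```

The fuzzifier m controls how soft the partition is; m → 1 recovers hard k-means-like memberships, which is one of the trade-offs the comparison above explores.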

Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering model

Procedia PDF Downloads 496
25936 Magneto-Rheological Damper Based Semi-Active Robust H∞ Control of Civil Structures with Parametric Uncertainties

Authors: Vedat Senol, Gursoy Turan, Anders Helmersson, Vortechz Andersson

Abstract:

In developing a mathematical model of a real structure, the simulation results of the model may not match the real structural response. This is a general problem arising during dynamic motion of the structure, which may be modeled by means of parameter variations in the stiffness, damping, and mass matrices. These changes in parameters need to be estimated, and the mathematical model updated, to obtain better control performance and robustness. In this study, a linear fractional transformation (LFT) is utilized for uncertainty modeling. Further, a general approach is presented to the design of an H∞ controller with a magneto-rheological damper (MRD) for vibration reduction in a building with mass, damping, and stiffness uncertainties.

Keywords: uncertainty modeling, structural control, MR Damper, H∞, robust control

Procedia PDF Downloads 128
25935 Time-Dependent Density Functional Theory of an Oscillating Electron Density around a Nanoparticle

Authors: Nilay K. Doshi

Abstract:

A theoretical probe describing the excited energy states of the electron density surrounding a nanoparticle (NP) is presented. An electromagnetic (EM) wave interacts with an NP much smaller than the incident wavelength. The plasmon that oscillates locally around the NP comprises excited conduction electrons. The system is based on the jellium model of a cluster of metal atoms. The Hohenberg-Kohn (HK) equations and the variational Kohn-Sham (KS) scheme have been used to obtain the NP electron density in the ground state. Furthermore, time-dependent density functional theory (TDDFT) is used to treat the excited states in a density functional theory (DFT) framework. The non-interacting fermionic kinetic energy is shown to be a functional of the electron density. The time-dependent potential is written as the sum of the nuclear potential and the incoming EM field. This view of the quantum oscillation of the electron density is part of the localized surface plasmon resonance.
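For context, the time-dependent Kohn-Sham equations that TDDFT solves can be written in their standard textbook form (atomic units; this is the generic formulation, not notation taken from the paper):

```latex
% Time-dependent Kohn-Sham equations (standard form, atomic units)
i\,\frac{\partial}{\partial t}\,\varphi_j(\mathbf{r},t)
  = \left[-\frac{\nabla^2}{2} + v_s[n](\mathbf{r},t)\right]\varphi_j(\mathbf{r},t),
\qquad
n(\mathbf{r},t) = \sum_{j=1}^{N} |\varphi_j(\mathbf{r},t)|^2,
% with the effective potential split into external (nuclei plus EM field),
% Hartree, and exchange-correlation parts:
v_s[n](\mathbf{r},t) = v_{\mathrm{ext}}(\mathbf{r},t)
  + \int \frac{n(\mathbf{r}',t)}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'
  + v_{\mathrm{xc}}[n](\mathbf{r},t)
```

Here the "sum of the nuclear potential and the incoming EM field" mentioned above corresponds to the external-potential term.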

Keywords: electron density, energy, electromagnetic, DFT, TDDFT, plasmon, resonance

Procedia PDF Downloads 315
25934 Enhancing Project Performance Forecasting using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

Accurate forecasting of project performance metrics is crucial for successfully managing and delivering urban road reconstruction projects. Traditional methods often rely on static baseline plans and fail to consider the dynamic nature of project progress and external factors. This research proposes a machine learning-based approach to forecast project performance metrics, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category in an urban road reconstruction project. The proposed model utilizes time series forecasting techniques, including Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance based on historical data and project progress. The model also incorporates external factors, such as weather patterns and resource availability, as features to enhance the accuracy of forecasts. By applying the predictive power of machine learning, the performance forecasting model enables proactive identification of potential deviations from the baseline plan, which allows project managers to take timely corrective actions. The research aims to validate the effectiveness of the proposed approach using a case study of an urban road reconstruction project, comparing the model's forecasts with actual project performance data. The findings of this research contribute to the advancement of project management practices in the construction industry, offering a data-driven solution for improving project performance monitoring and control.
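As a minimal sketch of the forecasting idea (not the paper's ARIMA/LSTM implementation), an autoregressive model can be fitted to a synthetic cost-variance series and rolled forward; the series, lag order, and horizon below are assumptions for illustration:

```python
import numpy as np

# Fit an AR(1) model to a synthetic cost-variance series by least squares
# and iterate the fitted recursion to forecast future periods.
rng = np.random.default_rng(0)
cv = np.zeros(60)
for t in range(1, 60):                       # synthetic AR(1) cost-variance history
    cv[t] = 0.8 * cv[t - 1] + rng.normal(0, 1.0)

# Least-squares fit of cv[t] = phi * cv[t-1] + c
A = np.column_stack([cv[:-1], np.ones(59)])
phi, c = np.linalg.lstsq(A, cv[1:], rcond=None)[0]

def forecast(last, steps):
    """Roll the fitted AR(1) recursion forward for the requested horizon."""
    out = []
    for _ in range(steps):
        last = phi * last + c
        out.append(last)
    return out

print([round(v, 2) for v in forecast(cv[-1], 3)])
```

ARIMA generalizes this with differencing and moving-average terms, and an LSTM replaces the linear recursion with a learned nonlinear one; both retain this same predict-from-history structure.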

Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, earned value management

Procedia PDF Downloads 23
25933 The Family as an Agent for Change in Aerobic Activity and Obesity in Grade 2-3 Schoolchildren

Authors: T. Goldstein, E. Serok, J. D. Kark

Abstract:

Background and Aim: The prevalence of obesity is increasing worldwide and in Israel. To meet this challenge, our study tests a new educational approach through a controlled school-based trial aimed at improving eating habits, increasing aerobic activity, and reducing obesity in Grades 2-3. Methods and Design: A cluster randomized controlled trial allocated 4 elementary schools (with 2nd- and 3rd-grade classes each) to intervention or control groups; this allocation was switched with the next cohort of children. Recruitment took place in first grade, randomization at the beginning of second grade, and evaluation of results at the end of second grade and the beginning of third grade. The intervention comprised 5 joint parent-child classroom activities on health topics and 5 educational workshops for parents only, with Alfred Adler's concepts as guiding principles. Subjects: Of 743 children in 23 second-grade classes, parents provided informed consent for 508 (68%). Follow-up on the retention of health habits continued into third grade, for which additional parental approvals were required; parents provided informed consent for third-grade follow-up for 432 children. Results: At the end of 2nd grade, the amount of aerobic activity increased in the intervention group in comparison with the control group, the difference being marginally statistically significant (p=0.061). There was a significant difference between the groups in the percentage reporting no activity at the end of second grade, with a lower percentage in the intervention group than in the control group. There were differences between genders in the percentage of aerobic activity at the end of second grade (p=0.044) and in third grade (p < 0.0001). Height increased significantly (p=0.030), and waist circumference declined significantly (p=0.021), in the intervention group compared with the control group. There were no significant between-group differences in BMI or weight.
Conclusion: There were encouraging changes in aerobic activity and in anthropometric measurements. To maintain these changes over longer periods, the nutrition and activity themes should be refreshed annually in school using the model.

Keywords: aerobic activity, child obesity, Alfred Adler, schoolchildren

Procedia PDF Downloads 137
25932 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon

Abstract:

The missing-value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, evaluated using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE), and estimated bias (EB). In this study, we applied both methods to imputing missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
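A minimal EM-style imputation sketch for a two-variable linear relationship (the data, missingness pattern, and convergence settings are illustrative, not the paper's simulation design):

```python
import numpy as np

# Iterate refit (M-step) and re-impute (E-step) until the imputed values
# stabilize, then score the imputations with MAE and RMSE against the truth.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)     # linear relationship with noise
miss = rng.choice(100, 20, replace=False)       # 20% of y made missing
y_obs = y.copy()
y_obs[miss] = np.nan

y_hat = np.where(np.isnan(y_obs), np.nanmean(y_obs), y_obs)  # crude start
for _ in range(50):                              # EM-style iteration
    b, a = np.polyfit(x, y_hat, 1)               # M-step: refit slope/intercept
    y_hat[miss] = b * x[miss] + a                # E-step: re-impute missing y

mae = np.mean(np.abs(y_hat[miss] - y[miss]))
rmse = np.sqrt(np.mean((y_hat[miss] - y[miss]) ** 2))
print(round(mae, 2), round(rmse, 2))
```

The EMB variant adds a bootstrap layer around this loop, refitting on resampled data so that imputation uncertainty is reflected in the results.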

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 378
25931 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach

Authors: Riznaldi Akbar

Abstract:

In this study, we compared the in-sample and out-of-sample performance of an Artificial Neural Network (ANN) model with a back-propagation algorithm in correctly predicting external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of experiencing an external debt crisis. The ANN's in-sample performance provides relatively superior results: the model correctly classifies 89.12 per cent of crises with reasonably low false alarms of 7.01 per cent. Out of sample, the prediction performance deteriorates considerably compared with the in-sample performance. This can be explained by the ANN model's tendency to over-fit the in-sample data while failing to fit the out-of-sample data well. Ten-fold cross-validation has been used to improve the out-of-sample prediction accuracy. The results also offer policy implications. The out-of-sample performance can be very sensitive to the size of the samples, which can yield a higher total misclassification error and lower prediction accuracy. The ANN model can identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
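A minimal sketch of a back-propagation-trained network on a toy "crisis / no crisis" dataset (the architecture, features, and data are illustrative assumptions, not the paper's specification):

```python
import numpy as np

# One hidden layer, sigmoid output, cross-entropy loss, plain gradient descent.
rng = np.random.default_rng(0)
# 3 toy indicators (exchange rate, reserves, exports) with separable labels
X = rng.normal(0, 1, (200, 3))
y = (X @ np.array([1.5, -1.0, -0.8]) > 0).astype(float)[:, None]

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # hidden layer
    P = sigmoid(H @ W2 + b2)              # predicted crisis probability
    dZ2 = (P - y) / len(X)                # cross-entropy gradient w.r.t. logits
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)     # back-propagate through tanh
    W2 -= 0.5 * H.T @ dZ2; b2 -= 0.5 * dZ2.sum(0)
    W1 -= 0.5 * X.T @ dZ1; b1 -= 0.5 * dZ1.sum(0)

acc = ((P > 0.5) == (y > 0.5)).mean()
print(round(acc, 3))
```

In practice, the in-sample accuracy of such a network overstates its out-of-sample skill, which is exactly the over-fitting issue cross-validation is used to mitigate above.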

Keywords: debt crisis, external debt, artificial neural network, ANN

Procedia PDF Downloads 426
25930 Model of MSD Risk Assessment at Workplace

Authors: K. Sekulová, M. Šimon

Abstract:

This article presents a risk assessment model for upper-extremity musculoskeletal disorders (MSDs) at the workplace. The model uses risk factors that are responsible for damage to the musculoskeletal system. Based on statistical calculations, the model can determine the MSD risk faced by workers exposed to these risk factors, as well as how much the MSD risk would decrease if the factors were eliminated.

Keywords: ergonomics, musculoskeletal disorders, occupational diseases, risk factors

Procedia PDF Downloads 535
25929 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region in which they were calibrated. An alternative solution is presented in this paper, which focuses on utilizing in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct-Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered in the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness across operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
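The physical core of many semi-empirical NOx models is the thermal (Zeldovich) NO mechanism; a textbook rate correlation (in Heywood's form, quoted from memory) illustrates the strong burned-gas-temperature sensitivity described above. This is a generic illustration, not the authors' calibrated model, and the concentrations used are arbitrary:

```python
import math

# Textbook thermal-NO (Zeldovich) initial-rate correlation, concentrations in
# mol/cm^3, T in K. Constants and form quoted from memory as an illustration.
def dNO_dt(T, O2, N2):
    """Initial NO formation rate, mol/(cm^3 s)."""
    return 6.0e16 / math.sqrt(T) * math.exp(-69090.0 / T) * math.sqrt(O2) * N2

O2, N2 = 1.0e-6, 3.0e-5          # illustrative burned-zone concentrations
r2500 = dNO_dt(2500.0, O2, N2)
r2300 = dNO_dt(2300.0, O2, N2)
print(round(r2500 / r2300, 1))   # ~200 K hotter -> roughly an order of magnitude faster
```

This exponential temperature dependence is why the burned-zone temperature from IVC to EVO is the key physical input to the semi-empirical model.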

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 101
25928 Identification of Classes of Bilinear Time Series Models

Authors: Anthony Usoro

Abstract:

In this paper, two classes of bilinear time series models are obtained under certain conditions from the general bilinear autoregressive moving average model: the bilinear autoregressive (BAR) and bilinear moving average (BMA) models. From the general bilinear model, the BAR and BMA models are shown to exist for q = Q = 0 (hence j = 0) and p = P = 0 (hence i = 0), respectively. These models are found useful in modelling most economic and financial data.
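For reference, the general bilinear model the abstract starts from is commonly written as follows (standard BL(p, q, P, Q) notation; the paper's exact subclass derivations may differ):

```latex
% General bilinear time series model BL(p, q, P, Q), standard form:
X_t = \sum_{i=1}^{p} a_i X_{t-i} + \sum_{j=1}^{q} b_j e_{t-j}
    + \sum_{i=1}^{P} \sum_{j=1}^{Q} c_{ij}\, X_{t-i}\, e_{t-j} + e_t
% where {e_t} is a white-noise process. The BAR class retains the terms
% indexed by i (the moving-average index j is suppressed), while the BMA
% class retains the terms indexed by j (the autoregressive index i is
% suppressed), matching the conditions q = Q = 0 and p = P = 0 above.
```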

Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model

Procedia PDF Downloads 389
25927 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Bioinformatics is an active research area that combines biological questions with computer science methods. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. The computation of the LCSS plays a vital role in biomedicine, and it is an essential task in DNA sequence analysis in genetics, including a wide range of disease-diagnosis steps. The objective of this proposed system is to find the longest common subsequence present in normal and disease-affected human DNA sequences using a Self-Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated from each separated k-mer. These calculated values are fed as input to the Self-Organizing Map for clustering. The obtained clusters are then given to the longest common subsequence (LCSS) algorithm to find the common subsequences present in each cluster. It returns n(n-1)/2 subsequences for each cluster, where n is the number of k-mers in that cluster. Experimental outcomes of this proposed system produce the possible longest common subsequences of normal and disease-affected DNA data. Thus, the proposed system can be a good initial aid for finding disease-causing sequences. Finally, a performance analysis is carried out for different DNA sequences. The obtained values show that the retrieval of the LCSS is done in a shorter time than in the existing system.
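The core pairwise computation applied within each cluster is the classic dynamic-programming longest common subsequence algorithm; a self-contained sketch on toy sequences:

```python
# Classic dynamic-programming LCS: fill an (m+1) x (n+1) table of prefix LCS
# lengths, then backtrack to recover one longest common subsequence.
def lcs(a, b):
    """Return one longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            L[i + 1][j + 1] = L[i][j] + 1 if a[i] == b[j] else max(L[i][j + 1], L[i + 1][j])
    out, i, j = [], m, n                      # backtrack through the table
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("GATTACA", "ATTGCCA"))
```

Running this over all n(n-1)/2 pairs in a cluster, as described above, costs O(len(a) * len(b)) per pair, which is why clustering the k-mers first keeps the total work manageable.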

Keywords: clustering, k-mers, longest common subsequence, SOM

Procedia PDF Downloads 246
25926 Heritability and Diversity Analysis of Blast Resistant Upland Rice Genotypes Based on Quantitative Traits

Authors: Mst. Tuhina-Khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Md. Aktar-Uz-Zaman, Mahbod Sahebi

Abstract:

Rice is a staple crop of economic importance to most Asian people, and blast is the major constraint on higher yield. The heritability of plant traits helps plant breeders make appropriate selections and assess the magnitude of genetic improvement achievable through hybridization. Diversity in crop plants is necessary to manage continuing genetic erosion and to address the issues of genetic conservation needed to meet future food requirements. Therefore, an experiment was conducted to estimate heritability and determine the diversity of 27 blast-resistant upland rice genotypes based on 18 quantitative traits, using a randomized complete block design. Heritability values were found to vary from 38 to 93%. The lowest heritability belonged to the total number of tillers per plant (38%). In contrast, the number of filled grains per panicle and yield per plant (g) recorded the highest heritability values, 93 and 91% respectively. Cluster analysis based on the 18 traits grouped the 27 rice genotypes into six clusters. Cluster I was the largest, comprising 17 genotypes and accounting for about 62.96% of the total population. The multivariate analysis suggested that the genotype 'Chokoto 14' could be hybridized with 'IR 5533-55-1-11' and 'IR 5533-PP 854-1' to broaden the gene pool of blast-resistant upland rice germplasm for yield and other favorable characters.
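The grouping step can be sketched with standard hierarchical (Ward) clustering, assuming SciPy is available; the synthetic trait matrix below stands in for the paper's 27 genotypes x 18 traits data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Standardize the traits, build a Ward dendrogram, and cut it into a fixed
# number of clusters. Data are synthetic: 27 "genotypes" in 3 obvious groups.
rng = np.random.default_rng(0)
traits = np.vstack([rng.normal(m, 0.3, (9, 5)) for m in (0.0, 2.0, 4.0)])
z = (traits - traits.mean(0)) / traits.std(0)   # z-score each trait

Z = linkage(z, method="ward")                   # agglomerative dendrogram
labels = fcluster(Z, t=3, criterion="maxclust") # cut into 3 clusters
sizes = np.bincount(labels)[1:]
print(sorted(sizes))
```

Standardizing first matters because the 18 traits are measured in different units; without it, high-variance traits like grain counts would dominate the distances.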

Keywords: blast resistant, diversity analysis, heritability, upland rice

Procedia PDF Downloads 358
25925 Developing a Model of Teaching Writing Based On Reading Approach through Reflection Strategy for EFL Students of STKIP YPUP

Authors: Eny Syatriana, Ardiansyah

Abstract:

The purpose of this study was to develop a model of teaching writing based on texts that students read using a reflection strategy. The strategy allows students to read a text, write back its main idea, and then develop the text in their own sentences; writing practice thus begins with reading an interesting text, which students then develop into their own writing. The research questions are: (1) What kind of learning model can develop the students' writing ability? (2) What is the achievement of the students of STKIP YPUP through the reflection strategy? (3) Is the use of the strategy effective in developing students' competence in writing? (4) What is the students' level of interest in the use of the strategy in the writing subject? This development research consisted of several steps: (1) need analysis, (2) model design, (3) implementation, and (4) model evaluation. The need analysis was carried out through discussion among the writing lecturers to create a learning model for the writing subject. To assess the effectiveness of the model, an experiment was conducted with one class. The instruments and learning materials were validated by experts, and each step of material development was likewise validated by an expert. Following this development design, the researchers performed need analysis, prototype creation, content validation, and a limited empirical trial with the sample, with assessment and revision of the drafts before each subsequent step. In the second year, the prototype was tested empirically in four English department classes at STKIP YPUP; the test was implemented through action research, followed by evaluation and validation by the experts.

Keywords: learning model, reflection, strategy, reading, writing, development

Procedia PDF Downloads 351
25924 Analytical Modeling of Globular Protein-Ferritin in α-Helical Conformation: A White Noise Functional Approach

Authors: Vernie C. Convicto, Henry P. Aringa, Wilson I. Barredo

Abstract:

This study presents a conformational model of the helical structures of globular proteins, particularly ferritin, in the framework of the white noise path integral formulation, using associated Legendre functions, Bessel functions, and convolutions of Bessel and trigonometric functions as modulating functions. The model incorporates the chirality features of proteins and their helix-turn-helix structural sequence motif.

Keywords: globular protein, modulating function, white noise, winding probability

Procedia PDF Downloads 456
25923 Semi-Empirical Modeling of Heat Inactivation of Enterococci and Clostridia During the Hygienisation in Anaerobic Digestion Process

Authors: Jihane Saad, Thomas Lendormi, Caroline Le Marechal, Anne-marie Pourcher, Céline Druilhe, Jean-louis Lanoiselle

Abstract:

Agricultural anaerobic digestion consists in the conversion of animal slurry and manure into biogas and digestate. They need, however, to be treated at 70 ºC during 60 min before anaerobic digestion according to the European regulation (EC n°1069/2009 & EU n°142/2011). The impact of such heat treatment on the outcome of bacteria has been poorly studied up to now. Moreover, a recent study¹ has shown that enterococci and clostridia are still detected despite the application of such thermal treatment, questioning the relevance of this approach for the hygienisation of digestate. The aim of this study is to establish the heat inactivation kinetics of two species of enterococci (Enterococcus faecalis and Enterococcus faecium) and two species of clostridia (Clostridioides difficile and Clostridium novyi as a non-toxic model for Clostridium botulinum of group III). A pure culture of each strain was prepared in a specific sterile medium at concentration of 10⁴ – 10⁷ MPN / mL (Most Probable number), depending on the bacterial species. Bacterial suspensions were then filled in sterilized capillary tubes and placed in a water or oil bath at desired temperature for a specific period of time. Each bacterial suspension was enumerated using a MPN approach, and tests were repeated three times for each temperature/time couple. The inactivation kinetics of the four indicator bacteria is described using the Weibull model and the classical Bigelow model of first-order kinetics. The Weibull model takes biological variation, with respect to thermal inactivation, into account and is basically a statistical model of distribution of inactivation times as the classical first-order approach is a special case of the Weibull model. The heat treatment at 70 ºC / 60 min contributes to a reduction greater than 5 log10 for E. faecium and E. faecalis. However, it results only in a reduction of about 0.7 log10 for C. difficile and an increase of 0.5 log10 for C. novyi. 
Treatments at higher temperatures are required to reach a reduction greater than or equal to 3 log10 for C. novyi (such as 30 min / 100 ºC, 13 min / 105 ºC, 3 min / 110 ºC, and 1 min / 115 ºC), raising the question of the relevance of the 70 ºC / 60 min heat treatment for these spore-forming bacteria. To conclude, the heat treatment (70 ºC / 60 min) defined by the European regulation is sufficient to inactivate non-sporulating bacteria, but higher temperatures (> 100 ºC) are required for spore-forming bacteria to reach a 3 log10 reduction (sporicidal activity).
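The two kinetic models compared in the abstract can be written in a few lines. The sketch below uses the standard forms of the Weibull and Bigelow models; the parameter values in the comments are illustrative only, not the fitted values from this study.

```python
def weibull_log_reduction(t, delta, p):
    """Weibull survival model: log10(N/N0) = -(t/delta)**p.
    delta is the time for the first log10 reduction; p shapes the curve."""
    return -((t / delta) ** p)

def bigelow_log_reduction(t, d_value):
    """Classical first-order (Bigelow) model: log10(N/N0) = -t/D,
    where D is the decimal reduction time at the given temperature."""
    return -t / d_value

# With p = 1 the Weibull model reduces exactly to the first-order model,
# e.g. 60 min at a (hypothetical) D = 10 min gives a 6 log10 reduction.
```

Setting p = 1 makes the two functions coincide, which is the sense in which the first-order approach is a special case of the Weibull model.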

Keywords: heat treatment, enterococci, clostridia, inactivation kinetics

Procedia PDF Downloads 92
25922 A Nonlinear Visco-Hyper Elastic Constitutive Model for Modelling Behavior of Polyurea at Large Deformations

Authors: Shank Kulkarni, Alireza Tabarraei

Abstract:

The outstanding properties of polyurea, such as flexibility, durability, and chemical resistance, have given it a wide range of applications in various industries. Effective prediction of the response of polyurea under different loading and environmental conditions necessitates the development of an accurate constitutive model. Similar to most polymers, the behavior of polyurea depends on both strain and strain rate; the constitutive model should therefore capture both effects. To achieve this objective, in this paper, a nonlinear hyper-viscoelastic constitutive model is developed by the superposition of a hyperelastic and a viscoelastic model. The proposed constitutive model can capture the behavior of polyurea under compressive loading conditions at various strain rates. A four-parameter Ogden model and a Mooney-Rivlin model are used to model the hyperelastic behavior of polyurea. The viscoelastic behavior is modeled using both a three-parameter standard linear solid (SLS) model and a K-BKZ model. Comparison of the modeling results with experiments shows that the Ogden and SLS models more accurately predict the behavior of polyurea. The material parameters of the model are found by curve fitting of the proposed model to the uniaxial compression test data. The proposed model can closely reproduce the stress-strain behavior of polyurea for strain rates up to 6500/s.
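For the hyperelastic part, the uniaxial response of an incompressible Ogden material has a closed form that is easy to evaluate during curve fitting. The sketch below uses the standard Ogden nominal-stress expression; the parameter values in the test comments are illustrative, not the fitted polyurea constants.

```python
def ogden_uniaxial_stress(stretch, mu, alpha):
    """Nominal uniaxial stress of an incompressible N-term Ogden model:
    sigma = sum_i mu_i * (lam**(alpha_i - 1) - lam**(-alpha_i/2 - 1)).
    A two-term fit (mu1, alpha1, mu2, alpha2) corresponds to the
    four-parameter Ogden model referenced in the abstract."""
    return sum(m * (stretch ** (a - 1.0) - stretch ** (-a / 2.0 - 1.0))
               for m, a in zip(mu, alpha))
```

A single term with alpha = 2 recovers the neo-Hookean response mu*(lam - lam**-2), a quick sanity check when fitting compression data.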

Keywords: constitutive modelling, Ogden model, polyurea, SLS model, uniaxial compression test

Procedia PDF Downloads 225
25921 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have grown exponentially in recent years globally, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth of related e-waste deserves sufficient attention regionally and globally, given that 40% of its total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different studies and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of the country or region, and to overcome the previous errors. The logistic model method combined with the STELLA program has been used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, amounting to 125 tons of waste, in 2014, with an e-waste production peak expected in 2017. By 2020, the figure is expected to reach 4.17 million obsolete phones and 351.97 tons, with an environmental impact intensity 21 times that of 2005. It is thus concluded, through model testing and validation, that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has answered the questions raised by previous studies from the Czech Republic, Iran, and China.
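The logistic stock model at the core of such simulations can be sketched in a few lines. The delay-based obsolescence rule and all parameter values below are simplifying assumptions for illustration, not the calibrated STELLA model of the paper.

```python
import math

def phones_in_use(t, carrying_capacity, a, r):
    """Logistic growth of the mobile phone stock:
    S(t) = K / (1 + a * exp(-r * t))."""
    return carrying_capacity / (1.0 + a * math.exp(-r * t))

def cumulative_obsolete(t, carrying_capacity, a, r, lifespan):
    """Delay-based estimate: phones that entered the stock 'lifespan'
    years before t are assumed to have been discarded by time t."""
    return phones_in_use(t - lifespan, carrying_capacity, a, r)
```

The e-waste production peak then falls where the delayed logistic curve is steepest, i.e. roughly one average lifespan after the stock's inflection point.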

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 328
25920 A Bi-Objective Stochastic Mathematical Model for Agricultural Supply Chain Network

Authors: Mohammad Mahdi Paydar, Armin Cheraghalipour, Mostafa Hajiaghaei-Keshteli

Abstract:

Nowadays, in advanced countries, agriculture, as one of the most significant sectors of the economy, plays an important role in political and economic independence. Due to farmers' lack of information about product demand and the lack of proper planning for harvest time, a considerable amount of produce spoils annually. In this paper, we attempt to improve these unfavorable conditions by designing an effective supply chain network that minimizes the total costs of agricultural products along with minimizing shortages at demand points. To validate the proposed model, a stochastic optimization approach using the branch-and-bound solver of the LINGO software is utilized. Furthermore, to gather the parameter data, a case study in Mazandaran province, in the north of Iran, has been applied. Using the ɛ-constraint approach, a Pareto front is obtained, one of its Pareto solutions is selected as the best solution, and the related results of this solution are explained. Finally, conclusions and suggestions for future research are presented.
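The ɛ-constraint idea used to trace the Pareto front can be shown on a toy two-product instance: keep cost as the objective, move the shortage objective into a constraint, and sweep its bound ɛ. The data and the exhaustive grid search below are hypothetical stand-ins for the paper's model and its LINGO branch-and-bound solver.

```python
import itertools

def epsilon_constraint_front(demands, unit_costs, epsilons, step=1.0):
    """Toy epsilon-constraint sweep for a two-product supply plan:
    minimise shipping cost subject to total shortage <= epsilon,
    solved by exhaustive grid search over shipped quantities."""
    d1, d2 = demands
    c1, c2 = unit_costs
    grid1 = [i * step for i in range(int(d1 / step) + 1)]
    grid2 = [i * step for i in range(int(d2 / step) + 1)]
    front = []
    for eps in epsilons:
        best_cost = None
        for x1, x2 in itertools.product(grid1, grid2):
            shortage = (d1 - x1) + (d2 - x2)
            if shortage <= eps:
                cost = c1 * x1 + c2 * x2
                if best_cost is None or cost < best_cost:
                    best_cost = cost
        front.append((eps, best_cost))
    return front
```

Each (ɛ, cost) pair is one Pareto-efficient trade-off between shortage and cost; plotting them reproduces the Pareto front from which a preferred solution is picked.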

Keywords: perishable products, stochastic optimization, agricultural supply chain, ɛ-constraint

Procedia PDF Downloads 348
25919 Reducing Component Stress during Encapsulation of Electronics: A Simulative Examination of Thermoplastic Foam Injection Molding

Authors: Constantin Ott, Dietmar Drummer

Abstract:

The direct encapsulation of electronic components is an effective way of protecting components against external influences. In addition to achieving a sufficient protective effect, there are two other major challenges in satisfying the increasing demand for encapsulated circuit boards: the encapsulation process should be suitable for mass production and should impose a low load on the components. Injection molding is a method well suited to large-series production but typically imposes high component stress. This article pursues two aims: first, the development of a calculation model that allows the forces occurring during overmolding to be estimated from process variables and material parameters; second, the evaluation of a new approach for stress reduction by means of thermoplastic foam injection molding. For this purpose, simulation-based process data were generated with the Moldflow simulation tool, and component stresses were calculated with the calculation model. The suitability of this approach was clearly demonstrated, and a significant reduction in shear forces during overmolding was achieved. The resulting process development makes it possible to meet the two main requirements of direct encapsulation in addition to a high protective effect.
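A back-of-the-envelope version of such a force estimate illustrates why foaming helps. The Newtonian simplification and all values below are assumptions for illustration; the paper's calculation model and the Moldflow process data are more detailed.

```python
def overmolding_shear_force(melt_viscosity, shear_rate, wetted_area):
    """Rough estimate of the shear force on an overmolded component:
    F = tau * A with tau = eta * gamma_dot (Newtonian simplification).
    A foamed melt fills at lower effective viscosity and pressure,
    lowering tau and hence the force on the component."""
    return melt_viscosity * shear_rate * wetted_area

# Hypothetical values: compact melt vs. a lower-viscosity foamed melt.
compact = overmolding_shear_force(200.0, 1000.0, 1e-4)  # Pa*s, 1/s, m^2
foamed = overmolding_shear_force(120.0, 1000.0, 1e-4)
```

Under these assumed values the foamed melt reduces the estimated shear force proportionally to its viscosity reduction, which is the qualitative effect the simulations quantify.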

Keywords: encapsulation, stress reduction, foam-injection-molding, simulation

Procedia PDF Downloads 113
25918 Learning Algorithms for Fuzzy Inference Systems Composed of Double- and Single-Input Rule Modules

Authors: Hirofumi Miyajima, Kazuya Kishida, Noritaka Shigei, Hiromi Miyajima

Abstract:

Most self-tuning fuzzy systems, which are automatically constructed from learning data, are based on the steepest descent method (SDM). However, this approach often requires a long convergence time and gets stuck in shallow local minima. One solution is to use fuzzy rule modules with a small number of inputs, such as DIRMs (Double-Input Rule Modules) and SIRMs (Single-Input Rule Modules). In this paper, we consider a (generalized) DIRMs model composed of double- and single-input rule modules. Further, in order to reduce the redundant modules in the (generalized) DIRMs model, pruning and generative learning algorithms for the model are proposed. To show their effectiveness, numerical simulations for function approximation, the Box-Jenkins problem, and obstacle avoidance problems are performed.
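The structural idea behind SIRMs-type models is a weighted sum of small modules, each seeing only one input (a DIRM would see a pair). The sketch below uses plain functions as stand-ins for the fuzzy rule modules, so it only illustrates the aggregation, not the fuzzy inference or the learning algorithms themselves.

```python
def sirms_output(inputs, weights, rule_modules):
    """SIRMs inference: module i maps its single input x_i to f_i(x_i);
    the system output is the importance-weighted sum
    y = sum_i w_i * f_i(x_i). Both the weights w_i and the modules f_i
    would be tuned from data in a real SIRMs system."""
    return sum(w * f(x) for w, f, x in zip(weights, rule_modules, inputs))
```

Because each module depends on a single input, the number of parameters grows linearly with the input dimension, which is what keeps SDM-based tuning tractable.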

Keywords: Box-Jenkins problem, double-input rule module, fuzzy inference model, obstacle avoidance, single-input rule module

Procedia PDF Downloads 338
25917 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behavior of clay. The proposed model is based on bounding surface plasticity and the viscoplastic consistency framework to establish a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model satisfies the consistency condition in formulating the constitutive equation. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 475
25916 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories

Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan

Abstract:

In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, this approach, as recent work in deep learning has shown, is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use "fading." We find that this approach is superior to a traditional feedforward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Lastly, we find that a combined FFN+CNN is the best-performing network, with an error rate of 39%.
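The "fading" encoding can be sketched directly: older positions are drawn with lower intensity, so a single static multichannel image encodes the temporal order of the trajectories. The grid size, normalisation, and per-group channels below are assumptions for illustration, not the paper's exact preprocessing.

```python
def trajectories_to_image(trajectories, size=32):
    """Render multiagent trajectories into a multichannel 'image' with
    fading. One channel per group (e.g. offense, defense, ball);
    (x, y) coordinates are assumed normalised to [0, 1]. The most
    recent position gets intensity 1.0, earlier ones proportionally less."""
    img = [[[0.0] * size for _ in range(size)] for _ in trajectories]
    for c, traj in enumerate(trajectories):
        steps = len(traj)
        for t, (x, y) in enumerate(traj):
            row = min(int(y * size), size - 1)
            col = min(int(x * size), size - 1)
            fade = (t + 1) / steps  # later timesteps are brighter
            img[c][row][col] = max(img[c][row][col], fade)
    return img
```

The resulting channels-by-height-by-width array is exactly the shape a CNN input layer expects, which is what lets standard image filters pick up movement patterns.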

Keywords: basketball, computer vision, image processing, convolutional neural network

Procedia PDF Downloads 134
25915 A Multilevel Approach for Stroke Prediction Combining Risk Factors and Retinal Images

Authors: Jeena R. S., Sukesh Kumar A.

Abstract:

Stroke is one of the major causes of adult disability and morbidity in many developing countries, such as India. Early diagnosis of stroke is essential for timely prevention and cure. Various conventional statistical methods and computational intelligence models have been developed for predicting the risk and outcome of stroke. This research work focuses on a multilevel approach for predicting the occurrence of stroke based on various risk factors and non-invasive techniques such as retinal imaging. This risk prediction model can aid clinical decision making and help patients obtain an improved and reliable risk prediction.
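One common way to realise such a multilevel combination is late fusion of the two levels' scores. The weighting scheme and the two base predictors in this sketch are assumptions for illustration, not the paper's actual method.

```python
def multilevel_stroke_risk(clinical_score, retinal_score, w_clinical=0.6):
    """Late-fusion sketch of a multilevel predictor: combine a
    risk-factor-based probability (level 1) with a retinal-image-based
    probability (level 2) into a single clipped risk score.
    w_clinical is a hypothetical mixing weight."""
    fused = w_clinical * clinical_score + (1.0 - w_clinical) * retinal_score
    return min(max(fused, 0.0), 1.0)
```

In practice the two input scores would come from separately trained models (e.g. a regression on clinical risk factors and a classifier on retinal image features), with the fusion weight chosen on validation data.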

Keywords: prediction, retinal imaging, risk factors, stroke

Procedia PDF Downloads 283
25914 Analytical Hierarchical Process for Multi-Criteria Decision-Making

Authors: Luis Javier Serrano Tamayo

Abstract:

This research makes a first approach to the selection of an amphibious landing ship with strategic capabilities through the implementation of a multi-criteria model using the Analytic Hierarchy Process (AHP), in which a significant group of state-of-the-art alternatives has been considered. The variables were grouped at different levels to match design and performance characteristics, which affect the lifecycle as well as acquisition, maintenance, and operational costs. The model yielded an overall measure of effectiveness and an overall measure of cost for each kind of ship; these were compared with each other within the model and shown in a Pareto chart. The modeling was carried out using the Expert Choice software, based on the AHP method.
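The core AHP step, deriving criterion weights from a pairwise-comparison matrix, can be sketched without any specialised software. The criteria and judgments below are hypothetical; the paper performs this step in Expert Choice.

```python
def ahp_priorities(pairwise, iters=200):
    """Approximate the AHP priority vector as the principal eigenvector
    of a pairwise-comparison matrix, via power iteration. The returned
    weights are normalised to sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    return w

# Hypothetical criteria (e.g. cost, lifecycle, performance): the first
# is judged 3x as important as the second and 5x as important as the third.
matrix = [[1.0, 3.0, 5.0],
          [1.0 / 3.0, 1.0, 2.0],
          [1.0 / 5.0, 1.0 / 2.0, 1.0]]
priorities = ahp_priorities(matrix)
```

Repeating this at every level of the hierarchy and aggregating gives the overall measures of effectiveness and cost that are then compared in the Pareto chart.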

Keywords: analytic hierarchy process, multi-criteria decision-making, Pareto analysis, Colombian Marine Corps, projection operations, expert choice, amphibious landing ship

Procedia PDF Downloads 534