Search results for: time prediction algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20463

18243 Personalization of Context Information Retrieval Model via User Search Behaviours for Ranking Document Relevance

Authors: Kehinde Agbele, Longe Olumide, Daniel Ekong, Dele Seluwa, Akintoye Onamade

Abstract:

One major problem of most existing information retrieval systems (IRS) is that they provide uniform access and retrieval results to individual users, based solely on the query terms the user issues to the system. When using an IRS, users often present search queries made of ad-hoc keywords, and it is then up to the IRS to obtain a precise representation of the user's information need and of the context of that information. Meanwhile, the volume and range of Internet documents are growing exponentially, which makes it difficult for a user to obtain information that precisely matches the user's interest. This is due, firstly, to the fact that users often do not present queries to the IRS that optimally represent the information they want, and secondly, to the fact that the measure of a document's relevance is highly subjective across diverse users. In this paper, we address the problem by investigating the optimization of an IRS to individual information needs in order of relevance, through the development of algorithms that optimize the ranking of documents retrieved from the IRS. A two-fold approach is taken in order to retrieve domain-specific documents. Firstly, the context of a query determines the relevance of retrieved information, using personalization and context-awareness: executing the same query in diverse contexts often leads to diverse result rankings based on user preferences. Secondly, the relevant context aspects are incorporated in a way that supports the knowledge domain representing users' interests. Evolutionary algorithms are incorporated to improve the effectiveness of the IRS. A context-based information retrieval system that learns individual needs from user-provided relevance feedback is developed, and its retrieval effectiveness is evaluated using precision and recall metrics. The results demonstrate how attributes from user interaction behavior can be used to improve IR effectiveness.
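
As a minimal illustration of re-ranking from relevance feedback, the sketch below uses Rocchio-style query refinement over TF-IDF vectors; this is a simple stand-in for the evolutionary algorithms the paper actually employs, and all documents, weights, and feedback labels are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["rain forecast model", "football match results",
        "weather prediction algorithms", "stock market prediction"]
vec = TfidfVectorizer()
D = vec.fit_transform(docs).toarray()

q = vec.transform(["prediction"]).toarray()    # ad-hoc user query
relevant, nonrelevant = [2], [1]               # user relevance feedback
# Rocchio update: move the query toward relevant docs, away from non-relevant ones
q_new = q + 0.75 * D[relevant].mean(axis=0) - 0.15 * D[nonrelevant].mean(axis=0)
scores = cosine_similarity(q_new, D).ravel()
ranking = scores.argsort()[::-1]               # personalized result order
```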

Keywords: context, document relevance, information retrieval, personalization, user search behaviors

Procedia PDF Downloads 448
18242 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards augmented/virtual reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful, but this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency augmented/virtual reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the WebGL implementation limits the precision of floating point numbers to 16 bits on most devices. This can introduce noise into the image due to rounding errors, which add up over time. This is solved using an improved inter-frame compression algorithm: the algorithm detects changes between frames and reuses unchanged pixels from the previous frame, which eliminates the need for floating point subtraction and thereby cuts down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference; the kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between server and client by doing a fraction of the computing on the device, depending on the power of the device and on network conditions, and it is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between client and server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic: flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally; this is achieved by isolating client connections into different processes.
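
A small sketch of the weighted-difference change detection described in part 1: pixels whose kernel-weighted difference from the previous frame falls below a threshold are reused, so no floating-point subtraction noise accumulates. The kernel weights and threshold here are illustrative assumptions, and NumPy/SciPy stand in for the WebGL implementation.

```python
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float32) / 16.0   # tunable weights

def changed_mask(prev, curr, thresh=8.0):
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    weighted = convolve(diff, KERNEL, mode="nearest")      # weighted average difference
    return weighted > thresh                               # True where pixels changed

def compress(prev, curr):
    mask = changed_mask(prev, curr)
    return mask, curr[mask]            # transmit the mask plus changed pixels only

def reconstruct(prev, mask, values):
    out = prev.copy()
    out[mask] = values                 # unchanged pixels reused from previous frame
    return out
```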

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 60
18241 The Effect of Cinnamaldehyde on Escherichia coli Survival during Low Temperature Long Time Cooking

Authors: Fuji Astuti, Helen Onyeaka

Abstract:

The aim of the study was to investigate the combined effects of cinnamaldehyde (0.25 and 0.45% v/v) on the thermal resistance of pathogenic Escherichia coli during low temperature long time (LT-LT) cooking below 60℃. Three different static temperatures (48, 53 and 50℃) were tested, and the number of viable cells was studied. The starting cell concentration was 10⁸ CFU/ml. In this experiment, heat treatment efficiency for safe reduction was indicated by the decimal logarithmic reduction of viable recovered cells, which was monitored over 6 hours of heating. Thermal inactivation was measured by establishing death curves of the mean log surviving cells (log₁₀ CFU/ml) against designated time points (minutes) for each temperature tested. The findings showed that the addition of cinnamaldehyde elevated the thermal sensitivity of E. coli. However, the injured cells were found to be well-adapted to all temperatures tested after a certain point of cooking, at which they grew to more than 10⁵ CFU/ml.
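
For reference, the decimal logarithmic reduction used here to express heat-treatment efficiency is log₁₀(N₀/N) for viable counts N over time; a minimal sketch (with illustrative counts, not the study's data):

```python
import numpy as np

N0 = 1e8                                    # starting concentration, CFU/ml
counts = np.array([1e8, 1e6, 5e4, 2e3])     # recovered viable cells over time
log_reduction = np.log10(N0 / counts)       # e.g., a 5-log reduction = 99.999% killed
print(log_reduction)                        # [0.   2.   3.30 4.70]
```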

Keywords: cinnamaldehyde, decimal logarithm reduction, Escherichia coli, LT-LT cooking

Procedia PDF Downloads 344
18240 A Cooperative Space-Time Transmission Scheme Based on Symbol Combinations

Authors: Keunhong Chae, Seokho Yoon

Abstract:

This paper proposes a cooperative Alamouti space-time transmission scheme with low relay complexity for cooperative communication systems. In the proposed scheme, the source node combines the data symbols so that they form the Alamouti-coded structure at the destination node, whereas the conventional scheme performs the corresponding operations at the relay nodes. Simulation results show that the proposed scheme achieves second-order cooperative diversity while maintaining the same bit error rate (BER) performance as the conventional scheme.
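
For orientation, the sketch below shows the classical Alamouti space-time block code that such schemes build on: two symbols are sent over two slots so the destination can combine them orthogonally. How the source/relay roles are split is the paper's contribution and is not reproduced here; the channels and symbols are illustrative.

```python
import numpy as np

def alamouti_encode(s1, s2):
    # rows = time slots, columns = the two transmit branches
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Combine two received slots (flat channels h1, h2; noise omitted)."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2      # second-order diversity gain
    return s1_hat / gain, s2_hat / gain

s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # QPSK symbols
h1, h2 = (np.random.randn(2) + 1j * np.random.randn(2)) / np.sqrt(2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]            # slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]            # slot 2
print(alamouti_decode(r1, r2, h1, h2))      # recovers (s1, s2) noiselessly
```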

Keywords: space-time transmission, cooperative communication system, MIMO

Procedia PDF Downloads 336
18239 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model to a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we have simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, and the source was ignited at the bottom center by a spark. The other is a vented enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side; the test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results from the numerical simulations were compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data are in good agreement with the results of the two explosion tests.
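
For reference, the G-equation in such flamelet models tracks the premixed flame front as an iso-surface G = G₀; a standard level-set form (the paper's exact variant is not spelled out in the abstract) is:

```latex
\frac{\partial G}{\partial t} + \mathbf{u}\cdot\nabla G = s_L \, |\nabla G|
```

where u is the local flow velocity and s_L the laminar burning velocity.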

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 230
18238 The Behavior of Unsteady Non-Equilibrium Distribution Function and Exact Equilibrium Time for a Dilute Gas Mixture Affected by Thermal Radiation Field

Authors: Taha Zakaraia Abdel Wahid

Abstract:

In the present study, a development of previous papers is introduced: the behavior of the unsteady non-equilibrium distribution functions for a rarefied gas mixture under the effect of a non-linear thermal radiation field is presented, to the best of our knowledge for the first time. The distinctions and comparisons between the unsteady perturbed and the unsteady equilibrium velocity distribution functions are illustrated. The equilibrium time for the rarefied gas mixture is determined for the first time. The non-equilibrium thermodynamic properties of the system are investigated. The results are applied to the Argon-Neon binary gas mixture for various values of the molar fraction parameters and of the radiation field intensity. 3D graphics illustrating the calculated variables are drawn to predict their behavior, and the results are discussed.
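
As background to the "unsteady BGK model" named in the keywords, the BGK approximation replaces the Boltzmann collision integral with relaxation toward a local equilibrium distribution; a standard single-relaxation-time form (shown for orientation, not as the paper's exact mixture formulation) is:

```latex
\frac{\partial f}{\partial t} + \mathbf{c}\cdot\nabla_{\mathbf{r}} f = \frac{f_{\mathrm{eq}} - f}{\tau}
```

where f is the velocity distribution function, c the molecular velocity, f_eq the local equilibrium (Maxwellian) distribution, and τ the relaxation time.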

Keywords: radiation field, binary gas mixture, exact solutions, travelling wave method, unsteady BGK model, irreversible thermodynamics

Procedia PDF Downloads 431
18237 Internet Based Teleoperation of the Quad Rotor with Force Feedback Using Smith Predictor

Authors: K. Senthil Kumar, A. Vasumalaikannan

Abstract:

In this paper, teleoperation of a quadrotor over the Internet with force feedback is addressed. Teleoperation with force feedback is the ability to remotely control a robot while contact (obstacle) or environment (wind gust, etc.) information is communicated back from the quadrotor to the master joystick, thus giving the operator a sense of telepresence. The stability and performance of such a teleoperator are highly dependent on the amount of time delay present in the control loop. The problem is further complicated by the fact that, for network-based communication, the time delay is itself time-varying and highly non-deterministic. In this paper, stability is achieved by a novel method using a neural-network-based Smith predictor at the master side. The performance of the system, even in the worst-case scenario, remains within acceptable limits.
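
The sketch below shows the basic Smith predictor structure the method builds on: a delay-free plant model runs alongside the controller, and only the modelling error is fed back through the delay. The first-order plant, proportional controller, and delay length are hypothetical placeholders; the paper's neural variant would replace the fixed model with a learned one.

```python
# Minimal discrete-time Smith predictor sketch (illustrative assumptions only)
a, b = 0.95, 0.05        # assumed first-order plant: y[k+1] = a*y[k] + b*u[k-d]
delay = 20               # assumed round-trip network delay, in samples
kp = 2.0                 # simple proportional master controller

y = 0.0                  # remote plant output
y_model = 0.0            # delay-free model prediction
buf_u = [0.0] * delay    # control commands in flight to the plant
buf_ym = [0.0] * delay   # model outputs awaiting comparison with delayed y

setpoint, log = 1.0, []
for k in range(300):
    y_model_delayed = buf_ym.pop(0)          # model output from 'delay' steps ago
    # Smith correction: delay-free model output plus the modelling error
    feedback = y_model + (y - y_model_delayed)
    u = kp * (setpoint - feedback)
    y_model = a * y_model + b * u            # advance the delay-free model
    buf_ym.append(y_model)
    buf_u.append(u)
    y = a * y + b * buf_u.pop(0)             # plant receives the delayed command
    log.append(y)
```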

Keywords: teleoperation, quadrotor, neural smith predictor, time delay

Procedia PDF Downloads 600
18236 Design of Direct Power Controller for a High Power Neutral Point Clamped Converter Using Real-Time Simulator

Authors: Amin Zabihinejad, Philippe Viarouge

Abstract:

In this paper, direct power control (DPC) strategies have been investigated in order to control a high-power AC/DC converter with a time-variable load. This converter is composed of a three-level, three-phase neutral point clamped (NPC) converter as the rectifier and an H-bridge four-quadrant current-control converter. In high-power applications, the controller must not only adjust the desired outputs but also decrease the level of distortion injected into the network by the converter. For this reason, and because of the nonlinearity of the power electronic converter, conventional controllers cannot achieve appropriate responses. In this research, precise mathematical analysis has been employed to design an appropriate controller for the time-variable load. A DPC controller has been proposed and simulated using Matlab/Simulink, and a real-time simulator, OPAL-RT, has been employed to verify the simulation results. The dynamic response and stability of the high-power NPC with variable load have been investigated and compared with conventional types using the real-time simulator. The results proved that the DPC controller is more stable and has more precise outputs in comparison with the conventional controller.
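
As background, DPC schemes typically regulate the instantaneous active and reactive powers computed from measured voltages and currents in the stationary αβ frame; with an amplitude-invariant Clarke transform (the paper's exact scaling convention is not stated) these are:

```latex
p = \tfrac{3}{2}\left(v_\alpha i_\alpha + v_\beta i_\beta\right), \qquad
q = \tfrac{3}{2}\left(v_\beta i_\alpha - v_\alpha i_\beta\right)
```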

Keywords: direct power control, three level rectifier, real time simulator, high power application

Procedia PDF Downloads 503
18235 Time Overrun in Pre-Construction Planning Phase of Construction Projects

Authors: Hafiz Usama Imad, Muhammad Akram Akhund, Tauha Hussain Ali, Ali Raza Khoso, Fida Hussain Siddiqui

Abstract:

The construction industry plays a significant role in fulfilling the major requirements of human beings and is one of the major constituents of every developed country. Although the construction industries of both developing and developed countries encompass a major part of the economy, and millions of rupees are utilized every year on various kinds of construction projects, the industry is facing numerous hurdles in terms of budget and timely completion. Construction projects generally consist of several phases, such as planning, designing, execution, and finishing. This research study aims to determine the significant factors of time overrun in the pre-construction planning (PCP) phase of construction projects in Pakistan. Questionnaires were distributed by various means, the responses were compiled, and the collected data were analyzed with statistical techniques using SPSS version 24. Major causes of time overrun in the pre-construction planning phase, an extremely important phase of construction projects, were revealed. The conclusions provide a pathway for stakeholders to pay attention to these causes in order to overcome the major issue of time overrun.

Keywords: construction industry, Pakistan, pre-construction planning phase, time overrun

Procedia PDF Downloads 237
18234 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image

Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche

Abstract:

The main problem in the area of medical imaging has been image denoising. The most challenging aspect of image denoising is to preserve data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise to outperform all traditional approaches. However, these techniques are limited by the necessity of large training sample sizes and high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform), using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a lifting-based remodeling of the wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain the real and imaginary parts. This permits the transform to achieve approximate shift invariance and directionally selective filters, and reduces the computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
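
A minimal sketch of the hybrid idea on one sub-band of coefficients: a soft threshold followed by a Wiener-style attenuation. This illustrates the general shrinkage-plus-Wiener combination only; the paper's exact hybrid function and the LDTCWT itself are not reproduced, and the noise level and data are synthetic.

```python
import numpy as np

def hybrid_threshold(coeffs, sigma_n):
    # Universal (Donoho-Johnstone) threshold for the sub-band
    thr = sigma_n * np.sqrt(2.0 * np.log(coeffs.size))
    # Soft thresholding: shrink magnitudes toward zero
    soft = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
    # Wiener-style gain: signal variance / (signal + noise variance)
    sig_var = np.maximum(np.mean(soft ** 2) - sigma_n ** 2, 1e-12)
    return soft * sig_var / (sig_var + sigma_n ** 2)

rng = np.random.default_rng(0)
clean = rng.laplace(scale=2.0, size=1024)        # wavelet coefficients are heavy-tailed
noisy = clean + rng.normal(scale=0.5, size=1024)
denoised = hybrid_threshold(noisy, sigma_n=0.5)
```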

Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, wiener filter

Procedia PDF Downloads 147
18233 Achieving Product Robustness through Variation Simulation: An Industrial Case Study

Authors: Narendra Akhadkar, Philippe Delcambre

Abstract:

In power protection and control products, assembly process variations due to individual parts manufactured from single- or multi-cavity tooling are a major problem. The dimensional and geometrical variations of the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, of polarization effects in the joints, and of tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties; usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled product is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must respond quickly to break the circuit and clear the fault, usually within milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we have investigated two important functional characteristics associated with the robust performance of the product. The experimental data obtained at the Schneider Electric laboratory demonstrate the very good prediction capabilities of the variation simulation performed using CETOL (a tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts of the product that need to be manufactured with good, capable tolerances. Conversely, some parts are not critical for the functional characteristics (conditions) of the product, which may allow some reduction of the manufacturing cost while ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of the miniature circuit breaker (MCB), product quality and robustness are mainly impacted by two aspects: (1) the allocation of design tolerances between the components of a mechanical assembly and (2) the manufacturing tolerances in the intermediate machining steps of component fabrication.
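
A minimal Monte Carlo tolerance stack-up sketch of the kind of analysis dedicated tools perform on full 3D chains; the one-dimensional chain, nominal dimensions, tolerances, and functional condition below are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed linear chain: gap = housing length - three stacked parts.
# Tolerances are treated as +/-3-sigma bounds of normal distributions.
nominal = {"housing": 40.00, "part_a": 12.00, "part_b": 15.00, "part_c": 12.90}
tol = {"housing": 0.10, "part_a": 0.05, "part_b": 0.05, "part_c": 0.04}

samples = {k: rng.normal(nominal[k], tol[k] / 3.0, N) for k in nominal}
gap = samples["housing"] - samples["part_a"] - samples["part_b"] - samples["part_c"]

lo, hi = 0.02, 0.18      # assumed functional condition on the gap, mm
yield_frac = np.mean((gap > lo) & (gap < hi))
print(f"mean gap {gap.mean():.4f} mm, std {gap.std():.4f} mm, "
      f"yield {100 * yield_frac:.2f}%")
```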

Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation

Procedia PDF Downloads 148
18232 New Concept for Real Time Selective Harmonics Elimination Based on Lagrange Interpolation Polynomials

Authors: B. Makhlouf, O. Bouchhida, M. Nibouche, K. Laidi

Abstract:

A variety of methods for selective harmonics elimination pulse width modulation have been developed, the ones most frequently used for real-time implementation being based on look-up tables. To address real-time requirements, an approach based on a modified carrier signal is proposed in the present work, with a general formulation for real-time harmonics control/elimination in switched inverters. Firstly, the proposed method is demonstrated for a single value of the modulation index. In reality, however, this parameter is variable as a consequence of voltage (amplitude) variability. In this context, a simple interpolation method for calculating the modified sine carrier signal is proposed. The method allows a continuous adjustment of both the amplitude and the frequency of the fundamental. To assess the performance of the proposed method, software simulations and hardware experiments have been carried out in the case of a single-phase inverter. The results obtained are very satisfactory.
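
A small sketch of Lagrange interpolation as it might be used to obtain values between tabulated operating points; the node values below (carrier-signal data versus modulation index) are hypothetical placeholders, not the paper's tables.

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolation polynomial through
    (x_nodes, y_nodes) at point(s) x."""
    x = np.atleast_1d(x).astype(float)
    result = np.zeros_like(x)
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        basis = np.ones_like(x)
        for j, xj in enumerate(x_nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        result += yi * basis
    return result

# Hypothetical: carrier-signal values pre-computed offline for a few
# modulation indices; interpolate for an intermediate operating point.
m_nodes = np.array([0.6, 0.7, 0.8, 0.9])
c_nodes = np.array([0.42, 0.38, 0.33, 0.27])    # placeholder values
c_now = lagrange_eval(m_nodes, c_nodes, 0.75)
```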

Keywords: harmonic elimination, Particle Swarm Optimisation (PSO), polynomial interpolation, pulse width modulation, real-time harmonics control, voltage inverter

Procedia PDF Downloads 488
18231 Effect of Cooking Time, Seed-To-Water Ratio and Soaking Time on the Proximate Composition and Functional Properties of Tetracarpidium conophorum (Nigerian Walnut) Seeds

Authors: J. O. Idoko, C. N. Michael, T. O. Fasuan

Abstract:

This study investigated the effects of cooking time, seed-to-water ratio and soaking time on the proximate and functional properties of African walnut seed, using a Box-Behnken design with Response Surface Methodology (BBD-RSM), with a view to increasing its utilization in the food industry. African walnut seeds were sorted, washed, soaked, cooked, dehulled, sliced, dried and milled. Proximate analysis and functional properties of the samples were evaluated using standard procedures, and the data obtained were analyzed using descriptive and inferential statistics. Quadratic models were obtained to predict the proximate and functional qualities as functions of cooking time, seed-to-water ratio and soaking time. The results showed that crude protein ranged between 11.80% and 23.50%, moisture content between 1.00% and 4.66%, ash content between 3.35% and 5.25%, crude fibre from 0.10% to 7.25%, and carbohydrate from 1.22% to 29.35%. For the functional properties, soluble protein ranged from 16.26% to 42.96%, viscosity from 23.43 mPa·s to 57 mPa·s, emulsifying capacity from 17.14% to 39.43%, and water absorption capacity from 232% to 297%. An increase in the volume of water used during cooking resulted in loss of water-soluble protein through leaching; the length of soaking time and the moisture content of the dried product are inversely related; ash content is inversely related to the cooking time and the amount of water used; extraction of fat is enhanced by an increase in soaking time; and increases in cooking and soaking times result in a decrease in fibre content. The results obtained indicated that African walnut could be used in several food formulations as a protein supplement and binder.
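
The quadratic models typical of BBD-RSM take the standard second-order form below, with x₁ = cooking time, x₂ = seed-to-water ratio, x₃ = soaking time (the fitted coefficient values are reported in the paper, not here):

```latex
y = \beta_0 + \sum_{i=1}^{3}\beta_i x_i + \sum_{i=1}^{3}\beta_{ii} x_i^2
  + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon
```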

Keywords: African walnut, functional properties, proximate analysis, response surface methodology

Procedia PDF Downloads 376
18230 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and by bending moments, depending on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed a certain fatigue limit of the material, and to prevent damage, a numerical analysis approach can be utilized through the Finite Element Method. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict the high-cycle and low-cycle fatigue life associated with the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as an input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, they are compared with the endurance limit by applying the Soderberg approach; the constant-life straight line provides the limit for several combinations of mean and alternating stresses. A life calculation based on the S-N (stress versus number of cycles) curve is also applied with fully reversed loading, to determine the number of cycles that corresponds to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model for its fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for the fatigue-safe operation of the model.
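
A short sketch of the two checks named above: the Soderberg constant-life line and a Basquin-type S-N life estimate for fully reversed loading. All material constants and stress values are hypothetical placeholders, not the paper's data.

```python
# Soderberg line: sigma_a/S_e + sigma_m/S_y = 1/n  (n = safety factor)
S_e = 240.0   # endurance limit, MPa (assumed)
S_y = 500.0   # yield strength, MPa (assumed)

def soderberg_safety_factor(sigma_a, sigma_m):
    return 1.0 / (sigma_a / S_e + sigma_m / S_y)

def basquin_life(sigma_a, sigma_f=900.0, b=-0.1):
    """Basquin S-N relation sigma_a = sigma_f * (2N)^b, solved for N
    (fully reversed loading, zero mean stress; constants assumed)."""
    return 0.5 * (sigma_a / sigma_f) ** (1.0 / b)

n = soderberg_safety_factor(sigma_a=150.0, sigma_m=80.0)
N_cycles = basquin_life(sigma_a=150.0)
print(f"Soderberg safety factor: {n:.2f}, estimated life: {N_cycles:.3g} cycles")
```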

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 133
18229 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple

Authors: Hasan Basaran, Emre Unal

Abstract:

Formation flying of satellites is a mission concept that involves relative position keeping between different satellites in a constellation. In this study, different control algorithms are compared with one another in terms of ΔV (velocity increment) and tracking error. Various control methods, covering continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law based on mean orbital elements. The feedback linearization and sliding mode control approaches use identical mathematical models that include second-order Earth oblateness effects; the model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is done with four different initial errors and is evaluated in terms of velocity increment, root mean square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error; the continuous control laws, however, consumed higher velocity increments and produced smaller tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
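
For orientation, under the circular-chief assumption used by the model predictive controller, relative motion is commonly described by the Clohessy-Wiltshire (Hill) equations, with x radial, y along-track, z cross-track, n the chief's mean motion, and u the control acceleration:

```latex
\ddot{x} - 3n^2 x - 2n\dot{y} = u_x, \qquad
\ddot{y} + 2n\dot{x} = u_y, \qquad
\ddot{z} + n^2 z = u_z
```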

Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode

Procedia PDF Downloads 86
18228 Maximum Power Point Tracking Using FLC Tuned with GA

Authors: Mohamed Amine Haraoubia, Abdelaziz Hamzaoui, Najib Essounbouli

Abstract:

The pursuit of maximum power point tracking (MPPT) has led to the development of many kinds of controllers, one of which is the fuzzy logic controller, which has proven its worth. To further tune this controller, this paper discusses and analyzes the use of genetic algorithms to tune the fuzzy logic controller. It provides an introduction to both systems and tests their compatibility and performance.
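
A toy sketch of the GA-tunes-FLC idea: a population of membership-function parameter vectors is evolved against a fitness score. The encoding, fitness, and GA settings below are illustrative assumptions only; a real MPPT fitness would simulate the converter and measure the tracked power.

```python
import numpy as np

rng = np.random.default_rng(1)
POP, GENS, N_PARAMS = 20, 50, 6          # 6 membership-function centres in [0, 1]

def fitness(params):
    # Placeholder objective: reward orderly, evenly spread centres.
    spread = np.diff(np.sort(params))
    return -np.var(spread)

pop = rng.random((POP, N_PARAMS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]                # keep the best half
    children = parents[rng.integers(len(parents), size=POP - len(parents))]
    children = children + rng.normal(0.0, 0.05, children.shape)  # mutation
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])
best = pop[np.argmax([fitness(ind) for ind in pop])]
```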

Keywords: fuzzy logic controller, fuzzy logic, genetic algorithm, maximum power point, maximum power point tracking

Procedia PDF Downloads 357
18227 Controller Design for Active Suspension System of 1/4 Car with Unknown Mass and Time-Delay

Authors: Ali Al-Zughaibi

Abstract:

The purpose of this paper is to present the modeling and control of a quarter-car active suspension system with unknown mass, unknown time delay, and road disturbance. The objective is to design the controller by deriving a control law that achieves stability and convergence of the system, which can considerably improve ride comfort and road disturbance handling. This is accomplished by using the Routh-Hurwitz criterion and is based on some assumptions. A mathematical proof is given to show the ability of the designed controller to ensure the stability and convergence of the active suspension system, and to disperse the oscillations of the system, under unknown mass, time delay, and road disturbances. Simulations were also performed for the controlled quarter-car suspension, and the results obtained from these simulations verify the validity of the proposed design.
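
For reference, the standard two-degree-of-freedom quarter-car model underlying such designs, with sprung mass m_s, unsprung mass m_u, suspension stiffness k_s and damping c_s, tyre stiffness k_t, road profile x_r, and actuator force u (the paper's exact model and notation may differ):

```latex
m_s\ddot{x}_s = -k_s(x_s - x_u) - c_s(\dot{x}_s - \dot{x}_u) + u, \\
m_u\ddot{x}_u = k_s(x_s - x_u) + c_s(\dot{x}_s - \dot{x}_u) - k_t(x_u - x_r) - u
```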

Keywords: active suspension system, time-delay, disturbance rejection, dynamic uncertainty

Procedia PDF Downloads 305
18226 Effect of T6 and Re-Aging Heat Treatment on Mechanical Properties of 7055 Aluminum Alloy

Authors: M. Esmailian, M. Shakouri, A. Mottahedi, S. G. Shabestari

Abstract:

Heat-treatable aluminium alloys such as 7075 and 7055 are widely used in the aircraft industry because of their high strength and low density. For the best mechanical properties, the T6 heat treatment has been recommended in this regard, but this temper is sensitive to corrosion-induced damage and stress corrosion cracking (SCC). To improve this property, the over-aging treatment (T7) can be applied to the alloy, but it decreases the mechanical properties by up to 30 percent. Hence, to increase the mechanical properties without any remarkable decrease in SCC resistance, the retrogression and re-aging (RRA) heat treatment is used; this treatment is performed in a relatively short time. In this paper, the RRA heat treatment was applied to 7055 aluminum alloy, and the effect of the RRA time on the mechanical properties of 7055 was investigated. The results show that 40 minutes is a suitable retrogression time for 7055 aluminum alloy, with the ultimate strength increasing up to 625 MPa.

Keywords: 7055 Aluminum alloy, mechanical properties, SCC resistance, heat Treatment

Procedia PDF Downloads 415
18225 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction which penalises the model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three which are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles, conditional on equal posterior variances in lppds, were observed for the models. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
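
A compact sketch of the importance-sampling LOO estimate described above, starting from a matrix of pointwise log-likelihoods over posterior draws (synthetic numbers stand in for real MCMC output); TIS-LOO and PSIS-LOO differ only in how the largest raw weights are truncated or smoothed before the same weighted average.

```python
import numpy as np

rng = np.random.default_rng(0)
S, N = 4000, 50
log_lik = rng.normal(-1.0, 0.3, size=(S, N))   # placeholder log p(y_i | theta^s)

# Raw importance weights: reciprocals of the predictive densities,
# w_i^s proportional to 1 / p(y_i | theta^s).
log_w = -log_lik
log_w -= log_w.max(axis=0)                     # stabilise before exponentiating
w = np.exp(log_w)
w /= w.sum(axis=0)                             # self-normalise per observation

# LOO predictive density of each observation: weighted average over draws.
loo_dens = (w * np.exp(log_lik)).sum(axis=0)
elpd_loo = np.log(loo_dens).sum()
print(f"IS-LOO elpd estimate: {elpd_loo:.2f}")
```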

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 378
18224 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
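
A minimal sketch of the k-mer counting step on which such a representation rests (the paper's full pipeline, including resampling, k-mer size selection, and model training, is not reproduced here):

```python
from collections import Counter

def kmer_counts(seq: str, k: int = 10) -> Counter:
    """Count overlapping k-mers in a DNA sequence."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy example; real inputs are whole MTB genomes (~4.4 Mbp each).
counts = kmer_counts("ATGCGATACGCTTGAGATAGCGAT", k=10)
# Feature vectors for classification are then built from such counts per isolate.
```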

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 150
18222 Hourly Solar Radiations Predictions for Anticipatory Control of Electrically Heated Floor: Use of Online Weather Conditions Forecast

Authors: Helene Thieblemont, Fariborz Haghighat

Abstract:

Energy storage systems play a crucial role in decreasing building energy consumption during peak periods and in expanding the use of renewable energies in buildings. To provide high building thermal performance, the energy storage system has to be properly controlled to ensure good energy performance while maintaining satisfactory thermal comfort for the building's occupants. In the case of passive discharge storage, the required amount of energy has to be defined in advance to avoid overheating in the building. Consequently, anticipatory supervisory control strategies have been developed which forecast future energy demand and production in order to coordinate systems. Anticipatory supervisory control strategies are based on predictions, mainly of the weather. However, while the forecasted hourly outdoor temperature may be found online with high accuracy, solar radiation predictions are most of the time not available online. To estimate them, this paper proposes an advanced approach based on the forecast of weather conditions. Several methods for correlating hourly weather condition forecasts with real hourly solar radiation are compared. Results show that using weather condition forecasts allows estimating the solar radiation of the next day with acceptable accuracy. Moreover, this technique yields hourly data that may be used in building models. As a result, this solar radiation prediction model may help to implement model-based controllers such as Model Predictive Control.
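
One simple way to correlate forecast weather conditions with hourly solar radiation, sketched below, scales a clear-sky profile by a per-condition factor fitted from history; the condition labels, attenuation factors, and clear-sky shape are illustrative assumptions, not the paper's method.

```python
import numpy as np

conditions = {"sunny": 0, "partly_cloudy": 1, "cloudy": 2, "rain": 3}

def clear_sky(hour, peak=800.0):
    """Crude half-sine clear-sky irradiance profile (W/m²), daylight 06:00-18:00."""
    return max(0.0, peak * np.sin(np.pi * (hour - 6) / 12))

# Per-condition attenuation factors, fitted offline from paired history of
# (forecast condition, measured irradiance); the values here are placeholders.
factor = np.array([0.95, 0.65, 0.35, 0.15])

def predict_radiation(hour, condition):
    return factor[conditions[condition]] * clear_sky(hour)

tomorrow = [("sunny", h) for h in range(6, 12)] + [("cloudy", h) for h in range(12, 19)]
profile = [predict_radiation(h, c) for c, h in tomorrow]
```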

Keywords: anticipatory control, model predictive control, solar radiation forecast, thermal storage

Procedia PDF Downloads 257
18221 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows the use of real-time applications such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing, and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time EEG artifact correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB with the lab streaming layer interface, allowing buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied for correcting EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) of high-density EEG, while preserving brain neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300 – 700 ms, depending on the convergence of ICA and on the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
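
A minimal sketch of the variance-plus-adaptive-threshold step for flagging EMG-contaminated channels in a buffered segment (channels × samples); the robust-statistics threshold rule is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

def flag_artifact_channels(segment: np.ndarray, k: float = 3.0) -> np.ndarray:
    chan_var = segment.var(axis=1)
    # Adaptive threshold from robust statistics of the variance distribution
    med = np.median(chan_var)
    mad = np.median(np.abs(chan_var - med)) + 1e-12
    return chan_var > med + k * mad            # boolean mask of bad channels

rng = np.random.default_rng(0)
eeg = rng.normal(0, 10, size=(64, 1000))       # 64 channels, 1 s at 1 kHz
eeg[12] += rng.normal(0, 80, size=1000)        # inject a high-variance EMG burst
bad = flag_artifact_channels(eeg)              # flagged channels would then be
                                               # repaired by channel interpolation
```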

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 155
18220 Batch-Oriented Setting Time's Optimisation in an Aerodynamic Feeding System

Authors: Jan Busch, Maurice Schmidt, Peter Nyhuis

Abstract:

The change in conditions for production companies in high-wage countries is characterized by the globalization of competition and the transition from a supplier's to a buyer's market. These companies need to face the challenge of reacting flexibly to these changes. Due to the significant and increasing degree of automation, assembly has become the most expensive production process; regarding the reduction of production cost, assembly consequently offers considerable rationalizing potential. Therefore, an aerodynamic feeding system has been developed at the Institute of Production Systems and Logistics (IFA), Leibniz Universitaet Hannover. In former research activities, this system was enabled to adjust itself using a genetic algorithm: the longer the genetic algorithm is executed, the better the feeding quality. In this paper, the relation between the system's setting time and the feeding quality is observed, and a function which enables the user to achieve the minimum of the total feeding time is presented.

Keywords: aerodynamic feeding system, batch size, optimisation, setting time

Procedia PDF Downloads 245
18219 A Study on the Relationship Between Adult Videogaming and Wellbeing, Health, and Labor Supply

Authors: William Marquis, Fang Dong

Abstract:

There has been growing concern in recent years over the economic and social effects of adult video gaming. It has been estimated that the number of people who played video games during the COVID-19 pandemic is close to three billion, and there is evidence that this form of entertainment is here to stay. Many people are concerned that this growing use of time could crowd out time that could be spent on alternative forms of entertainment with family, friends, sports, and other social activities that build community. For example, recent studies of children suggest that playing video games crowds out time that could be spent on homework, watching TV, or other social activities. Similar studies of adults have shown that video gaming is negatively associated with earnings, time spent at work, and socializing with others. The primary objective of this paper is to examine how the time adults spend on video gaming could displace time they could spend working and on activities that enhance their health and well-being. We use data from the American Time Use Survey (ATUS), maintained by the Bureau of Labor Statistics, to analyze the effects of time-use decisions on three measures of well-being. We pool the ATUS Well-being Module for multiple years (2010, 2012, 2013, and 2021), along with the ATUS Activity and Who files for these years. This pooled data set provides three broad measures of well-being: health, life satisfaction, and emotional well-being. Seven variants of each are used as dependent variables in different multivariate regressions. We add to the existing literature in the following ways. First, we investigate whether the time adults spend in video gaming crowds out time spent working or in social activities that promote health and life satisfaction. Second, we investigate the relationship between adult gaming and emotional well-being, also known as negative or positive affect, a factor that is related to depression, health, and labor market productivity. The results of this study suggest that the time adult gamers spend on video gaming has no effect on their supply of labor, a negligible effect on their time spent socializing and studying, and mixed effects on their emotional well-being, such as increasing feelings of pain and reducing feelings of happiness and stress.

Keywords: online gaming, health, social capital, emotional wellbeing

Procedia PDF Downloads 26
18218 Water Droplet Impact on Vibrating Rigid Superhydrophobic Surfaces

Authors: Jingcheng Ma, Patricia B. Weisensee, Young H. Shin, Yujin Chang, Junjiao Tian, William P. King, Nenad Miljkovic

Abstract:

Water droplet impact on surfaces is a ubiquitous phenomenon in both nature and industry. The transfer of mass, momentum and energy can be influenced by the time of contact between droplet and surface. In order to reduce the contact time, we study the influence of substrate motion prior to impact on the dynamics of droplet recoil. Using optical high-speed imaging, we investigated the impact dynamics of macroscopic water droplets (~2 mm) on rigid nanostructured superhydrophobic surfaces vibrating at 60 – 300 Hz and amplitudes of 0 – 3 mm. In addition, we studied the influence of the phase of the substrate at the moment of impact on the total contact time. We demonstrate that substrate vibration can alter droplet dynamics and decrease the total contact time by as much as 50% compared to impact on stationary rigid superhydrophobic surfaces. Impact analysis revealed that the vibration frequency mainly affected the maximum contact time, while the amplitude of vibration had little direct effect on the contact time. Through mathematical modeling, we show that the oscillation amplitude influences the probability density function of droplet impact at a given phase, and thus indirectly influences the average contact time. We also observed more vigorous droplet splashing and breakup during impact at larger amplitudes. Through semi-empirical mathematical modeling, we describe the relationship between contact time and the vibration frequency, phase, and amplitude of the substrate. We also show that the maximum acceleration during the impact process is better suited as a threshold parameter for the onset of splashing than a Weber-number criterion. This study not only provides new insights into droplet impact physics on vibrating surfaces, but also develops guidelines for the rational design of surfaces to achieve controllable droplet wetting in applications utilizing vibration.
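
For context, droplet impact is conventionally characterised by the Weber number, and the natural contact time on a stationary superhydrophobic surface scales with the inertial-capillary time (ρ liquid density, v impact speed, D = 2R droplet diameter, σ surface tension):

```latex
\mathrm{We} = \frac{\rho v^2 D}{\sigma}, \qquad \tau \sim \sqrt{\frac{\rho R^3}{\sigma}}
```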

Keywords: contact time, impact dynamics, oscillation, pear-shape droplet

Procedia PDF Downloads 442
18217 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs (a second culturing stage of another 24 h), in order to determine the bacterial susceptibility. Other methods, such as genotyping methods, the E-test and automated methods, were also developed for testing antimicrobial susceptibility; most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatic analyses combined with IR spectroscopy becomes a powerful technique which enables the detection of structural changes associated with resistivity. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for the identification of antibiotic susceptibility.
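
A sketch of one common multivariate pipeline for this kind of spectral classification (dimensionality reduction followed by a classifier, evaluated by cross-validation); the paper's exact preprocessing and model are not specified here, and the spectra below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 900
X = rng.normal(size=(n_samples, n_wavenumbers))   # stand-in absorbance spectra
y = rng.integers(0, 2, size=n_samples)            # 0 = sensitive, 1 = resistant
X[y == 1, 100:120] += 0.8                         # synthetic resistance-linked band

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```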

Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 161
18216 Gender and Advertisements: A Content Analysis of Pakistani Prime Time Advertisements

Authors: Aaminah Hassan

Abstract:

Advertisements carry great potential to influence our lives because they are crafted to meet particular ends. Stereotypical representation in advertisements is capable of forming unconscious attitudes among people towards any gender and their abilities. This study focuses on gender representation in Pakistani prime time advertisements. For this purpose, 13 advertisements were selected from three different categories (foods and beverages, cosmetics, and cell phones and cellular networks) from the prime time slots of one of the leading Pakistani entertainment channels, ‘Urdu 1’. Both quantitative and qualitative analyses were carried out for a range of variables, such as gender, age, roles, activities, setting, appearance and voice-overs. The results revealed that gender representation in the advertisements is stereotypical. Moreover, in a few instances, the portrayal of women is not only culturally inappropriate but also demeaning to the image of women; their bodily charm is used to promote products. Comparing different entertainment channels for their prime time advertisements and broadening the scope of this research would yield greater implications for researchers who want to carry out similar research. It is hoped that the current study will help in the promotion of media literacy among viewers and media authorities in Pakistan.

Keywords: advertisements, content analysis, gender, prime time

Procedia PDF Downloads 199
18215 Experimental and Theoretical Methods to Increase Core Damping for Sandwich Cantilever Beam

Authors: Iyd Eqqab Maree, Moouyad Ibrahim Abbood

Abstract:

The purpose of this study is to predict the damping effect for a steel cantilever beam by using two methods of passive viscoelastic constrained-layer damping. The first method is a Matlab program; this method depends on the Ross, Kerwin and Ungar (RKU) model for passive viscoelastic damping. The second method is an experimental laboratory (frequency domain) method, in which the half-power bandwidth method is used to determine the system loss factors for the damped steel cantilever beam. The RKU method has been applied to a cantilever beam because the beam is a major part of a structure, and this prediction may further be utilized for different kinds of structural applications according to design requirements in many industries. In this method of damping, a simple cantilever beam is treated by making a sandwich structure to make the beam damped, which is usually done by using a viscoelastic material as a core to ensure the damping effect. The use of viscoelastic layers constrained between elastic layers is known to be effective for damping flexural vibrations of structures over a wide range of frequencies. The energy dissipated in these arrangements is due to shear deformation in the viscoelastic layers, which occurs due to the flexural vibration of the structures. The theory of dynamic stability of elastic systems deals with the study of vibrations induced by pulsating loads that are parametric with respect to certain forms of deformation. There is very good agreement of the experimental results with the theoretical findings. The main idea of this study is to find the transition region for the damped steel cantilever beam (4 mm and 8 mm thickness) from the experimental laboratory work and from the theoretical prediction (Matlab R2011a). It was shown experimentally and theoretically that the transition region for the two specimens occurs at a modal frequency between mode 1 and mode 2, which gives the best damping, the maximum loss factor and the maximum damping ratio; thus this type of viscoelastic material core (3M 468) is very appropriate for use in the automotive industry and in any mechanical application whose modal frequency falls between mode 1 and mode 2.
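
A short sketch of the half-power bandwidth estimate of the loss factor from a measured frequency response magnitude |H(f)|; the single-mode FRF below is synthetic, with an assumed modal frequency and damping standing in for the beam measurements.

```python
import numpy as np

def loss_factor_half_power(freqs, mag):
    i_peak = np.argmax(mag)
    half = mag[i_peak] / np.sqrt(2.0)            # -3 dB level
    left = freqs[:i_peak][mag[:i_peak] >= half][0]
    right = freqs[i_peak:][mag[i_peak:] >= half][-1]
    return (right - left) / freqs[i_peak]        # eta = delta_f / f_n

f = np.linspace(10.0, 200.0, 5000)
fn, zeta = 80.0, 0.025                           # assumed mode: eta = 2*zeta = 0.05
r = f / fn
H = 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)
print(f"estimated loss factor: {loss_factor_half_power(f, H):.3f}")
```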

Keywords: 3M-468 material core, loss factor, frequency domain method, MATLAB

Procedia PDF Downloads 257
18214 Structural Strength Evaluation and Wear Prediction of Double Helix Steel Wire Ropes for Heavy Machinery

Authors: Krunal Thakar

Abstract:

Wire ropes combine high tensile strength and flexibility compared to other general steel products. They are used in various application areas such as cranes, mining, elevators, bridges, and cable cars. The earliest reported use of wire ropes was for mining hoist applications in the 1830s. Over this period, there have been substantial advancements in the design of wire ropes for various application areas. Under operational conditions, wire ropes are subjected to varying tensile and bending loads, resulting in material wear and eventual structural failure due to fretting fatigue. Conventional inspection methods to determine wire failure are limited to the outer wires of the rope. Moreover, to date, there is no effective mathematical model to examine the inter-wire contact forces and wear characteristics. The scope of this paper is to present a computational simulation technique to evaluate inter-wire contact forces and wear, which are in many cases responsible for rope failure. Two different types of rope, IWRC-6xFi(29) and U3xSeS(48), were taken for structural strength evaluation and wear prediction. Both ropes have a double-helix twisted wire profile as per JIS standards and are mainly used in cranes. CAD models of both ropes were developed in general-purpose design software, using an in-house developed formulation to generate the double-helix profile. Numerical simulation was done under two different load cases: (a) axial tension and (b) bending over sheave. Different parameters such as stresses, contact forces, wear depth, and load-elongation were investigated and compared between both ropes. The numerical simulation method facilitates detailed investigation of inter-wire contact and wear characteristics. In addition, various selection parameters like sheave diameter, rope diameter, helix angle, swaging, and maximum load carrying capacity can be quickly analyzed.
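
A sketch of the double-helix centreline geometry used to build such CAD models: the wire winds (angle φ) about a strand axis that itself winds (angle θ) about the rope axis. This simplified parametrization neglects the tilt of the strand's local frame (exact only for small strand helix angles), and the radii and lay lengths are illustrative, not the JIS rope dimensions.

```python
import numpy as np

def double_helix(z, R_strand=6.0, p_strand=80.0, r_wire=1.5, p_wire=30.0):
    """Wire centreline coordinates at rope axial position z (small-angle
    approximation: the wire's axial offset within the strand is neglected)."""
    theta = 2 * np.pi * z / p_strand       # strand rotation about the rope axis
    phi = 2 * np.pi * z / p_wire           # wire rotation about the strand axis
    radial = R_strand + r_wire * np.cos(phi)
    return radial * np.cos(theta), radial * np.sin(theta), z

z = np.linspace(0.0, 160.0, 2000)          # two strand lay lengths
x, y, zz = double_helix(z)                 # points for a CAD spline or FE mesh
```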

Keywords: steel wire ropes, numerical simulation, material wear, structural strength, axial tension, bending over sheave

Procedia PDF Downloads 141