Search results for: time prediction algorithms

18657 Study of the Persian Gulf’s and Oman Sea’s Numerical Tidal Currents

Authors: Fatemeh Sadat Sharifi

Abstract:

In this research, a barotropic model in which tidal forcing is the only driving force was employed to study the tides of the Persian Gulf and Oman Sea. A finite-difference, free-surface model, the Regional Ocean Modeling System (ROMS), was applied to data covering the Persian Gulf and Oman Sea. To analyze the flow patterns of the region, results from a limited-area configuration of the Finite Volume Community Ocean Model (FVCOM) were used. These two water bodies were chosen because both are among the most critical in terms of economy, biology, fishery, shipping, navigation, and petroleum extraction. The modeled results were validated against OSU Tidal Prediction Software (OTPS) tides and observation data. Tidal elevation and speed were then interpreted, and a tidal analysis was carried out. Preliminary results show good accuracy in tidal height compared with observation and OTPS data, and indicate that tidal currents are strongest in the Strait of Hormuz and in the narrow, shallow region between the Iranian coast and the islands. Furthermore, the tidal analysis shows that the M2 component has the largest amplitude. Finally, the Persian Gulf tidal currents divide into two branches: the first runs south toward Qatar and turns via the United Arab Emirates toward the Strait of Hormuz; the second, in the north and west, extends to the head of the Gulf, where it turns counterclockwise.
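
The harmonic-analysis step can be illustrated with a short sketch: a single constituent (M2, period about 12.42 h) is fitted to an elevation time series by linear least squares. The data below are synthetic stand-ins, not the paper's ROMS output or OTPS records.

```python
import numpy as np

# Fit one tidal constituent (M2) to an elevation record by least squares:
# eta(t) ~ a*cos(omega*t) + b*sin(omega*t) + z0.
T_M2 = 12.4206          # M2 period in hours
omega = 2 * np.pi / T_M2

t = np.arange(0, 30 * 24, 1.0)                  # 30 days of hourly samples
rng = np.random.default_rng(0)
eta = 0.8 * np.cos(omega * t - 1.2) + 0.05 * rng.standard_normal(t.size)

A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
(a, b, z0), *_ = np.linalg.lstsq(A, eta, rcond=None)

amplitude = np.hypot(a, b)      # constituent amplitude (m)
phase = np.arctan2(b, a)        # phase lag (rad)
print(f"M2 amplitude ~ {amplitude:.3f} m, phase ~ {phase:.3f} rad")
```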

Keywords: numerical model, barotropic tide, tidal currents, OSU tidal prediction software, OTPS

Procedia PDF Downloads 110
18656 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) to detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores, tunable false-positive and false-negative rates, and automated feedback. An initial approach using natural language processing (NLP) techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) of Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology on two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction, and the approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen code. State-of-the-art models using NLP techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. Predicting vulnerabilities in source code using machine learning nevertheless poses challenges, such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
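
To make the path-context idea concrete, here is a hedged sketch of Code2Vec-style extraction. The paper works on Java and C++ ASTs; this illustration uses Python's standard-library ast module instead, and the token and path conventions are simplifications, not the paper's pipeline.

```python
import ast
from itertools import combinations

# Code2Vec-style path-contexts: (token_a, path-between-leaves, token_b).
# Python's ast module stands in here for the Java/C++ front ends.

def leaf_paths(tree):
    """Collect (leaf_token, [node types from root to leaf]) pairs."""
    leaves = []
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, ast.Name):          # identifiers are leaves
            leaves.append((node.id, path))
            return
        children = list(ast.iter_child_nodes(node))
        if not children:
            token = getattr(node, "arg", getattr(node, "value",
                                                 type(node).__name__))
            leaves.append((str(token), path))
            return
        for child in children:
            walk(child, path)
    walk(tree, [])
    return leaves

def path_contexts(source):
    contexts = []
    for (tok_a, pa), (tok_b, pb) in combinations(leaf_paths(ast.parse(source)), 2):
        i = 0                                    # longest common prefix = LCA
        while i < min(len(pa), len(pb)) and pa[i] == pb[i]:
            i += 1
        path = "^".join(reversed(pa[i - 1:])) + "_" + "_".join(pb[i:])
        contexts.append((tok_a, path, tok_b))
    return contexts

for ctx in path_contexts("def f(x):\n    return x + 1")[:3]:
    print(ctx)
```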

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 89
18655 Implementation of CNV-CH Algorithm Using Map-Reduce Approach

Authors: Aishik Deb, Rituparna Sinha

Abstract:

We have developed an algorithm to detect abnormal segments (structural variations) in the genome across a number of samples. We have worked on simulated as well as real data from BAM files and have designed a segmentation algorithm that detects abnormal segments. The algorithm aims to improve the accuracy and performance of the existing CNV-CH algorithm. The next-generation sequencing (NGS) approach is very fast and can generate large sequences in a reasonable time, and the resulting volume of sequence information gives rise to the need for Big Data and parallel approaches to segmentation. We have therefore designed a map-reduce version of the existing CNV-CH algorithm in which large amounts of sequence data can be segmented and structural variations in the human genome detected. We have compared the efficiency of the traditional and map-reduce algorithms with respect to precision, sensitivity, and F-score. The advantages of our algorithm are its speed and better accuracy. The algorithm can be applied to detect structural variations within a genome, which in turn can be used to detect various genetic disorders such as cancer. Such defects may be caused by new mutations or changes to the DNA and generally result in abnormally high or low base coverage and quantification values.
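
The division of labour in a map-reduce segmentation pass can be sketched as follows. This is not the CNV-CH convex-hull method itself: the mapper bins read positions and the reducer merely flags bins with abnormal coverage; the bin size and threshold are illustrative choices, not values from the paper.

```python
from collections import defaultdict
from statistics import mean, stdev

BIN = 1_000  # bases per bin

def map_phase(reads):
    """Mapper: emit ((chrom, bin_index), 1) for each aligned read start."""
    for chrom, pos in reads:
        yield (chrom, pos // BIN), 1

def reduce_phase(pairs, z_cut=3.0):
    """Reducer: sum per-bin coverage, then flag strongly deviating bins."""
    coverage = defaultdict(int)
    for key, count in pairs:
        coverage[key] += count
    mu, sigma = mean(coverage.values()), stdev(coverage.values())
    return [key for key, c in coverage.items()
            if abs(c - mu) > z_cut * sigma]

# Synthetic reads on chromosome 1, with an amplified region near 50 kb.
reads = [("chr1", p) for p in range(0, 100_000, 20)]
reads += [("chr1", p) for p in range(50_000, 51_000, 2)]  # extra coverage
print(reduce_phase(map_phase(reads)))   # -> [('chr1', 50)]
```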

Keywords: cancer detection, convex hull segmentation, map reduce, next generation sequencing

Procedia PDF Downloads 115
18654 Task Evoked Pupillary Response for Surgical Task Difficulty Prediction via Multitask Learning

Authors: Beilei Xu, Wencheng Wu, Lei Lin, Rachel Melnyk, Ahmed Ghazi

Abstract:

In operating rooms, excessive cognitive stress can impede a surgeon's performance, while low engagement can lead to mistakes due to complacency. Consequently, there is a strong desire in the surgical community to be able to monitor and quantify a surgeon's cognitive stress while performing surgical procedures. Quantitative cognitive-load-based feedback can also provide valuable insights during surgical training, optimizing training efficiency and effectiveness. Various physiological measures have been evaluated for quantifying cognitive stress under different mental challenges. In this paper, we present a study that uses cognitive stress, measured via the task-evoked pupillary response extracted from time-series eye-tracking measurements, to predict task difficulty in a virtual-reality-based robotic surgery training environment. In particular, we propose a differential task-difficulty scale, employ a comprehensive feature-extraction approach, implement a multitask learning framework, and compare the regression accuracy of the conventional single-task approach and three multitask approaches across subjects.
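
A minimal sketch of the multitask setup, assuming a shared trunk with one regression head per task; the feature width, head count, and joint MSE objective are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

N_FEATURES, N_TASKS = 32, 3   # e.g. TSFresh-style pupil features, 3 tasks

class MultitaskRegressor(nn.Module):
    """Shared trunk learns common features; each task gets its own head."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(16, 1) for _ in range(N_TASKS))

    def forward(self, x):
        h = self.trunk(x)
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MultitaskRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(128, N_FEATURES)   # synthetic feature vectors
y = torch.randn(128, N_TASKS)      # per-task difficulty labels
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)    # joint loss across all task heads
    loss.backward()
    opt.step()
```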

Keywords: surgical metric, task evoked pupillary response, multitask learning, TSFresh

Procedia PDF Downloads 122
18653 Tabu Search Algorithm for Ship Routing and Scheduling Problem with Time Window

Authors: Khaled Moh. Alhamad

Abstract:

This paper describes a tabu search heuristic for a ship routing and scheduling problem (SRSP). The method was developed to address the problem of loading cargoes for many customers using heterogeneous vessels. Constraints relate to delivery time windows imposed by customers, the time horizon by which all deliveries must be made, and vessel capacities. The results of a computational investigation are presented. Solution quality and execution time are explored with respect to problem size and the parameters controlling the tabu search, such as tenure and neighbourhood size.
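
For reference, a generic tabu-search skeleton of the kind applied here. The move operator, tenure, and toy cost function are placeholders: a real SRSP solver would move cargoes between vessels and check time windows and capacities inside the cost function.

```python
def tabu_search(initial, neighbours, cost, tenure=7, iters=500):
    current = best = initial
    tabu = []                             # recently visited solutions
    for _ in range(iters):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:            # expire the oldest tabu entry
            tabu.pop(0)
        if cost(current) < cost(best):    # track the best solution seen
            best = current
    return best

# Toy demo: minimise a 1-D cost over integer "schedules".
cost = lambda s: (s - 42) ** 2
neighbours = lambda s: [s - 3, s - 1, s + 1, s + 3]
print(tabu_search(initial=0, neighbours=neighbours, cost=cost))  # -> 42
```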

Keywords: heuristic, scheduling, tabu search, transportation

Procedia PDF Downloads 490
18652 A Study of Behavioral Phenomena Using an Artificial Neural Network

Authors: Yudhajit Datta

Abstract:

Will is a phenomenon that has puzzled humanity for a long time. It is commonly believed that an individual's will power affects the success that person achieves in life: a person endowed with great will power can overcome even the most crippling setbacks, while a person with a weak will cannot make the most of life even with the greatest assets. Behavioral aspects of the human experience such as will are rarely subjected to quantitative study, owing to the numerous uncontrollable parameters involved. This work is an attempt to subject the phenomenon of will to the test of an artificial neural network. The claim being tested is that the will power of an individual largely determines the success achieved in life. In the study, an attempt is made to incorporate the behavioral phenomenon of will into a computational model using data on the success of individuals obtained from an experiment. A neural network is trained using data based on part of the model and subsequently used to make predictions of will corresponding to data points of success. If a prediction agrees with the model values, the model is retained as a candidate. Ultimately, the best-fit model from among the many candidates is selected and used to study the correlation between success and will.

Keywords: will power, will, success, apathy factor, random factor, characteristic function, life story

Procedia PDF Downloads 365
18651 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be the better noise cancellation technique for speech signals.
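
A minimal RLS adaptive noise canceller, the building block evaluated in the paper. The filter order, forgetting factor, and synthetic signals are illustrative assumptions; the reported experiments use speech corrupted with AWGN, babble, and pink noise at the stated SNR levels.

```python
import numpy as np

def rls_cancel(reference, noisy, order=8, lam=0.99, delta=100.0):
    """Estimate noise from `reference` and subtract it from `noisy`."""
    w = np.zeros(order)
    P = np.eye(order) * delta            # inverse correlation estimate
    out = np.zeros_like(noisy)
    for n in range(order, len(noisy)):
        x = reference[n - order:n][::-1] # regressor: recent ref samples
        k = P @ x / (lam + x @ P @ x)    # RLS gain vector
        e = noisy[n] - w @ x             # a-priori error = speech estimate
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        out[n] = e
    return out

rng = np.random.default_rng(1)
t = np.arange(8000) / 8000
speech = np.sin(2 * np.pi * 300 * t)     # stand-in for clean speech
noise = rng.standard_normal(t.size)
noisy = speech + 0.5 * noise             # corrupted input
enhanced = rls_cancel(reference=noise, noisy=noisy)
mse = np.mean((enhanced[100:] - speech[100:]) ** 2)
print(f"MSE after RLS cancellation ~ {mse:.4f}")
```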

Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss

Procedia PDF Downloads 458
18650 The Conceptual Design Model of an Automated Supermarket

Authors: V. Sathya Narayanan, P. Sidharth, V. R. Sanal Kumar

Abstract:

The success of any retail business is predisposed by its swift response and its knack for understanding the constraints and requirements of customers. In this paper, a conceptual design model of an automated, customer-friendly supermarket is proposed. In this model, 10-sided, space-saving, regular-polygon-shaped gravity shelves are designed for goods storage, and effective customer-specific algorithms are built in for quick automatic delivery of randomly listed goods. The algorithm is developed with two main objectives, viz., delivery time and priority. To meet these objectives, the randomly listed items are reorganized according to the critical path of the robotic arm, specific to the identified shop and its layout, and the items are categorized according to demand, shape, size, similarity, and the nature of the product for an efficient pick-up, packing, and delivery process. We conjecture that the proposed automated supermarket model reduces business operating costs with much customer satisfaction, warranting a win-win situation.

Keywords: automated supermarket, electronic shopping, polygon-shaped rack, shortest path algorithm for shopping

Procedia PDF Downloads 383
18649 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)

Authors: Ayobami Solomon Popoola

Abstract:

Yam is a predominantly carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, and freezing. Loss of soluble solids during blanching is a serious problem because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of Maillard reactions of the reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment of the blanching water. This paper addresses the problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water-yam. The study was carried out using four temperature levels (65, 70, 80, and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15, and 20 min). The obtained data were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain activation energies (Ea values) for the diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature dependent: ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged from 68.2 to 73.9 kJ mol⁻¹ for reducing sugar losses and from 7.2 to 14.3 kJ mol⁻¹ for total sugar losses. Predictive equations for estimating the amounts of reducing sugars and total sugars as functions of blanching time at various temperatures are also presented; these equations could be valuable in process design and optimization. However, the amount of other soluble solids that might have leached into the water along with the reducing and total sugars during blanching was not investigated in this study.
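
The Arrhenius step can be shown as a short worked example: ln(D) is regressed against 1/T, and the slope gives -Ea/R. The D values below are synthetic placeholders in the paper's reported range, not the study's measured data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

T = np.array([65.0, 70.0, 80.0, 90.0]) + 273.15      # blanching temps (K)
D = np.array([1.0e-9, 1.3e-9, 2.2e-9, 3.5e-9])       # m^2 s^-1 (synthetic)

# Arrhenius: ln(D) = ln(D0) - Ea / (R * T); linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R / 1000.0                              # kJ mol^-1
D0 = np.exp(intercept)
print(f"Ea ~ {Ea:.1f} kJ/mol, D0 ~ {D0:.2e} m^2/s")
```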

Keywords: blanching, kinetics, sugar losses, water yam

Procedia PDF Downloads 146
18648 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work performed in computer science and that of psychology, due to small datasets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase 2, which uses the interview questions for the five-factor model (FFM) of personality developed in phase 1. This study aims to develop semi-structured interview and open-ended questions for FFM-based personality assessments, designed with experts in clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions together with self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the FFM personality constructs, and psychological distress, to demonstrate the validity of natural-language-based personality prediction. Methods: The phase 1 (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data and depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals' FFM personality and level of psychological distress (phase 2).

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 63
18647 Optimization Analysis of Controlled Cooling Process for H-Shape Steel Beams

Authors: Jiin-Yuh Jang, Yu-Feng Gan

Abstract:

In order to improve the comprehensive mechanical properties of steel, the cooling rate and the temperature distribution must be controlled during the cooling process. A three-dimensional numerical model for predicting the heat transfer coefficient distribution of an H-beam in the controlled cooling process was developed in order to obtain a uniform temperature distribution and minimize the maximum stress and maximum deformation after controlled cooling. An algorithm based on a simplified conjugate-gradient method was used as the optimizer for the heat transfer coefficient distribution. The numerical results showed that, for the case of air cooling for 5 seconds followed by water cooling for 6 seconds with a uniform heat transfer coefficient, the cooling rate is 15.5 ℃/s, the maximum temperature difference is 85 ℃, the maximum stress is 125 MPa, and the maximum deformation is 1.280 mm. After optimizing the heat transfer coefficient distribution in the controlled cooling process with the same cooling time, the cooling rate increases to 20.5 ℃/s, the maximum temperature difference decreases to 52 ℃, the maximum stress decreases to 82 MPa, and the maximum deformation decreases to 1.167 mm.
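
A sketch of a simplified conjugate-gradient update of the kind used as the optimizer, with Fletcher-Reeves directions and a fixed step. The quadratic toy objective stands in for the paper's 3-D heat-transfer model of the H-beam.

```python
import numpy as np

def scg_minimize(grad, x0, step=0.1, iters=50):
    """Fixed-step conjugate-gradient descent with Fletcher-Reeves beta."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(iters):
        x = x + step * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d               # conjugate direction update
        g = g_new
    return x

# Toy stand-in objective: f(h) = sum((h - target)^2) over heat transfer
# coefficients h along the beam surface (target values are invented).
target = np.array([900.0, 1200.0, 800.0])
grad = lambda h: 2.0 * (h - target)
print(scg_minimize(grad, x0=np.zeros(3)))   # converges toward target
```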

Keywords: controlled cooling, H-Beam, optimization, thermal stress

Procedia PDF Downloads 349
18646 The Application of Bayesian Heuristic for Scheduling in Real-Time Private Clouds

Authors: Sahar Sohrabi

Abstract:

The emergence of Cloud data centers has revolutionized the IT industry. Private Clouds in particular provide Cloud services for a certain group of customers/businesses. In a real-time private Cloud, each task given to the system has a deadline that desirably should not be violated. Scheduling tasks in a real-time private Cloud determines the way available resources in the system are shared among incoming tasks. The aim of the scheduling policy is to optimize the system outcome, which for a real-time private Cloud can include energy consumption, deadline violations, execution time, and the number of host switches. Different scheduling policies can be used, each leading to a sub-optimal outcome in certain settings of the system. A Bayesian scheduling strategy is proposed to further improve the system outcome. The Bayesian strategy was shown to outperform all selected policies; it also has the flexibility to deal with complex patterns of incoming tasks and the ability to adapt.
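
The abstract does not publish the internals of the Bayesian strategy, so the following is one hedged reading: keep a Beta posterior over each base policy's probability of meeting deadlines and pick policies by Thompson sampling. The policy names and success rates are invented for the demo.

```python
import random

policies = {"earliest-deadline": 0.80, "round-robin": 0.60, "greedy-energy": 0.70}
posterior = {p: [1, 1] for p in policies}   # Beta(alpha, beta) per policy

def schedule_next_task():
    # Sample a deadline-hit rate from each posterior; act on the best draw.
    draws = {p: random.betavariate(a, b) for p, (a, b) in posterior.items()}
    return max(draws, key=draws.get)

random.seed(7)
for _ in range(2000):
    p = schedule_next_task()
    met_deadline = random.random() < policies[p]     # simulated outcome
    posterior[p][0 if met_deadline else 1] += 1      # Bayesian update

# Posterior mean hit rate per policy; the best policy dominates selection.
print({p: round(a / (a + b), 3) for p, (a, b) in posterior.items()})
```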

Keywords: cloud computing, scheduling, real-time private cloud, Bayesian

Procedia PDF Downloads 340
18645 Bright Light Effects on Concentration and Diffuse-Attention Reaction Time, Tension, Anger, Fatigue, and Alertness among Shift Workers

Authors: Mohammad Imani, Jabraeil Nasl Seraji, Abolfazl Zakerian

Abstract:

Background: Reaction time is the amount of time it takes to respond to a stimulus, that is, the time that passes between the introduction of a stimulus and the subject's reaction to it. The aim of this interventional study was to evaluate the effects of bright light on concentration and diffuse-attention reaction time, tension, anger, fatigue, and alertness among shift workers. Several stimuli can reduce or increase reaction time; bright light, as an environmental factor, can reduce it. Material & Method: This cross-sectional descriptive study was conducted in 1391 on 88 subjects (44 fixed morning workers and 44 shift workers). After a randomly selected sample-size calculation, a concentration and diffuse-attention (reaction time) test was administered at eight times over 24 hours (13:00, 16:00, 19:00, 22:00, 01:00, 04:00, 07:00, and 10:00) under ordinary light. After the intervention, using bright light (4500 lux), the reaction time test was repeated. After analysis by the ELISA method, the obtained data were analyzed with the statistical software SPSS 19 using t-tests and ANOVA. Results: There was no significant relationship (95% CI, p > 0.05) between the mean reaction times of fixed morning workers under ordinary light and shift workers under bright light. After the intervention and the use of bright light (4500 lux), there was a significant relationship (95% CI, p < 0.05) between the mean concentration and diffuse-attention reaction times of fixed morning workers under ordinary light and shift workers under bright light. Conclusion: At some times of the 24-hour day under ordinary light exposure, the concentration and diffuse-attention reaction time of shift workers changed. After the intervention, during bright light (4500 lux) exposure delivered as a light shower, concentration and diffuse-attention reaction time, tension, anger, and fatigue decreased.

Keywords: bright light, reaction time, tension, anger, fatigue, alertness

Procedia PDF Downloads 355
18644 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the "curse of correlation", which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of KDDCup99, produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve.
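
Since the abstract does not spell out the consensus mechanics, the sketch below is one hedged reading of a rank-based chain-mode consensus: members are ranked by validation quality, and the decision is handed down the chain, with an override allowed only on stronger evidence.

```python
def chain_consensus(classifiers, x):
    """classifiers: list of (rank_score, predict_proba) pairs."""
    ranked = sorted(classifiers, key=lambda c: c[0], reverse=True)
    label, confidence = None, 0.0
    for rank_score, predict_proba in ranked:
        p1 = predict_proba(x)                  # P(class = 1)
        cand_label = int(p1 >= 0.5)
        cand_conf = max(p1, 1.0 - p1) * rank_score
        if cand_conf > confidence:             # override only with more evidence
            label, confidence = cand_label, cand_conf
    return label

# Toy demo with three hand-made "classifiers" of decreasing quality.
clfs = [
    (0.90, lambda x: 0.8 if x > 0 else 0.2),
    (0.75, lambda x: 0.6 if x > -1 else 0.4),
    (0.60, lambda x: 0.3),
]
print(chain_consensus(clfs, x=2.0))   # -> 1
```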

Keywords: consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble

Procedia PDF Downloads 115
18643 Q-Learning-Based Path Planning Approach for Unmanned Aerial Vehicles in a Dynamic Environment

Authors: Raja Jarray, Imen Zaghbani, Soufiene Bouallègue

Abstract:

Path planning for Unmanned Aerial Vehicles (UAVs) in dynamic environments poses a significant challenge. Adapting planning algorithms to these complex environments with moving obstacles is a major task in real-world robotics. This article introduces a path-planning strategy based on a Q-learning algorithm, which enables an effective response to avoid moving obstacles while ensuring mission feasibility. A dynamic reward function is introduced, causing the UAV to use the real-time distance between its current position and the destination as training data. The objective of the proposed Q-learning-based path planning algorithm is to guide the drone through an optimal flight itinerary in a dynamic, collision-free environment. The proposed Q-learning-based UAV planner is evaluated considering numerous commonly used performance metrics. Demonstrative results are provided and discussed to show the effectiveness and practicability of such an artificial intelligence-based path planning approach.
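
A minimal Q-learning sketch on a grid world with a distance-shaped reward, mirroring the idea of rewarding progress toward the destination. The grid size and learning parameters are illustrative, and this static grid omits the paper's moving obstacles.

```python
import numpy as np

SIZE, ACTIONS = 10, [(0, 1), (0, -1), (1, 0), (-1, 0)]
GOAL = (9, 9)
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = np.random.default_rng(3)

def step(s, a):
    nx = min(max(s[0] + ACTIONS[a][0], 0), SIZE - 1)
    ny = min(max(s[1] + ACTIONS[a][1], 0), SIZE - 1)
    # Dynamic reward: the closer to the goal, the larger the reward.
    reward = -np.hypot(GOAL[0] - nx, GOAL[1] - ny)
    return (nx, ny), reward, (nx, ny) == GOAL

for _ in range(500):                      # training episodes
    s, done = (0, 0), False
    while not done:
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps \
            else int(np.argmax(Q[s[0], s[1]]))     # epsilon-greedy
        ns, r, done = step(s, a)
        target = r + gamma * np.max(Q[ns[0], ns[1]]) * (not done)
        Q[s[0], s[1], a] += alpha * (target - Q[s[0], s[1], a])
        s = ns

print("Greedy action at start:", ACTIONS[int(np.argmax(Q[0, 0]))])
```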

Keywords: unmanned aerial vehicles, dynamic path planning, moving obstacles, reinforcement learning, Q-learning

Procedia PDF Downloads 22
18642 Generating Real-Time Visual Summaries from Located Sensor-Based Data with Chorems

Authors: Z. Bouattou, R. Laurini, H. Belbachir

Abstract:

This paper describes a new approach for the automatic generation of visual summaries, combining cartographic visualization methods with real-time sensor data modeling. The concept of chorems seems an interesting candidate for visualizing real-time summaries of geographic databases. Chorems were defined by Roger Brunet (1980) as schematized visual representations of territories. However, time information is not yet handled in existing chorematic map approaches, an issue discussed in this paper. Our approach is based on spatial analysis: by interpolating the values recorded at the same time by the available sensors, we obtain a number of distributed observations over the study areas, and spatial interpolation methods are used to compute concentration fields. From these fields, by applying spatial data mining procedures on the fly, it is possible to extract important patterns as geographic rules. Those patterns are then visualized as chorems.

Keywords: geovisualization, spatial analytics, real-time, geographic data streams, sensors, chorems

Procedia PDF Downloads 382
18641 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier

Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu

Abstract:

Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. The use of artificial intelligence (AI) to improve diagnostic accuracy and decrease misdiagnosis continues to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter versus darker skin tones, have increasingly been recognized as a problem, limiting the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was to optimize convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that a training algorithm based on data augmentation, which optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing, resulting in a CNN model with modified input data that could learn subtle racial features. Optimal selection of the momentum and batch hyperparameters increased our model's accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
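
The described geometric augmentations can be sketched with torchvision transforms; the parameter ranges below are illustrative assumptions, not the paper's tuned values.

```python
import torch
from torchvision import transforms

# Geometric augmentation pipeline: rotation, translation, scaling,
# flipping, and shearing applied to lesion images before CNN training.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1),
                            scale=(0.8, 1.2), shear=10),
    transforms.RandomHorizontalFlip(p=0.5),
])

image = torch.rand(3, 224, 224)    # stand-in for a dermoscopy image tensor
augmented = augment(image)
print(augmented.shape)             # torch.Size([3, 224, 224])
```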

Keywords: bias, augmentation, melanoma, convolutional neural network

Procedia PDF Downloads 188
18640 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques

Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt

Abstract:

Purpose: This study attempts to identify the best forecasting methodologies among time series models. The time series forecasting models VAR, ARCH, and ARMA are considered for the analysis. Methodology: Benchmark parameters such as adjusted R-squared, F-statistics, Durbin-Watson, and the direction of the roots have been critically and empirically analyzed. The empirical analysis consists of time series data on the Consumer Price Index and a closing stock price. Findings: The results show that the VAR model performed better than the other models; both the reliability and the significance of the VAR model are highly appreciable. In contrast, the ARCH model showed very poor forecasting results. The results of the ARMA model appeared ambiguous: the AR roots indicated that the model is stationary, while the MA roots indicated that the model is invertible. Therefore, forecasts made on the basis of the ARMA model would remain doubtful. It is concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore provides a base for the application of the best time series forecasting model.
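
The model comparison can be reproduced in outline with statsmodels, on synthetic stand-ins for the Consumer Price Index and a closing stock price; the lag orders and series construction are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 200
cpi = np.cumsum(0.2 + 0.5 * rng.standard_normal(n))      # synthetic CPI
stock = 0.6 * cpi + rng.standard_normal(n)               # synthetic price
data = pd.DataFrame({"cpi": cpi, "stock": stock}).diff().dropna()

var_res = VAR(data).fit(2)                               # bivariate VAR(2)
arma_res = ARIMA(data["stock"], order=(1, 0, 1)).fit()   # ARMA(1,1)

print("VAR lag order:", var_res.k_ar)
print("VAR 5-step forecast:")
print(var_res.forecast(data.values[-var_res.k_ar:], steps=5))
print("ARMA(1,1) AIC:", round(arma_res.aic, 1))
```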

Keywords: forecasting, time series, auto regression, ARCH, ARMA

Procedia PDF Downloads 321
18639 A Study on Corpse Floating Time in the Shanghai Region of China

Authors: Hang Meng, Wen-Bin Liu, Bi Xiao, Kai-Jun Ma, Jian-Hui Xie, Geng Fei, Tian-Ye Zhang, Lu-Yi Xu, Dong-Chuan Zhang

Abstract:

Victims in water are often found in coastal regions, along rivers, or in regions with lakes. In China, the examination of the bodies of victims found in water is conducted by forensic doctors working in the public security bureau. Because the time of entry into the water is unclear for most victims, and surveillance images and other information are often lacking, determining when a victim entered the water is very difficult. After a corpse enters the water, it first sinks; putrefaction gases are then produced, which can make the density of the corpse less than that of water, causing it to rise again. The factor that determines the corpse floating time is therefore temperature. On the basis of the temperature data obtained in the Shanghai region of China (Shanghai has a northern subtropical marine monsoon climate, with an average annual temperature of about 17.1 ℃; the hottest month is July, with an average monthly temperature of 28.6 ℃, and the coldest is January, with an average monthly temperature of 4.8 ℃), this study selected about 100 cases with definite times of water entry and corpse floating, analyzed the cases, and obtained an empirical law for the corpse floating time. For example, in the Shanghai region, around June 15th and October 15th the corpse floating time is about 1.5 days. Bodies that enter the water in early December surface around January 1st of the following year, and bodies that enter the water in late December float in March of the next year. The results of this study can be used to roughly estimate the time of water entry of victims in Shanghai. Forensic doctors around the world can also draw on these results to infer when the corpses of victims in the water will surface.

Keywords: corpse enter water time, corpse floating time, drowning, forensic pathology, victims in the water

Procedia PDF Downloads 176
18638 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of the dielectric properties of a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedures for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique

Procedia PDF Downloads 184
18637 Excessive Screen Time of High School Students in Their Free Time Promotes Young People's Risk of Obesity

Authors: Susana Aldaba Yaben, Marga Echauri Ozcoidi, Rosario Osinaga Cenoz

Abstract:

It was decided to carry out a diagnostic study with students of Berriozar High School aged between 12 and 15 years (both included), covering their lifestyles in relation to eating habits, BMI (Body Mass Index), physical activity, drugs, interpersonal relationships, and screen time. The aim of this survey is to identify the needs of this population and, depending on the results, to plan socio-educational activities. Eating habits, a lack of physical activity, and excessive screen time contribute to the finding that 26.75% of these young people are obese or overweight. First of all, many of them have a diet rich in saturated fats and sugars. Secondly, most of them do not take physical exercise daily. Finally, their screen time is higher than the recommendation (up to 2 hours a day).

Keywords: lifestyle, diet, BMI, physical activity, screen time, education, youth

Procedia PDF Downloads 555
18636 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions, and creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions superior to those of current algorithms. AQO is a particular form of QC that could offer more near-term benefits than other forms, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all the lowest points of an energy landscape, where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) whether the problem can be solved using AQO, 2) whether it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but it does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
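
The 1-hot encoding idea can be made concrete with a small sketch: an integer variable x in {0..3} becomes four bits with a quadratic penalty enforcing exactly one set bit, giving a QUBO of the kind embeddable on annealing hardware. The objective and penalty weight are illustrative, not the paper's formulation.

```python
import itertools
import numpy as np

values = np.array([0, 1, 2, 3])
f = (values - 2.2) ** 2            # objective f(x) to minimise
P = 10.0                           # penalty weight for the one-hot constraint

# QUBO: E(b) = sum_i f_i*b_i + P*(sum_i b_i - 1)^2, constant term dropped.
n = len(values)
Q = np.diag(f) + P * (np.ones((n, n)) - 2 * np.eye(n))

def energy(bits):
    b = np.array(bits)
    return b @ Q @ b + P           # re-add the penalty's constant term

# Brute-force the 2^4 bitstrings; an annealer would sample this landscape.
best = min(itertools.product([0, 1], repeat=n), key=energy)
print("best bitstring:", best, "-> x =", values[np.argmax(best)])  # x = 2
```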

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 503
18635 Rice Blessing Ceremony of Thailand and Vietnam: The Relation of Southeast Asia

Authors: Patthida Bunchavalit, Saharot Kittimahacharoen

Abstract:

The objective of this article is to compare the rice blessing ceremony of Thailand with that of Vietnam. Both countries are located in Southeast Asia, where agriculture is the main occupation. The study finds that the rice blessing ceremonies of Thai and Vietnamese societies have both differences and similarities. The ceremony is led by the person holding the highest position in the country: in Thailand, the king or a royal family member; in Vietnam, the president. In Thailand, the ceremony began in the Ayutthaya period, deriving from Buddhist and Brahman ideology, and is organized annually at the beginning of the rainy season. In Vietnam, it is organized annually at the beginning of spring; it first occurred in the Tien Le monarchy, in the Thien Phuc era, deriving from Chinese ideology. The differences lie in the ideas, beliefs, objectives, and details of the ceremony. In Thailand, the purpose is to boost farmers' morale and to predict the fertility of crops each year; additionally, a prediction is made using royal oxen. Meanwhile, in Vietnam the purpose is to worship the god of weather for seasonal rain and a productive harvest. It can therefore be presumed that the rice blessing ceremonies of Thailand and Vietnam have similarities despite their different origins, resting on the same basis of belief.

Keywords: agriculture, ceremony, culture, Thailand, Vietnam

Procedia PDF Downloads 163
18634 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks

Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi

Abstract:

Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area while, at the same time, maximizing the network lifetime. This is due to limitations in the sensing angle and battery power of the directional sensors. The problem is further complicated by the possibility that targets may have different coverage requirements. In the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets in such a way that each cover set can satisfy the coverage requirements of all the targets. Several experiments were conducted to evaluate the performance of the proposed algorithm, and the results demonstrate that it is able to contribute to solving the problem.

Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata

Procedia PDF Downloads 390
18633 Development of a Practical Screening Measure for the Prediction of Low Birth Weight and Neonatal Mortality in Upper Egypt

Authors: Prof. Ammal Mokhtar Metwally, Samia M. Sami, Nihad A. Ibrahim, Fatma A. Shaaban, Iman I. Salama

Abstract:

Objectives: Reducing neonatal mortality by 2030 is still a challenging goal in developing countries. Low birth weight (LBW) is a significant contributor to this, especially where routinely weighing newborns is not possible. The present study aimed to determine simple, easy, reliable anthropometric measure(s) that can predict LBW and neonatal mortality. Methods: In a prospective cohort study, 570 babies born in districts of El Menia governorate, Egypt (where most deliveries occur at home) were examined at birth. Newborn weight, length, and head, chest, mid-arm, and thigh circumferences were measured. The examined neonates were followed up during their first four weeks of life to record any mortality. The most predictive anthropometric measures were determined using the statistical package SPSS, and multiple logistic regression analysis was performed. Results: Head and chest circumferences with cut-off points < 33 cm and ≤ 31.5 cm, respectively, were the significant predictors of LBW. They carried the best combination of the highest sensitivity (89.8% and 86.4%) and the lowest false-negative predictive value (1.4% and 1.7%). Chest circumference with a cut-off point ≤ 31.5 cm was the significant predictor of neonatal mortality, with 83.3% sensitivity and a 0.43% false-negative predictive value. Conclusion: Chest circumference with a cut-off point ≤ 31.5 cm is recommended as a single, simple anthropometric measurement for the prediction of both LBW and neonatal mortality. This measure could substitute for weighing newborns in communities where scales are not routinely available.

Keywords: low birth weight, neonatal mortality, anthropometric measures, practical screening

Procedia PDF Downloads 75
18632 Forecasting the Sea Level Change in Strait of Hormuz

Authors: Hamid Goharnejad, Amir Hossein Eghbali

Abstract:

Recent investigations have demonstrated global sea level rise due to climate change impacts. This study investigates the effects of climate change on water level rise in the Strait of Hormuz. The probable changes in sea level should be investigated in order to develop adaptation strategies. The climatic output data of a General Circulation Model (GCM) named CGCM3 under the climate change scenarios A1B and A2 were used. Among the different variables simulated by this model, those with the highest correlation with sea level changes in the study region and the least redundancy among themselves were selected for sea level rise prediction using stepwise regression. A Discrete Wavelet artificial Neural Network (DWNN) model was developed to explore the relationship between climatic variables and sea level changes. In this model, wavelets were used to disaggregate the time series of input and output data into different components, and an ANN was then used to relate the disaggregated components of the predictors and predictands to each other. The results for the Shahid Rajae Station show a sea level rise of 64 to 75 cm under scenario A1B and of 90 to 105 cm under scenario A2. Furthermore, the results show a significant increase in sea level in the study region under climate change impacts, which should be incorporated into coastal area management.
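
A hedged sketch of the DWNN idea using the pywt package: decompose the series with a discrete wavelet transform and let an ANN relate the components. The wavelet choice, decomposition depth, and synthetic series are assumptions, not the study's CGCM3 predictors.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 512
predictor = np.sin(np.linspace(0, 20, n)) + 0.3 * rng.standard_normal(n)
sea_level = 0.8 * predictor + 0.1 * rng.standard_normal(n)

def dwt_features(x):
    """Level-3 DWT of a window; concatenated coefficients as features."""
    return np.concatenate(pywt.wavedec(x, "db4", level=3))

# Build windows: features from a predictor window, target = next value.
win = 64
X = np.array([dwt_features(predictor[i:i + win]) for i in range(n - win - 1)])
y = sea_level[win + 1:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])
print("held-out R^2:", round(model.score(X[-50:], y[-50:]), 3))
```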

Keywords: climate change scenarios, sea-level rise, strait of Hormuz, forecasting

Procedia PDF Downloads 246
18631 Hybrid Intelligent Optimization Methods for Optimal Design of Horizontal-Axis Wind Turbine Blades

Authors: E. Tandis, E. Assareh

Abstract:

The design of the optimal shape of MW wind turbine blades has been addressed in a number of cases through evolutionary algorithms associated with mathematical modeling (Blade Element Momentum Theory). Among optimization methods, evolutionary algorithms enjoy many advantages, particularly stability. However, they usually need a large number of function evaluations, and since there are a large number of local extremes, the optimization method has to find the global extreme accurately. The present paper introduces a new population-based hybrid algorithm called the Genetic-Based Bees Algorithm (GBBA), meant to design the optimal shape of MW wind turbine blades. The method employs crossover and neighborhood-searching operators taken from the Genetic Algorithm (GA) and the Bees Algorithm (BA), respectively, to provide good performance in accuracy and convergence speed. Twenty-one different blade designs were considered based on chord length, twist angle, and tip speed ratio using GA results; they were compared with BA and GBBA optimum design results targeting the power coefficient and solidity. The results suggest that the final shape obtained by the proposed hybrid algorithm performs better than that of either BA or GA. Furthermore, accuracy and convergence speed increase when the GBBA is employed.

Keywords: blade design, optimization, genetic algorithm, bees algorithm, genetic-based bees algorithm, large wind turbine

Procedia PDF Downloads 301
18630 Designing Electronic Kanban in the Tailboom Assembly Line at XYZ Corp to Reduce Lead Time

Authors: Nadhifah A. Nugraha, Dida D. Damayanti, Widia Juliani

Abstract:

Aircraft manufacturing is growing along with increasing demand from consumers. The helicopter tail, called the Tailboom, is a product of the helicopter division at XYZ Corp, where the Tailboom assembly line is a pull system. Based on observations of existing conditions at XYZ Corp, production is still unable to meet consumer demand: the actual lead time exceeds the plan agreed upon with the consumers. In the assembly process, each work station experiences a lack of the parts and components needed for assembly. This happens because of delays in getting the required part information, and because there is no warning about the availability of needed parts, some parts are unavailable in the assembly warehouse. The lack of parts and components from the previous work station causes the assembly process to stop, and the assembly line also stops at the next station. As a result, production ran late and off schedule. Resolving these problems requires controlling the assembly line so that all components and subassemblies arrive in the right amounts and at the right time. This study applies one of the Just-in-Time tools, Kanban, and adds automation so that the communication line becomes an efficient and effective electronic Kanban. The proposed control of the Tailboom assembly line results in smooth assembly without waiting, reduced lead time, and a production time that meets the schedule agreed with the consumers.

Keywords: kanban, e-Kanban, lead time, pull system

Procedia PDF Downloads 90
18629 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout

Authors: Diamant Irene, Dar Tamar

Abstract:

Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. Faced with numerous tasks, deadlines, high output demands, and the management of duties encompassing work-home conflicts, many individuals experience 'time pressure'. Time pressure is characterized as the perception of a lack of available time relative to the workload. It can be a result of local objective constraints, but it can also be a chronic attribute of coping with life. As such, time pressure is associated in the literature with the general experience of stress and can therefore be a direct contributory factor to burnout. The present study examines the relation of chronic time pressure (a feeling of time shortage and of being rushed) with another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition capturing the extent to which people subjectively remember the past, live in the present, and/or anticipate the future. Based on Hobfoll's Conservation of Resources theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo's typology of five dimensions (Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future), would be related to different magnitudes of chronic time pressure and burnout. We expected that individuals with 'Past Negative' or 'Present Fatalistic' time perspectives would experience more burnout, with chronic time pressure as a moderator variable. Conversely, individuals with a 'Present Hedonistic' perspective, showing little concern for the future consequences of actions, would experience less chronic time pressure and less burnout. Another angle of temporal experience examined in this study is the gap between the actual distribution of time (as in a typical day) and the desired distribution of time (how one would optimally distribute time during a day). It was hypothesized that there would be a positive correlation between this gap and both chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to developing practical methods of burnout prevention.

Keywords: conservation of resources, burnout, time pressure, time perspective

Procedia PDF Downloads 152
18628 Keyloggers Prevention with Time-Sensitive Obfuscation

Authors: Chien-Wei Hung, Fu-Hau Hsu, Chuan-Sheng Wang, Chia-Hao Lee

Abstract:

Nowadays, the abuse of keyloggers is one of the most widespread approaches to stealing sensitive information. In this paper, we propose an On-Screen Prompts Approach to Keyloggers (OSPAK), together with its analysis, intended for installation on public computers. OSPAK utilizes a canvas to cue users as to when their keystrokes are going to be logged or ignored by OSPAK. This approach can protect computers against the recording of sensitive inputs by obfuscating keyloggers with letters inserted among the user's keystrokes. It adds a canvas below each password field in a webpage and consists of three parts: two background areas, a hit area, and a moving foreground object. Letters typed in different valid time intervals are combined in the order of those intervals, and valid time intervals are interleaved with invalid ones. OSPAK uses animation to visualize the valid and invalid time intervals, and it can be integrated into a webpage as a browser extension. We have tested it against a series of known keyloggers and also performed a study with 95 users to evaluate how easily the tool is used. Experimental results from the volunteers show that OSPAK is a simple approach.

Keywords: authentication, computer security, keylogger, privacy, information leakage

Procedia PDF Downloads 99