Search results for: travel time estimation
18226 Hansen Solubility Parameter from Surface Measurements
Authors: Neveen AlQasas, Daniel Johnson
Abstract:
Membranes for water treatment are an established technology that attracts great attention due to their simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Biofouling is a phenomenon that occurs at the water-membrane interface; it is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the membrane pores and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is therefore key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and of a specific foulant facilitates estimation of the HSP distance between the two, and therefore of the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents are taken into account in the analysis of the contact angle values, prediction of the HSP from contact angle measurements is possible. This was carried out via training of a neural network model. The trained neural network model has three inputs: the contact angle value, the surface tension, and the viscosity of the solvent used. The model is able to predict the HSP distance between the solvent and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements
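As a rough illustration of the three-input model described in this abstract, the following sketch trains a small feed-forward regression network on synthetic data; the layer sizes, the placeholder target function, and the value ranges are assumptions for demonstration, not the authors' implementation.

```python
# Sketch: a small feed-forward network mapping (contact angle, surface tension,
# viscosity) to an HSP distance, analogous to the three-input model in the
# abstract. All data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
contact_angle = rng.uniform(10, 110, n)      # degrees
surface_tension = rng.uniform(20, 73, n)     # mN/m
viscosity = rng.uniform(0.3, 60, n)          # mPa·s
X = np.column_stack([contact_angle, surface_tension, viscosity])
# Placeholder target: a fabricated smooth function standing in for the
# measured HSP distance between solvent and polymer film.
y = 0.15 * contact_angle + 0.05 * surface_tension - 0.02 * viscosity

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("Predicted HSP distance:", model.predict([[65.0, 45.0, 1.2]])[0])
```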
Procedia PDF Downloads 94
18225 Digital Games as a Means of Cultural Communication and Heritage Tourism: A Study on Black Myth-Wukong
Authors: Kung Wong Lau
Abstract:
On August 20, 2024, the global launch of the Wukong game generated significant enthusiasm within the gaming community. This game provides gamers with an immersive experience and some digital twins (the location) that effectively bridge cultural heritage and contemporary gaming, thereby facilitating heritage tourism to some extent. Travel websites highlight locations featured in the Wukong game, encouraging visitors to explore these sites. However, this area remains underexplored in cultural and communication studies, both locally and internationally. This pilot study aims to explore the potential of in-game cultural communication in Wukong for promoting Chinese culture and heritage tourism. An exploratory research methodology was employed, utilizing a focus group of non-Chinese active gamers on an online discussion platform. The findings suggest that the use of digital twins as a means to facilitate cultural communication and heritage tourism for non-Chinese gamers shows promise. While this pilot study cannot generalize its findings due to the limited number of participants, the insights gained could inform further discussions on the influential factors of cultural communication through gaming.Keywords: digital game, game culture, heritage tourism, cultural communication, non-Chinese gamers
Procedia PDF Downloads 19
18224 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the 2-parameter Weibull and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution to model exceedances over a certain threshold u, and references that set the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. A comparison of the Generalised Pareto, the 2-parameter Weibull, and the Exponential distribution is presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
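The POT workflow discussed in this abstract can be illustrated as follows; the synthetic wave series, the threshold quantile, and the log-likelihood comparison are illustrative assumptions, not the study's buoy data or selection criteria.

```python
# Sketch: Peak-Over-Threshold fitting of significant wave height exceedances
# with the three candidate distributions named above. The series is synthetic;
# with real buoy data, replace `hs` by the measured H_s record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hs = stats.weibull_min.rvs(1.6, scale=2.0, size=5000, random_state=rng)  # fake H_s (m)

u = np.quantile(hs, 0.95)          # threshold
exceed = hs[hs > u] - u            # exceedances over u

gpd = stats.genpareto.fit(exceed, floc=0)      # Generalised Pareto
wbl = stats.weibull_min.fit(exceed, floc=0)    # 2-parameter Weibull
exp = stats.expon.fit(exceed, floc=0)          # Exponential

for name, dist, params in [("GPD", stats.genpareto, gpd),
                           ("Weibull", stats.weibull_min, wbl),
                           ("Exponential", stats.expon, exp)]:
    ll = np.sum(dist.logpdf(exceed, *params))
    print(f"{name:12s} params={np.round(params, 3)}  log-likelihood={ll:.1f}")
```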
Procedia PDF Downloads 272
18223 Outlier Detection in Stock Market Data using Tukey Method and Wavelet Transform
Authors: Sadam Alwadi
Abstract:
Outlier values are a problem that frequently arises during data observation or recording, which makes data imputation an essential task. This work uses the methods described in prior work to detect outlier values in a collection of stock market data. To implement the detection and derive solutions that may be helpful for investors, real closing price data were obtained from the Amman Stock Exchange (ASE). The Tukey and Maximum Overlapping Discrete Wavelet Transform (MODWT) methods are used to detect and impute the outlier values.
Keywords: outlier values, imputation, stock market data, detecting, estimation
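A minimal sketch of the Tukey-fence step mentioned above, applied to simulated prices rather than ASE data; applying the fences to daily price changes and imputing with a neighbour median are assumptions made here for illustration.

```python
# Sketch: Tukey's fences for outlier detection in a price series, followed by a
# simple imputation step. Prices are simulated placeholders, not ASE data.
import numpy as np

rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))
prices[[50, 300]] += [15.0, -20.0]            # inject two artificial outliers

# Apply the fences to daily changes so the overall price trend does not mask spikes.
returns = np.diff(prices)
q1, q3 = np.percentile(returns, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
flagged = np.flatnonzero((returns < lower) | (returns > upper)) + 1  # price indices
print("flagged indices:", flagged)

# Simple imputation: replace each flagged price by the median of its neighbours.
imputed = prices.copy()
for i in flagged:
    neighbours = np.concatenate([prices[max(0, i - 3):i], prices[i + 1:i + 4]])
    imputed[i] = np.median(neighbours)
```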
Procedia PDF Downloads 81
18222 Different Methods Anthocyanins Extracted from Saffron
Authors: Hashem Barati, Afshin Farahbakhsh
Abstract:
The flowers of saffron contain anthocyanins. Generally, extraction of anthocyanins takes place at low temperatures (below 30 °C), preferably under vacuum (to minimize degradation) and in an acidic environment. To extract the anthocyanins, the dried petals were added to 30 ml of acidic ethanol (pH = 2). The amount of petals, extraction time, temperature, and ethanol percentage were the variables selected. Total anthocyanin content was a function of both ethanol percentage and extraction time. To prepare SW with a pH of 3.5, different concentrations of 100, 400, 700, 1,000, and 2,000 ppm of sodium metabisulfite were added to aqueous sodium citrate. At the selected concentration, different extraction times of 20, 40, 60, 120, and 180 min were tested to determine the optimum extraction time. When the extraction time was extended from 20 to 60 min, the total anthocyanins recovered by the sulfur method changed from 650 to 710 mg/100 g. In the EW method, Cellubrix and Pectinex enzymes were added separately to the buffer solution at concentrations of 1%, 2.5%, 5%, 7%, 10%, and 12.5% and held for a 2-hour reaction time at an ambient temperature of 40 °C. There was a considerable and significant difference in the trends of the anthocyanin (Acys) content of tepals extracted by Pectinex enzymes at 5% concentration and by the AE solution.
Keywords: saffron, anthocyanins, acidic environment, acidic ethanol, Pectinex enzymes, Cellubrix enzymes, sodium metabisulfite
Procedia PDF Downloads 513
18221 Markov-Chain-Based Optimal Filtering and Smoothing
Authors: Garry A. Einicke, Langford B. White
Abstract:
This paper describes an optimum filter and smoother for recovering a Markov process message from noisy measurements. The developments follow from an equivalence between a state space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained, namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set are correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.Keywords: optimal filtering, smoothing, Markov chains
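The filter described above can be illustrated with a toy discrete hidden Markov chain; the two-state transition matrix, Gaussian emissions, and noise level below are assumptions for demonstration, not the paper's model.

```python
# Sketch: a discrete hidden-Markov forward filter using a transition probability
# matrix and an approximate probability vector, in the spirit of the abstract.
import numpy as np

A = np.array([[0.95, 0.05],        # transition probability matrix
              [0.10, 0.90]])
means = np.array([0.0, 2.0])       # emission means of the two hidden states
sigma = 0.8                        # measurement noise level

rng = np.random.default_rng(3)
T, state = 200, 0
obs = []
for _ in range(T):                 # simulate the chain and its noisy output
    state = rng.choice(2, p=A[state])
    obs.append(rng.normal(means[state], sigma))

# Forward filtering: propagate and update the state probability vector.
alpha = np.array([0.5, 0.5])
estimates = []
for y in obs:
    alpha = alpha @ A                                  # predict
    lik = np.exp(-0.5 * ((y - means) / sigma) ** 2)    # measurement likelihood
    alpha = alpha * lik
    alpha /= alpha.sum()                               # normalise
    estimates.append(means @ alpha)                    # MMSE-style state estimate

print("filtered estimate at final time:", estimates[-1])
```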
Procedia PDF Downloads 317
18220 Counting People Utilizing Space-Time Imagery
Authors: Ahmed Elmarhomy, K. Terada
Abstract:
An automated method for counting passersby using virtual vertical measurement lines is proposed. The space-time image represents the human regions, which are processed using segmentation. Different color spaces were used to perform template matching, and a proper template matching procedure was achieved to determine the direction and speed of passing people. Distinguishing one passerby from two was investigated using the correlation between passerby speed and the human-pixel area. Finally, the effectiveness of the presented method was experimentally verified.
Keywords: counting people, measurement line, space-time image, segmentation, template matching
Procedia PDF Downloads 452
18219 A Nonlinear Stochastic Differential Equation Model for Financial Bubbles and Crashes with Finite-Time Singularities
Authors: Haowen Xi
Abstract:
We propose and solve exactly a class of nonlinear generalizations of the Black-Scholes stochastic differential equation describing price bubble and crash dynamics. As a result of nonlinear positive feedback, the faster-than-exponential positive price growth (bubble formation) and negative price growth (crash formation) are found to be power-law finite-time singularities, in which bubble and crash price formation ends at a finite critical time tc. While most of the literature on the market bubble and crash process focuses on the nonlinear positive feedback mechanism, very few studies consider the noise level in the same process. The present work adds to the market bubble and crash literature by studying the influence of external noise sources on the critical time tc of bubble and crash formation. Two main results will be discussed: (1) the analytical expression of the expected value of the critical time
Keywords: bubble, crash, finite-time-singular, numerical simulation, price dynamics, stochastic differential equations
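One way to visualise the finite-time singularity discussed above is to simulate an illustrative nonlinear generalisation of Black-Scholes with the Euler-Maruyama method; the specific drift/diffusion form dp = mu·p^m dt + sigma·p^m dW and the parameter values are assumptions for demonstration, not the paper's exact model.

```python
# Sketch: Euler–Maruyama simulation of an illustrative nonlinear generalisation
# of Black–Scholes, dp = mu * p**m dt + sigma * p**m dW with m > 1, whose
# noise-free solution blows up at a finite critical time t_c.
import numpy as np

mu, sigma, m = 1.0, 0.3, 2.0
p0, dt, t_max = 1.0, 1e-4, 2.0
rng = np.random.default_rng(4)

t, p = 0.0, p0
while t < t_max and p < 1e6:                 # stop once the path has exploded
    dW = rng.normal(0.0, np.sqrt(dt))
    p += mu * p**m * dt + sigma * p**m * dW
    t += dt

t_c_deterministic = 1.0 / (mu * (m - 1) * p0**(m - 1))   # noise-free blow-up time
print(f"simulated blow-up near t = {t:.3f}, deterministic t_c = {t_c_deterministic:.3f}")
```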
Procedia PDF Downloads 132
18218 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing
Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea
Abstract:
Numerical methods like binomial and trinomial trees and finite difference methods can be used to price a wide range of options contracts for which there are no known analytical solutions. American options are the most famous of that kind of options. Besides numerical methods, American options can be valued with the approximation formulas, like Bjerksund-Stensland formulas from 1993 and 2002. When the value of American option is approximated by Bjerksund-Stensland formulas, the computer time spent to carry out that calculation is very short. The computer time spent using numerical methods can vary from less than one second to several minutes or even hours. However to be able to conduct a comparative analysis of numerical methods and Bjerksund-Stensland formulas, we will limit computer calculation time of numerical method to less than one second. Therefore, we ask the question: Which method will be most accurate at nearly the same computer calculation time?Keywords: Bjerksund and Stensland approximations, computational analysis, finance, options pricing, numerical methods
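For context, here is a minimal Cox-Ross-Rubinstein binomial tree for an American put, the kind of numerical benchmark compared against the Bjerksund-Stensland formulas; the strike, volatility, and step count are illustrative values, not the study's test cases.

```python
# Sketch: Cox–Ross–Rubinstein binomial tree pricing of an American put with an
# early-exercise check at every node (backward induction).
import numpy as np

def american_put_crr(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)

    # terminal asset prices and payoffs
    j = np.arange(steps + 1)
    S = S0 * u**j * d**(steps - j)
    V = np.maximum(K - S, 0.0)

    # backward induction with early-exercise comparison
    for n in range(steps - 1, -1, -1):
        j = np.arange(n + 1)
        S = S0 * u**j * d**(n - j)
        continuation = disc * (q * V[1:n + 2] + (1 - q) * V[0:n + 1])
        V = np.maximum(K - S, continuation)
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=2000))
```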
Procedia PDF Downloads 456
18217 Localization of Geospatial Events and Hoax Prediction in the UFO Database
Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi
Abstract:
Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are a hoax and among those that seem legitimate, our task is not to establish that these events confirm that they indeed are events related to flying objects from aliens in outer space. Rather, we intend to identify if the report was a hoax as was identified by the UFO database team with their existing curation criterion. However, the database provides a wealth of information that can be exploited to provide various analyses and insights such as social reporting, identifying real-time spatial events and much more. We perform analysis to localize these time-series geospatial events and correlate with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and also are time-based. We look at cluster density and data visualization to search the space of various cluster realizations to decide best probable clusters that provide us information about the proximity of such activity. A random forest classifier is also presented that is used to identify true events and hoax events, using the best possible features available such as region, week, time-period and duration. Lastly, we show the performance of the scheme on various days and correlate with real-time events where one of the UFO reports strongly correlates to a missile test conducted in the United States.Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events
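A small sketch of the random-forest step named above, using the stated features (region, week, time-period, duration) on fabricated records; the encodings and labels are placeholders, not NUFORC data.

```python
# Sketch: random-forest classification of reports into hoax vs. likely
# legitimate, using the feature set described in the abstract. All records and
# labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([
    rng.integers(0, 50, n),        # region (integer-encoded)
    rng.integers(1, 53, n),        # week of year
    rng.integers(0, 4, n),         # time-period of day (integer-encoded)
    rng.exponential(10, n),        # duration in minutes
])
y = rng.integers(0, 2, n)          # 1 = hoax, 0 = likely legitimate (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```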
Procedia PDF Downloads 377
18216 Magnetohydrodynamic Couette Flow of Fractional Burger’s Fluid in an Annulus
Authors: Sani Isa, Ali Musa
Abstract:
Burgers’ fluid with a fractional derivatives model in an annulus was analyzed. Combining the basic equations appropriately with the fractional Burgers’ fluid model allows us to determine the velocity field, temperature, and shear stress. The governing partial differential equation was solved using the combined Laplace transform method and Riemann sum approximation to give the velocity field, temperature, and shear stress of the fluid flow. The influence of various parameters, such as the fractional parameters, relaxation time, and retardation time, is drawn. The results obtained are simulated using Mathcad software and presented graphically. From the graphical results, we observed that the relaxation and retardation times help the flow pattern; on the other hand, the other material constants resist the fluid flow, while the effects of the fractional parameters on the fluid flow are opposite to each other.
Keywords: Burger’s fluid, Laplace transform, fractional derivatives, annulus
Procedia PDF Downloads 24
18215 Digital Forgery Detection by Signal Noise Inconsistency
Authors: Bo Liu, Chi-Man Pun
Abstract:
A novel technique for digital forgery detection based on signal noise inconsistency is proposed in this paper. A forged area spliced from another picture contains features that may be inconsistent with the rest of the image, and the noise pattern and noise level are possible factors that reveal such inconsistency. To detect these noise discrepancies, the test picture is initially segmented into small pieces, and the noise pattern and level of each segment are then estimated using various filters. The noise features constructed in this step are utilized in an energy-based graph cut to expose the forged area in the final step. Experimental results show that our method provides a good illustration of regions with noise inconsistency in various scenarios.
Keywords: forgery detection, splicing forgery, noise estimation, noise
Procedia PDF Downloads 461
18214 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures
Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen
Abstract:
Superhydrophobic surfaces are abundant in nature. Several surfaces such as wings of butterfly, legs of water strider, feet of gecko and the lotus leaf show extreme water repellence behaviour. Self-cleaning, stain-free fabrics, spill-resistant protective wears, drag reduction in micro-fluidic devices etc. are few applications of superhydrophobic surfaces. In order to design robust superhydrophobic surface, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. Water contact angle as high as 159±10 and contact angle hysteresis less than 80 was observed. Drop impact studies on superhydrophobic paper surface were carried out by impinging water droplet and capturing its dynamics through high speed imaging. During the drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop and the parameters such as contact time, normalized spread diameter were obtained. In contrast to earlier literature reports, we observed contact time to be dependent on impact velocity on superhydrophobic surface. Total contact time was split into two components as spread time and recoil time. The recoil time was found to be dependent on the impact velocity while the spread time on the surface did not show much variation with the impact velocity. Further, normalized spreading parameter was found to increase with increase in impact velocity.Keywords: contact angle, contact angle hysteresis, contact time, superhydrophobic
Procedia PDF Downloads 426
18213 A Comparison of Smoothing Spline Method and Penalized Spline Regression Method Based on Nonparametric Regression Model
Authors: Autcha Araveeporn
Abstract:
This paper presents a study about a nonparametric regression model consisting of a smoothing spline method and a penalized spline regression method. We also compare the techniques used for estimation and prediction of nonparametric regression model. We tried both methods with crude oil prices in dollars per barrel and the Stock Exchange of Thailand (SET) index. According to the results, it is concluded that smoothing spline method performs better than that of penalized spline regression method.Keywords: nonparametric regression model, penalized spline regression method, smoothing spline method, Stock Exchange of Thailand (SET)
Procedia PDF Downloads 440
18212 Impact of Belongingness, Relational Communication, Religiosity and Screen Time of College Student Levels of Anxiety
Authors: Cherri Kelly Seese, Renee Bourdeaux, Sarah Drivdahl
Abstract:
Emergent adults in the United States are currently experiencing high levels of anxiety. It is imperative to uncover insulating factors which mitigate the impact of anxiety. This study aims to explore how constructs such as belongingness, relational communication, screen time and religiosity impact anxiety levels of emerging adults. Approximately 250 college students from a small, private university on the West Coast were given an online assessment that included: the General Belongingness Scale, Relational Communication Scale, Duke University Religion Index (DUREL), a survey of screen time, and the Beck Anxiety Inventory. A MANOVA statistical test was conducted by assessing the effects of multiple dependent variables (scores on GBS, RCS, self-reported screen time and DUREL) on the four different levels of anxiety as measured on the BAI (minimal = 1, mild =2, moderate = 3, or severe = 4). Results indicated a significant relationship between one’s sense of belonging and one’s reported level of anxiety. These findings have implications for systems, like universities, churches, and corporations that want to improve young adults’ level of anxiety.Keywords: anxiety, belongingness, relational communication, religiosity, screen time
Procedia PDF Downloads 174
18211 Characteristics of Elastic Tracked-Crawler Based on Worm-Rack Mechanism
Authors: Jun-ya Nagase
Abstract:
There are many pipes, such as water pipes and gas pipes, in chemical plants and houses, and regular inspection of these pipes makes it possible to prevent accidents. However, many pipes are very narrow, and it is difficult for people to inspect them directly. Therefore, the development of a robot that can move inside narrow pipes is necessary. Wheeled robots, snake-like robots, and multi-legged robots are all described in the relevant literature as pipe inspection robots currently under study. Among them, the tracked crawler robot can traverse uneven ground flexibly with a crawler belt attached firmly to the ground surface. Although conventional crawler robots have high efficiency and/or high ground-covering ability, they require a comparatively large space to move. In this study, a cylindrical crawler robot based on a worm-rack mechanism, which does not need a large space to move and which has high ground-covering ability, is proposed. Experiments have demonstrated smooth operation and forward movement of the robot when voltage is applied to the motor. In addition, performance tests show that it can propel itself in confined spaces. This paper reports the structure, drive mechanism, prototype, and experimental evaluation.
Keywords: tracked-crawler, pipe inspection robot, worm-rack mechanism, amoeba locomotion
Procedia PDF Downloads 431
18210 Performance Improvement of Cooperative Scheme in Wireless OFDM Systems
Authors: Ki-Ro Kim, Seung-Jun Yu, Hyoung-Kyu Song
Abstract:
Recently, the wireless communication systems are required to have high quality and provide high bit rate data services. Researchers have studied various multiple antenna scheme to meet the demand. In practical application, it is difficult to deploy multiple antennas for limited size and cost. Cooperative diversity techniques are proposed to overcome the limitations. Cooperative communications have been widely investigated to improve performance of wireless communication. Among diversity schemes, space-time block code has been widely studied for cooperative communication systems. In this paper, we propose a new cooperative scheme using pre-coding and space-time block code. The proposed cooperative scheme provides improved error performance than a conventional cooperative scheme using space-time block coding scheme.Keywords: cooperative communication, space-time block coding, pre-coding
Procedia PDF Downloads 359
18209 Management in the Transport of Pigs to Slaughterhouses in the Valle De Aburrá, Antioquia
Authors: Natalia Uribe Corrales, María Fernanda Benavides Erazo, Santiago Henao Villegas
Abstract:
Introduction: Transport is a crucial link in the porcine chain because it is considered a stressful event in the animal, due to it is a new environment, which generates new interactions, together with factors such as speed, noise, temperature changes, vibrations, deprivation of food and water. Therefore, inadequate handling at this stage can lead to bruises, musculoskeletal injuries, fatigue, and mortality, resulting in canal seizures and economic losses. Objective: To characterize the transport and driving practices for the mobilization of standing pigs directed to slaughter plants in the Valle de Aburrá, Antioquia, Colombia in 2017. Methods: A descriptive cross-sectional study was carried out with the transporters arriving at the slaughterhouses approved by National Institute for Food and Medicine Surveillance (INVIMA) during 2017 in the Valle de Aburrá. The process of obtaining the samples was made from probabilistic sampling. Variables such as journey time, mechanical technical certificate, training in animal welfare, driving speed, material, and condition of floors and separators, supervision of animals during the trip, load density and mortality were analyzed. It was approved by the ethics committee for the use and care of animals CICUA of CES University, Act number 14 of 2015. Results: 190 trucks were analyzed, finding that 12.4% did not have updated mechanical technical certificate; the transporters experience in pig’s transportation was an average of 9.4 years (d.e.7.5). The 85.8% reported not having received training in animal welfare. Other results were that the average speed was 63.04km/hr (d.e 13.46) and the 62% had floors in good condition; nevertheless, the 48% had bad conditions on separators. On the other hand, the 88% did not supervise their animals during the journey, although the 62.2% had an adequate loading density, in relation to the average mortality was 0.2 deaths/travel (d.e. 0.5). Conclusions: Trainers should be encouraged on issues such as proper maintenance of vehicles, animal welfare, obligatory review of animals during mobilization and speed of driving, as these poorly managed indicators generate stress in animals, increasing generation of injuries as well as possible accidents; also, it is necessary to continue to improve aspects such as aluminum floors and separators that favor easy cleaning and maintenance, as well as the appropriate handling in the density of load that generates animal welfare.Keywords: animal welfare, driving practices, pigs, truck infrastructure
Procedia PDF Downloads 208
18208 A Trapezoidal-Like Integrator for the Numerical Solution of One-Dimensional Time Dependent Schrödinger Equation
Authors: Johnson Oladele Fatokun, I. P. Akpan
Abstract:
In this paper, the one-dimensional time-dependent Schrödinger equation is discretized by the method of lines, using a second-order finite difference approximation to replace the second-order spatial derivative. The resulting system of stiff ordinary differential equations (ODEs) in time is solved numerically by an L-stable trapezoidal-like integrator. Results show an accuracy of relative maximum error of order 10^-4 in the interval of consideration. The performance of the method compared to an existing scheme is considered favorable.
Keywords: Schrödinger’s equation, partial differential equations, method of lines (MOL), stiff ODE, trapezoidal-like integrator
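A compact sketch of the discretisation described above, using the plain trapezoidal rule (the A-stable prototype of the integrator family discussed) on the free-particle Schrödinger equation; the grid sizes, units, and Gaussian initial state are assumptions for illustration.

```python
# Sketch: method-of-lines discretisation of i u_t = -u_xx (hbar = 2m = 1, free
# particle) with second-order central differences in space, advanced in time by
# the trapezoidal rule, which preserves the wave-function norm.
import numpy as np

N, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = 0.005

# finite-difference Laplacian with homogeneous Dirichlet ends
lap = (np.diag(np.full(N - 1, 1.0), -1)
       - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -lap                                        # Hamiltonian, so u_t = -i H u

u = np.exp(-(x + 5.0) ** 2 + 1j * 2.0 * x)      # Gaussian wave packet, momentum 2
u = u / np.sqrt(np.sum(np.abs(u) ** 2) * dx)

# trapezoidal step: (I + i dt/2 H) u_{n+1} = (I - i dt/2 H) u_n
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
step = np.linalg.solve(A, B)                    # one-step propagator, built once
for _ in range(200):
    u = step @ u

print("norm after 200 steps:", np.sum(np.abs(u) ** 2) * dx)   # stays close to 1
```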
Procedia PDF Downloads 418
18207 Effect of Aging Time and Mass Concentration on the Rheological Behavior of Vase of Dam
Authors: Hammadi Larbi
Abstract:
Water erosion, the main cause of the siltation of a dam, is a natural phenomenon governed by physical factors such as rainfall aggressiveness, climate change, topography, lithology, and vegetation cover. Currently, the vase (sediment) of certain dams is released downstream of the dikes during desilting by hydraulic means. The vases are characterized by complex rheological behaviors: rheofluidification (shear thinning), yield stress, plasticity, and thixotropy. In this work, we studied the effect of the aging time of the vase in the dam and of its mass concentration on the flow behavior of a vase from the Fergoug dam, located in the Mascara region. In order to test the reproducibility of the results, two replicates were performed for most of the experiments. The flow behavior of the vase, studied as a function of storage time and mass concentration, is analyzed with the Herschel-Bulkley model. Increasing the aging time of the vase in the dam increases its yield stress and consistency index. This phenomenon can be explained by the adsorption of water by the vase and the increase in volume by swelling, which modifies its rheological parameters. Increasing the mass concentration of the vase likewise leads to an increase in the yield stress and the consistency index with concentration; this behavior could be explained by interactions between the granules of the vase suspension. On the other hand, increasing the aging time and the mass concentration of the vase in the dam reduces its flow index. The study also showed an exponential decrease in apparent viscosity with increasing aging time of the vase in the dam. If a vase is allowed to age long enough, its yield stress becomes very large and its apparent viscosity likewise tends towards infinity; this can, for example, subsequently pose problems when dredging dams. For good dam management, it can therefore be deduced that the dredging time of the dams should be reduced as much as possible.
Keywords: vase of dam, aging time, rheological behavior, yield stress, apparent viscosity, thixotropy
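The Herschel-Bulkley analysis mentioned above can be sketched as a nonlinear fit of tau = tau0 + K * gamma_dot^n to a flow curve; the synthetic data points and starting guesses below are illustrative, not the Fergoug measurements.

```python
# Sketch: fitting the Herschel–Bulkley model tau = tau0 + K * gamma_dot**n to a
# flow curve to recover the yield stress, consistency index, and flow index.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot**n

gamma_dot = np.logspace(-1, 2, 25)                       # shear rate, 1/s
rng = np.random.default_rng(6)
# synthetic "measured" shear stresses with 3% multiplicative noise
tau = herschel_bulkley(gamma_dot, 12.0, 3.5, 0.45) * (1 + 0.03 * rng.normal(size=25))

popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[5.0, 1.0, 0.5])
tau0, K, n = popt
print(f"yield stress = {tau0:.2f} Pa, consistency = {K:.2f} Pa.s^n, flow index = {n:.3f}")
```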
Procedia PDF Downloads 28
18206 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is a problem that assigns available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over past years for both in the static and dynamic environment (denoted by SWTA and DWTA respectively). Due to the problem requirement to be solved in a relevant computational time, WTA has suffered from the solution efficiency. As a result, SWTA and DWTA problems have been solved in the limited situation of the battlefield. In this paper, the general situation under continuous time is considered by Time based Weapon Target Assignment (TWTA) problem. TWTA are studied using the mixed integer programming model, and three heuristic algorithms; decomposed opt-opt, decomposed opt-greedy, and greedy algorithms are suggested. Although the TWTA optimization model works inefficiently when it is characterized by a large size, the decomposed opt-opt algorithm based on the linearization and decomposition method extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve by the optimization model, several algorithms based on greedy is proposed. The models show lower performance value than that of the decomposed opt-opt algorithm, but very short time is needed to compute. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effectual methods can be developed for using TWTA on the battlefield.Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
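A toy version of the greedy idea described above; the value matrix, launcher capacities, and one-shot-per-target rule are simplifying assumptions for illustration, not the paper's formulation.

```python
# Sketch: greedy weapon–target assignment — repeatedly pick the launcher–target
# pair with the highest remaining value until shots or open targets run out.
import numpy as np

rng = np.random.default_rng(7)
n_launchers, n_targets = 5, 8
value = rng.uniform(0.1, 1.0, size=(n_launchers, n_targets))   # expected damage
shots_left = np.full(n_launchers, 2)          # each launcher may fire twice
target_open = np.ones(n_targets, dtype=bool)  # targets not yet engaged

assignment = []
while shots_left.sum() > 0 and target_open.any():
    masked = value.copy()
    masked[shots_left == 0, :] = -np.inf      # exhausted launchers
    masked[:, ~target_open] = -np.inf         # already-engaged targets
    i, j = np.unravel_index(np.argmax(masked), masked.shape)
    if not np.isfinite(masked[i, j]):
        break
    assignment.append((int(i), int(j), float(value[i, j])))
    shots_left[i] -= 1
    target_open[j] = False                    # one shot per target in this sketch

total = sum(v for _, _, v in assignment)
print(assignment, "total value:", round(total, 3))
```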
Procedia PDF Downloads 336
18205 Characteristics of Pore Pressure and Effective Stress Changes in Sandstone Reservoir Due to Hydrocarbon Production
Authors: Kurniawan Adha, Wan Ismail Wan Yusoff, Luluan Almanna Lubis
Abstract:
Preventing hazardous events during oil and gas operation is an important contribution of accurate pore pressure data. The availability of pore pressure data also contribute in reducing the operation cost. Suggested methods in pore pressure estimation were mostly complex by the many assumptions and hypothesis used. Basic properties which may have significant impact on estimation model are somehow being neglected. To date, most of pore pressure determinations are estimated by data model analysis and rarely include laboratory analysis, stratigraphy study or core check measurement. Basically, this study developed a model that might be applied to investigate the changes of pore pressure and effective stress due to hydrocarbon production. In general, this paper focused velocity model effect of pore pressure and effective stress changes due to hydrocarbon production with illustrated by changes in saturation. The core samples from Miri field from Sarawak Malaysia ware used in this study, where the formation consists of sandstone reservoir. The study area is divided into sixteen (16) layers and encompassed six facies (A-F) from the outcrop that is used for stratigraphy sequence model. The experimental work was firstly involving data collection through field study and developing stratigraphy sequence model based on outcrop study. Porosity and permeability measurements were then performed after samples were cut into 1.5 inch diameter core samples. Next, velocity was analyzed using SONIC OYO and AutoLab 500. Three (3) scenarios of saturation were also conducted to exhibit the production history of the samples used. Results from this study show the alterations of velocity for different saturation with different actions of effective stress and pore pressure. It was observed that sample with water saturation has the highest velocity while dry sample has the lowest value. In comparison with oil to samples with oil saturation, water saturated sample still leads with the highest value since water has higher fluid density than oil. Furthermore, water saturated sample exhibits velocity derived parameters, such as poisson’s ratio and P-wave velocity over S-wave velocity (Vp/Vs) The result shows that pore pressure value ware reduced due to the decreasing of fluid content. The decreasing of pore pressure result may soften the elastic mineral frame and have tendency to possess high velocity. The alteration of pore pressure by the changes in fluid content or saturation resulted in alteration of velocity value that has proportionate trend with the effective stress.Keywords: pore pressure, effective stress, production, miri formation
Procedia PDF Downloads 289
18204 Indoor Robot Positioning with Precise Correlation Computations over Walsh-Coded Lightwave Signal Sequences
Authors: Jen-Fa Huang, Yu-Wei Chiu, Jhe-Ren Cheng
Abstract:
Visible light communication (VLC) has become a useful technique based on LED light blinking, and several issues in indoor mobile robot positioning with LED blinking are examined in this paper. At the transmitter, we control the transceiver's blinking message, and orthogonal Walsh codes are adopted so that the auto-correlation function (ACF) can detect the signal sequences. At the robot receiver, the time frame is set to 1 ns for the signal passing from the transceiver to the mobile robot. After many periods, the peak value of the ACF is detected in the mobile robot, and the transceiver then immediately transmits the signal again. By capturing three peak values, we can determine the time difference of arrival (TDOA) between two peak-value intervals and finally analyze the accuracy of the robot position.
Keywords: visible light communication, auto-correlation function (ACF), peak value of ACF, time difference of arrival (TDOA)
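The Walsh-code correlation step can be sketched as follows; the code length, the delay, and the noise level are assumptions for illustration, not the experimental settings.

```python
# Sketch: generate orthogonal Walsh codes from a Hadamard matrix and locate the
# correlation peak of one code inside a received blink sequence, mirroring the
# ACF-based detection described in the abstract.
import numpy as np
from scipy.linalg import hadamard

codes = hadamard(16)                  # rows are mutually orthogonal Walsh codes
tx_code = codes[5]

rng = np.random.default_rng(8)
rx = np.zeros(200)
delay = 73                            # unknown arrival sample to be estimated
rx[delay:delay + 16] += tx_code
rx += 0.3 * rng.normal(size=rx.size)  # photodetector noise

corr = np.correlate(rx, tx_code, mode="valid")
est_delay = int(np.argmax(corr))
print("true delay:", delay, "estimated delay:", est_delay)
```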
Procedia PDF Downloads 326
18203 On the Creep of Concrete Structures
Authors: A. Brahma
Abstract:
Analysis of the deferred deformations of concrete under sustained load shows that creep plays a leading role in the deferred deformations of concrete structures. Knowledge of the creep characteristics of concrete is a necessary starting point in the design of structures for crack control. Such knowledge enables the designer to estimate the probable deformation in prestressed or reinforced concrete, and appropriate steps can be taken in design to accommodate this movement. In this study, we propose a prediction model that involves the principal parameters acting on the deferred behaviour of concrete structures. For the estimation of the model parameters, the Levenberg-Marquardt method has proven very satisfactory. A comparison between the experimental results and the predictions of the designed model shows that it is well suited to describing the evolution of the creep of concrete structures.
Keywords: concrete structure, creep, modelling, prediction
Procedia PDF Downloads 291
18202 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model
Authors: Aminah Muchdar, Nuraeni, Eddy
Abstract:
The aims of the research are: (1) to verify the CropSyst plant model against experimental field data for soybean and (2) to predict the planting time and potential yield of soybean using the CropSyst model. The research is divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from the field calibration are entered into the CropSyst model. The data required by the model are climate data, soil data, and crop genetic data. The agreement between the results obtained in the field and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, which shows that the CropSyst model is well suited. The calculated RRMSE of 1.922% shows that the relative error between the simulation predictions and the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybean, mainly on rain-fed land, is at the end of the rainy season; in the study above, the first planting time (June 2, 2015) gave the highest production because some rain still fell at that time. The Tanggamus variety is more resistant to delayed planting, since its percentage decrease in yield per decade of delay is lower than the average of all varieties.
Keywords: soybean, CropSyst, calibration, efficiency index, RRMSE
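A small sketch of the two fit statistics reported above, assuming EF is the usual Nash-Sutcliffe-type modelling efficiency and RRMSE is the RMSE normalised by the observed mean; the yield values are placeholders, not the study's data.

```python
# Sketch: model efficiency (EF) and relative RMSE (RRMSE) between observed and
# simulated values, as used to judge the CropSyst calibration.
import numpy as np

observed = np.array([2.10, 2.35, 1.95, 2.50, 2.20])   # e.g. t/ha, field measurements
simulated = np.array([2.05, 2.40, 2.00, 2.45, 2.30])  # e.g. t/ha, model output

resid = observed - simulated
ef = 1.0 - np.sum(resid**2) / np.sum((observed - observed.mean())**2)
rrmse = 100.0 * np.sqrt(np.mean(resid**2)) / observed.mean()
print(f"EF = {ef:.3f}, RRMSE = {rrmse:.2f} %")
```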
Procedia PDF Downloads 180
18201 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious functions, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles measurement around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycle at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, the eyes every 33 milliseconds dump their sensory data into the thalamus every day. The thalamus is going to perform a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It is all just occurring in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process, it just hides by operating at a faster time relativity. What’s interesting is time dilation is not the problem; it’s the solution. Einstein said there was no universal time.Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
18200 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture
Authors: Alp Arda
Abstract:
Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.Keywords: black-hole, timeline, urbanism, space and time, speculative architecture
Procedia PDF Downloads 73
18199 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death
Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh
Abstract:
All cases of stillbirth are undoubtedly subject to postmortem examination, since it is necessary to find out the cause of the stillbirth and to give a prognosis for future pregnancies and their outcomes. Determining the time of death, meaning the period from the moment of death until the birth of the fetus, is an important issue addressed during examination of the body of a stillborn. The determination of the time of fetal death is based on an assessment of the severity of the processes of maceration. The aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and the bodies of 15 newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before the autopsy. The signal intensity of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) was determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases the time of intrauterine death was up to 6 hours, in 2 cases 6-12 hours, in 4 cases 12-24 hours, in 9 cases 2-3 days, in 3 cases 1 week, and in 2 cases 1.5-2 weeks. In the 15 dead newborns, signs of maceration were naturally absent. Based on the SIK, SIF, and SIA values on the MR tomograms, we calculated a coefficient of MR maceration (M). The time of intrauterine death (MR-t, in hours) was calculated with our formula: MR-t = 16.87 + 95.38 × M² − 75.32 × M. A direct positive correlation between MR-t and the autopsy data was obtained for those who died at 22-40 gestational weeks with a time of death of not more than 1 week. Maceration after antenatal fetal death is characterized by changes in the T1-WI and T2-WI signals on postmortem MRI. The calculation of MR-t allows an accurate determination of the time of intrauterine death within the first week after death for stillbirths at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for the postmortem study of bodies, in particular the bodies of stillborns, to determine the time of intrauterine death. Postmortem MRI allows an objective and sufficiently accurate analysis of pathological processes, with the possibility of documentation, storage, and analysis after the burial of the body.
Keywords: intrauterine death, maceration, postmortem MRI, stillborn
Procedia PDF Downloads 125
18198 Support for Planning of Mobile Personnel Tasks by Solving Time-Dependent Routing Problems
Authors: Wlodzimierz Ogryczak, Tomasz Sliwinski, Jaroslaw Hurkala, Mariusz Kaleta, Bartosz Kozlowski, Piotr Palka
Abstract:
Implementation concepts of a decision support system for planning and management of mobile personnel tasks (sales representatives and others) are discussed. Large-scale periodic time-dependent vehicle routing and scheduling problems with complex constraints are solved for this purpose. Complex nonuniform constraints with respect to frequency, time windows, working time, etc. are taken into account with additional fast adaptive procedures for operational rescheduling of plans in the presence of various disturbances. Five individual solution quality indicators with respect to a single personnel person are considered. This paper deals with modeling issues corresponding to the problem and general solution concepts. The research was supported by the European Union through the European Regional Development Fund under the Operational Programme ‘Innovative Economy’ for the years 2007-2013; Priority 1 Research and development of modern technologies under the project POIG.01.03.01-14-076/12: 'Decision Support System for Large-Scale Periodic Vehicle Routing and Scheduling Problems with Complex Constraints.'Keywords: mobile personnel management, multiple criteria, time dependent, time windows, vehicle routing and scheduling
Procedia PDF Downloads 323
18197 A Comparative Analysis of Asymmetric Encryption Schemes on Android Messaging Service
Authors: Mabrouka Algherinai, Fatma Karkouri
Abstract:
Today, the Short Message Service (SMS) is an important means of communication. SMS is used not only in informal environments for communication and transactions, but also in formal environments such as institutions, organizations, companies, and the business world. Therefore, there is a need to secure the information transmitted through this medium, both in transit and at rest, and encryption has been identified as a means of providing that security. Several past studies have proposed and developed encryption algorithms for SMS and information security. This research aims to compare the performance of common asymmetric encryption algorithms for SMS security. It employs three algorithms, namely RSA, McEliece, and Rabin. Several experiments were performed on SMS messages of various sizes on an Android mobile device. The experimental results show that each of the three techniques has different key generation, encryption, and decryption times. The efficiency of an algorithm is determined by the time it takes for encryption, decryption, and key generation, and the best algorithm can be chosen based on the least time required for encryption. The results show the least encryption time when McEliece with size 4096 is used, while Rabin with size 4096 gives the longest encryption time and is therefore the least effective algorithm for encryption. The research also shows that McEliece with size 2048 has the least key generation time and is hence the best algorithm with respect to key generation. The results further show that RSA with size 1024 is the most preferable algorithm for decryption, as it gives the least decryption time.
Keywords: SMS, RSA, McEliece, Rabin
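For orientation, here is a sketch that times RSA key generation, encryption, and decryption of a short SMS-sized message with the Python `cryptography` package; McEliece and Rabin are not included in that package, so a full three-way comparison would need other implementations. The key size and message are illustrative.

```python
# Sketch: timing RSA-2048 key generation, OAEP encryption, and decryption of a
# short SMS-sized payload. Only RSA is shown; the other two schemes compared in
# the abstract are not available in this library.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"Meet at 14:00, platform 3."          # a typical short SMS payload
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

t0 = time.perf_counter()
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
t1 = time.perf_counter()
ciphertext = private_key.public_key().encrypt(message, oaep)
t2 = time.perf_counter()
plaintext = private_key.decrypt(ciphertext, oaep)
t3 = time.perf_counter()

assert plaintext == message
print(f"keygen {t1 - t0:.3f}s, encrypt {(t2 - t1) * 1e3:.2f}ms, decrypt {(t3 - t2) * 1e3:.2f}ms")
```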
Procedia PDF Downloads 163