Search results for: time truncated experiment
19063 A Case Study of Determining the Times of Overhauls and the Number of Spare Parts for Repairable Items in Rolling Stocks with Simulation
Authors: Ji Young Lee, Jong Woon Kim
Abstract:
It is essential to secure high availability of railway vehicles to realize high-quality and efficient railway service. Once availability decreases, the planned railway service cannot be provided: more cars must be reserved, additional cars must be purchased, or the frequency of railway service must be decreased. Such situations are a big loss in terms of the quality and cost of railway service, so operators make various efforts to keep the availability of railway vehicles high. To secure high availability, the idle time of the vehicles needs to be reduced, and the following methods are applied to railway vehicles. First, through modularized design, the exchange time for line-replaceable units is reduced, which means railway vehicles can be put back into service quickly. Second, to reduce periodic preventive maintenance time, short-period preventive maintenance is oriented toward testing to minimize the maintenance time, and reliability is secured through overhauls of each main component. With such design changes, modularized components are exchanged first at the time of vehicle failure or overhaul so that vehicles can return to service quickly, and the exchanged components are then repaired or overhauled. Spare components are therefore required for any future failures or overhauls. And, as components are modularized and their costs are high, it is considerably important to stock reasonable quantities of spare components. Especially when a number of railway vehicles are put into service simultaneously, their overhaul times arrive almost at the same time. Thus, for some vehicles, components need to be exchanged and overhauled before the appointed overhaul period so that these components can be secured as spare parts for the next vehicle's component overhaul. For this reason, component overhaul times and spare parts quantities should be decided at the same time. This study deals with the timing of overhauls for repairable components of railway vehicles and the calculation of spare parts quantities in consideration of future failures and overhauls. However, as railway vehicles are used according to the service schedule and maintenance work can only proceed after the service has closed, it is quite difficult to resolve this situation mathematically. Simulation software is therefore used in this study to analyze the overhaul times of repairable components of railway vehicles and the spare parts required for the railway systems.
Keywords: overhaul time, rolling stocks, simulation, spare parts
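As a rough illustration of the simulation approach described above, the sketch below counts how many components are simultaneously away for overhaul in a hypothetical fleet commissioned at the same time; all numbers (fleet size, overhaul interval, repair time) are invented for illustration and are not taken from the study.

```python
OVERHAUL_INTERVAL = 4.0   # years between component overhauls (assumed)
REPAIR_TIME = 0.25        # years to overhaul a removed component (assumed)
FLEET = 30                # vehicles commissioned at the same time
HORIZON = 12.0            # simulated service years

def spares_needed(stagger=0.0):
    """Peak number of components simultaneously away for overhaul,
    i.e. the spare stock required so that no vehicle waits for a part."""
    events = []
    for v in range(FLEET):
        t = stagger * v          # exchanging some components early staggers overhauls
        while (t := t + OVERHAUL_INTERVAL) < HORIZON:
            events.append((t, +1))                # component removed
            events.append((t + REPAIR_TIME, -1))  # component returns as a spare
    in_repair = peak = 0
    for _, delta in sorted(events):
        in_repair += delta
        peak = max(peak, in_repair)
    return peak

# staggering overhauls ahead of the appointed period shrinks the required spare stock
print(spares_needed(stagger=0.0), spares_needed(stagger=0.1))
```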
Procedia PDF Downloads 337
19062 Study of Operating Conditions Impact on Physicochemical and Functional Properties of Dairy Powder Produced by Spray-drying
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders reduce transportation cost and increase shelf life, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design-of-experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out using a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen, and spray-dried at different air pressures (between 1 and 3 bar) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in particles with a lower moisture level, which are less sticky; this can explain an increase in spray-drying yield as well as the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to get wet rapidly if they are not well stored. On the other hand, high temperature provokes a decrease in native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process significantly affects the physicochemical and functional properties of the powder. This study made it possible to better understand the links between spray-drying conditions and the physicochemical and functional properties of the powders. Therefore, mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
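The design-of-experiments analysis described above fits quadratic models of powder properties in the two factors studied (air pressure and outlet temperature). A minimal sketch of such a fit, with made-up design points and responses, could look like this:

```python
import numpy as np

# Hypothetical design points: air pressure P (bar) and outlet temperature T (degC)
P = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.0, 3.0])
T = np.array([75.0, 95.0, 75.0, 95.0, 75.0, 95.0, 85.0, 85.0, 85.0])
y = np.array([42.1, 40.3, 44.0, 43.1, 45.2, 44.8, 44.5, 41.9, 45.6])  # e.g. yield (%), invented

# Quadratic model: y = b0 + b1*P + b2*T + b3*P^2 + b4*T^2 + b5*P*T
X = np.column_stack([np.ones_like(P), P, T, P**2, T**2, P * T])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # fitted coefficients of the response surface
```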
Procedia PDF Downloads 65
19061 Investigation of Riprap Stability on Roughness Bridge Pier in River Bend
Authors: A. Alireza Masjedi, B. Amir Taeedi
Abstract:
In this research, two cylindrical piers, one without roughness and one with roughness, were placed with riprap around them, and a series of tests was carried out. Experiments were done with three relative diameters of riprap, a riprap density of 2.1, and one discharge rate of 27 l/s under clear-water conditions. In each experiment, the flow depth was measured at the failure threshold, and the stability number was then calculated using the data obtained. The results of the research showed that riprap stability at the pier with roughness is greater than at the pier without roughness, because the roughened pier is sharp-pointed and reduces the horseshoe vortex.
Keywords: riprap stability, roughness, river bend, Froude number
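The abstract does not define the stability number it uses; one commonly used form for riprap under clear-water conditions, shown here only as a plausible reference and not necessarily the authors' exact definition, is

```latex
N_s = \frac{V}{\sqrt{g\,(S_s - 1)\,d_{50}}}
```

where V is the approach flow velocity, g the gravitational acceleration, S_s the riprap specific gravity (2.1 here), and d_50 the median riprap diameter.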
Procedia PDF Downloads 356
19060 Effect of Riprap Stability on Roughness Bridge Pier in River Bend
Authors: Alireza Masjedi, Amir Taeedi
Abstract:
In this research, two cylindrical piers, one without roughness and one with roughness, were placed with riprap around them, and a series of tests was carried out. Experiments were done with three relative diameters of riprap, a riprap density of 2.1, and one discharge rate of 27 l/s under clear-water conditions. In each experiment, the flow depth was measured at the failure threshold, and the stability number was then calculated using the data obtained. The results of the research showed that riprap stability at the pier with roughness is greater than at the pier without roughness, because the roughened pier is sharp-pointed and reduces the horseshoe vortex.
Keywords: riprap stability, roughness, river bend, Froude number
Procedia PDF Downloads 354
19059 Study of the Energy Levels in the Structure of the Laser Diode GaInP
Authors: Abdelali Laid, Abid Hamza, Zeroukhi Houari, Sayah Naimi
Abstract:
This work concerns the study of the energy levels and the optimization of the intrinsic parameters (number of wells and their widths, width of the potential barrier, refractive index, etc.) and extrinsic parameters (temperature, pressure) of a laser-diode structure containing GaInP. The calculation methods used are: the empirical pseudopotential method, to determine the electronic band structures, and a graphical method, for the optimization. The results found are in agreement with experiment and theory.
Keywords: semiconductor, GaInP/AlGaInP, pseudopotential, energy, alloys
Procedia PDF Downloads 494
19058 A Case Study Comparing the Effect of Computer Assisted Task-Based Language Teaching and Computer-Assisted Form Focused Language Instruction on Language Production of Students Learning Arabic as a Foreign Language
Authors: Hanan K. Hassanein
Abstract:
Task-based language teaching (TBLT) and focus-on-form instruction (FFI) methods have been shown to improve the quality and quantity of immediate language production. However, studies that compare language production under TBLT versus FFI are few, and their results are inconsistent. Moreover, teaching Arabic using TBLT is a new field, with little research investigating its application inside classrooms. Furthermore, to the best knowledge of the researcher, there are no prior studies that compared teaching Arabic as a foreign language in a classroom setting using computer-assisted task-based language teaching (CATBLT) with computer-assisted form-focused language instruction (CAFFI). Accordingly, the focus of this presentation is to display CATBLT and CAFFI tools for teaching Arabic as a foreign language, as well as to present an experimental study that aims to identify whether or not CATBLT is the more effective instruction method. Effectiveness will be determined by comparing CATBLT and CAFFI in terms of the accuracy, lexical complexity, and fluency of the language produced by students. The participants of the study are 20 students enrolled in two intermediate-level Arabic as a foreign language classes. The experiment will take place over the course of 7 days. Based on a study conducted by Abdurrahman Arslanyilmaz for teaching Turkish as a second language, an in-house computer-assisted tool for TBLT and another for FFI will be designed for the experiment. The experimental group will be instructed using the in-house CATBLT tool, and the control group will be taught through the in-house CAFFI tool. The data to be analyzed are the dialogues produced by students in both the experimental and control groups when completing a task or communicating in conversational activities. The dialogues of both groups will be analyzed to understand the effect of the type of instruction (CATBLT or CAFFI) on accuracy, lexical complexity, and fluency. Thus, the study aims to demonstrate whether or not there is an instruction method that positively affects the language produced by students learning Arabic as a foreign language more than the other.
Keywords: computer assisted language teaching, foreign language teaching, form-focused instruction, task based language teaching
Procedia PDF Downloads 252
19057 Recirculated Sedimentation Method to Control Contamination for Algal Biomass Production
Authors: Ismail S. Bostanci, Ebru Akkaya
Abstract:
The production of microalgae-derived biodiesel, fertilizer, or industrial chemicals from wastewater has great potential; water from a municipal wastewater treatment plant in particular is a very important nutrient source for biofuel production. Microalgae biomass production in open pond systems is the lower-cost culture option, but there are many hurdles to commercial algal biomass production at large scale. One important technical bottleneck for microalgae production in open systems is culture contamination. Algae culture contaminants can generally be described as invading organisms that could cause a pond crash; these invading organisms can be competitors, parasites, and predators. Contamination is unavoidable in open systems, and potential contaminant organisms are already inoculated if wastewater is utilized for algal biomass cultivation. It is especially important to keep contaminants at an acceptable level in order to reach the true potential of algal biofuel production. There are several contamination management methods in the algae industry, ranging from mechanical and chemical to biological and growth-condition-change applications; however, none of them is accepted as a fully suitable contamination control method. This experiment describes an innovative contamination control method, the 'Recirculated Sedimentation Method', for managing contamination to avoid pond crashes. The method can be used for the production of algal biofuel, fertilizer, etc., and for algal wastewater treatment. To evaluate the performance of the method on an algal culture, an experiment was conducted for 90 days in a lab-scale raceway reactor (60 L) using non-sterilized and non-filtered wastewater (secondary effluent and centrate of anaerobic digestion). The application of the method provided the following: removal of contaminants (predators and diatoms) and other debris from the reactor without discharging the culture (with microscopic evidence); an increase in the raceway tank's suspended solids holding capacity (770 mg L-1); an increase in the ammonium removal rate (29.83 mg L-1 d-1); a decrease in algal and microbial biofilm formation on the inner walls of the reactor; and the washing out of generated nitrifiers from the reactor to prevent ammonium consumption.
Keywords: contamination control, microalgae culture contamination, pond crash, predator control
Procedia PDF Downloads 207
19056 Different Methods Anthocyanins Extracted from Saffron
Authors: Hashem Barati, Afshin Farahbakhsh
Abstract:
The flowers of saffron contain anthocyanins. Generally, extraction of anthocyanins takes place at low temperatures (below 30 °C), preferably under vacuum (to minimize degradation) and in an acidic environment. In order to extract the anthocyanins, the dried petals were added to 30 ml of acidic ethanol (pH = 2). The amount of petals, extraction time, temperature, and ethanol percentage were the variables selected. Total anthocyanin content was a function of both ethanol percentage and extraction time. To prepare SW with a pH of 3.5, different concentrations of 100, 400, 700, 1,000, and 2,000 ppm of sodium metabisulfite were added to aqueous sodium citrate. At the selected concentration, different extraction times of 20, 40, 60, 120, and 180 min were tested to determine the optimum extraction time. When the extraction time was extended from 20 to 60 min, the total recovered anthocyanins of the sulfite method increased from 650 to 710 mg/100 g. In the EW method, Cellubrix and Pectinex enzymes were added separately to the buffer solution at concentrations of 1%, 2.5%, 5%, 7%, 10%, and 12.5% and held for a 2-hour reaction time at a temperature of 40 °C. There was a considerable and significant difference between the trends of anthocyanin (Acys) content of tepals extracted with Pectinex enzyme at 5% concentration and with the AE solution.
Keywords: saffron, anthocyanins, acidic environment, acidic ethanol, Pectinex enzymes, Cellubrix enzymes, sodium metabisulfite
Procedia PDF Downloads 514
19055 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species
Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel
Abstract:
Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA specifically focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how the different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and at the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis
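As a sketch of the break-even price calculation mentioned above: discounting both the cost stream and the harvested tonnage at the desired rate of return gives the minimum delivered selling price. All cash flows, yields, and the rate below are invented placeholders, not values from the study.

```python
import numpy as np

years = np.arange(0, 9)                                             # 8-year cropping period
costs = np.array([900, 250, 250, 250, 250, 250, 250, 250, 250.0])   # $/ha/yr (assumed)
yields = np.array([0, 0, 0, 0, 12, 0, 0, 0, 12.0])                  # Mg/ha per coppice cut (assumed)
rate = 0.06                                                         # desired rate of return

disc = (1 + rate) ** -years
# Minimum delivered price ($/Mg) so that discounted revenue covers discounted cost
min_price = (costs @ disc) / (yields @ disc)
print(f"break-even delivered price: {min_price:.2f} $/Mg")
```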
Procedia PDF Downloads 83
19054 Counting People Utilizing Space-Time Imagery
Authors: Ahmed Elmarhomy, K. Terada
Abstract:
An automated method for counting passersby using virtual vertical measurement lines is proposed. The space-time image represents the human regions, which are extracted by a segmentation process. Different color spaces are used to perform the template matching. Proper template matching is achieved to determine the direction and speed of passing people. Distinguishing one passerby from two is investigated using the correlation between passerby speed and the human-pixel area. Finally, the effectiveness of the presented method is verified experimentally.
Keywords: counting people, measurement line, space-time image, segmentation, template matching
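A minimal sketch of the template-matching step, using plain normalized cross-correlation (a standard choice; the abstract does not specify the exact matching score used):

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) where the template best matches the image,
    scored by normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = (t * w).mean()
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Tracking the best-match position of a person template across successive columns of the space-time image gives the direction and speed of the passerby.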
Procedia PDF Downloads 453
19053 A Nonlinear Stochastic Differential Equation Model for Financial Bubbles and Crashes with Finite-Time Singularities
Authors: Haowen Xi
Abstract:
We propose and solve exactly a class of nonlinear generalizations of the Black-Scholes stochastic differential equation describing price bubble and crash dynamics. As a result of nonlinear positive feedback, the faster-than-exponential positive price growth (bubble formation) and negative price growth (crash formation) are found to be power-law finite-time singularities, in which bubble and crash price formation ends at a finite critical time tc. While most literature on the market bubble and crash process focuses on the nonlinear positive-feedback mechanism, very few studies concern the noise level in the same process. The present work adds to the market bubble and crash literature by studying the influence of external noise sources on the critical time tc of bubble and crash formation. Two main results will be discussed: (1) the analytical expression of the expected value of the critical time tc.
Keywords: bubble, crash, finite-time singularity, numerical simulation, price dynamics, stochastic differential equations
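A minimal sketch of the kind of dynamics described, assuming the standard nonlinear-feedback form used in the finite-time-singularity literature (the abstract does not give the authors' exact equation):

```latex
dp = \mu\, p^{m}\, dt + \sigma\, p^{m}\, dW_t, \qquad m > 1 .
```

In the noise-free limit (σ = 0), the solution blows up in finite time as p(t) ∝ (tc − t)^{−1/(m−1)}, with tc = p0^{1−m} / (μ(m − 1)), so the critical time is set by the initial price and the feedback strength; the noise term σ then perturbs tc, which is the effect studied here.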
Procedia PDF Downloads 133
19052 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing
Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea
Abstract:
Numerical methods like binomial and trinomial trees and finite difference methods can be used to price a wide range of option contracts for which there are no known analytical solutions; American options are the most famous options of that kind. Besides numerical methods, American options can be valued with approximation formulas, like the Bjerksund-Stensland formulas from 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computer time spent to carry out the calculation is very short. The computer time spent using numerical methods can vary from less than one second to several minutes or even hours. However, to be able to conduct a comparative analysis of numerical methods and the Bjerksund-Stensland formulas, we limit the computer calculation time of the numerical methods to less than one second. Therefore, we ask the question: which method will be most accurate at nearly the same computer calculation time?
Keywords: Bjerksund and Stensland approximations, computational analysis, finance, options pricing, numerical methods
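For reference, a compact version of one of the numerical methods compared: an American put priced on a Cox-Ross-Rubinstein binomial tree. The parameter values in the example call are arbitrary.

```python
import numpy as np

def american_put_crr(S0, K, r, sigma, T, steps=500):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)
    # stock prices at maturity (index j counts down-moves)
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(K - S, 0.0)                 # payoff at maturity
    for i in range(steps, 0, -1):              # backward induction
        S = S[:i] * d                          # node prices one step earlier
        V = disc * (p * V[:i] + (1 - p) * V[1:i + 1])
        V = np.maximum(V, K - S)               # early-exercise check
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))
```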
Procedia PDF Downloads 457
19051 Localization of Geospatial Events and Hoax Prediction in the UFO Database
Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi
Abstract:
Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are hoaxes; among those that seem legitimate, our task is not to establish that these events really involve flying objects from aliens in outer space. Rather, we intend to identify whether a report was a hoax, as identified by the UFO database team with their existing curation criteria. The database provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations and decide on the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented, used to separate true events from hoax events with the best features available, such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events, where one of the UFO reports strongly correlates with a missile test conducted in the United States.
Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events
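A minimal sketch of the hoax classifier described, using scikit-learn; the column names and values are assumed stand-ins for the NUFORC-derived features, not the actual schema of the database:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical frame of NUFORC-style reports (all values invented)
reports = pd.DataFrame({
    "region":      [3, 1, 7, 3, 5, 7, 1, 3],
    "week":        [14, 2, 33, 14, 47, 33, 9, 15],
    "time_period": [2, 0, 3, 2, 1, 3, 0, 2],   # e.g. binned hour of day
    "duration_s":  [120, 15, 600, 90, 30, 540, 10, 150],
    "is_hoax":     [0, 1, 0, 0, 1, 0, 1, 0],
})
X = reports.drop(columns="is_hoax")
y = reports["is_hoax"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # hoax-vs-true classification accuracy
```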
Procedia PDF Downloads 378
19050 Magnetohydrodynamic Couette Flow of Fractional Burgers' Fluid in an Annulus
Authors: Sani Isa, Ali Musa
Abstract:
A Burgers' fluid with a fractional-derivatives model in an annulus was analyzed. Combining the basic equations appropriately with the fractional Burgers' fluid model allows us to determine the velocity field, temperature and shear stress. The governing partial differential equations were solved using the combined Laplace transform method and a Riemann sum approximation to give the velocity field, temperature and shear stress of the fluid flow. The influence of various parameters, like the fractional parameters and the relaxation and retardation times, is drawn. The results obtained are simulated using Mathcad software and presented graphically. From the graphical results, we observed that the relaxation time helps the flow pattern; on the other hand, the other material constants resist the fluid flow, while the effects of the fractional parameters on the fluid flow are opposite to each other.
Keywords: Burgers' fluid, Laplace transform, fractional derivatives, annulus
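For reference, the fractional Burgers constitutive model is typically written as follows (a standard statement of the model for shear flow; the abstract does not spell out the authors' exact equations):

```latex
\left(1 + \lambda_1^{\alpha} D_t^{\alpha} + \lambda_2^{2\alpha} D_t^{2\alpha}\right)\tau
  = \mu \left(1 + \lambda_3^{\beta} D_t^{\beta}\right)\dot{\gamma},
```

where D_t^α denotes the Caputo fractional derivative, λ1 and λ2 are relaxation parameters, λ3 is a retardation parameter, and 0 < α, β ≤ 1.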
Procedia PDF Downloads 28
19049 Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions
Authors: Tatiana G. Smirnova, Stan G. Benjamin
Abstract:
The Rapid Update Cycle (RUC) land-surface model (LSM) has been the land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather predictions with an emphasis on severe weather and was originally intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with the hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998 and later in the Weather Research and Forecasting model (WRF)-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it was implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from the US weather service offices and the international WRF community, and also based on our own validation, RUC LSM has matured over the years. A sea-ice module was also added to RUC LSM for surface predictions over Arctic sea ice. Other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, RUC LSM is being tested in the regional application of the Unified Forecast System (UFS). The next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing. ESM-SnowMIP was the last of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM ranked highly in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which will be addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately, aggregating the solutions at the end of each time step. Such treatment elevates the importance of computing the snow cover fraction in the model. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. A detailed description of the changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations will be given in this paper.
Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes
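A toy contrast between a threshold-based snow cover fraction and a fractional one, of the kind discussed above; the functional forms and the critical snow water equivalent are assumptions for illustration only, not the actual RUC formulation:

```python
import numpy as np

def scf_threshold(swe_m, swe_crit=0.02):
    """Cell is either fully snow-covered or snow-free (simplistic approach)."""
    return np.where(swe_m >= swe_crit, 1.0, 0.0)

def scf_fractional(swe_m, swe_crit=0.02):
    """Partial snow cover grows smoothly with SWE, saturating at 1."""
    return np.minimum(1.0, swe_m / swe_crit)

swe = np.array([0.0, 0.005, 0.01, 0.02, 0.05])  # snow water equivalent, m
print(scf_threshold(swe), scf_fractional(swe))
```

With a fractional cover, the separately computed snow and snow-free budgets are weighted by the fraction when the solutions are aggregated at the end of each time step.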
Procedia PDF Downloads 89
19048 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures
Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen
Abstract:
Superhydrophobic surfaces are abundant in nature. Several surfaces, such as the wings of a butterfly, the legs of a water strider, the feet of a gecko, and the lotus leaf, show extreme water-repellent behaviour. Self-cleaning, stain-free fabrics, spill-resistant protective wear, and drag reduction in micro-fluidic devices are a few applications of superhydrophobic surfaces. In order to design robust superhydrophobic surfaces, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating a large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. A water contact angle as high as 159 ± 1° and contact angle hysteresis of less than 8° were observed. Drop impact studies on the superhydrophobic paper surface were carried out by impinging water droplets and capturing their dynamics through high-speed imaging. During drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop, and parameters such as contact time and normalized spread diameter were obtained. In contrast to earlier literature reports, we observed the contact time to be dependent on the impact velocity on the superhydrophobic surface. The total contact time was split into two components, spread time and recoil time. The recoil time was found to be dependent on the impact velocity, while the spread time on the surface did not show much variation with the impact velocity. Further, the normalized spreading parameter was found to increase with increasing impact velocity.
Keywords: contact angle, contact angle hysteresis, contact time, superhydrophobic
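For reference, the Weber number quoted above compares inertial and capillary effects for the impacting drop:

```latex
\mathrm{We} = \frac{\rho\, v^{2} D}{\sigma},
```

with ρ the liquid density, v the impact velocity, D the drop diameter, and σ the surface tension; sweeping v between runs is what varies We from 20 to 80.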
Procedia PDF Downloads 427
19047 Impact of Belongingness, Relational Communication, Religiosity and Screen Time of College Student Levels of Anxiety
Authors: Cherri Kelly Seese, Renee Bourdeaux, Sarah Drivdahl
Abstract:
Emerging adults in the United States are currently experiencing high levels of anxiety, so it is imperative to uncover insulating factors that mitigate the impact of anxiety. This study aims to explore how constructs such as belongingness, relational communication, screen time, and religiosity impact the anxiety levels of emerging adults. Approximately 250 college students from a small, private university on the West Coast were given an online assessment that included the General Belongingness Scale, the Relational Communication Scale, the Duke University Religion Index (DUREL), a survey of screen time, and the Beck Anxiety Inventory. A MANOVA was conducted assessing the effects of multiple variables (scores on the GBS, RCS, self-reported screen time, and DUREL) across the four levels of anxiety measured by the BAI (minimal = 1, mild = 2, moderate = 3, severe = 4). Results indicated a significant relationship between one's sense of belonging and one's reported level of anxiety. These findings have implications for systems, like universities, churches, and corporations, that want to improve young adults' levels of anxiety.
Keywords: anxiety, belongingness, relational communication, religiosity, screen time
Procedia PDF Downloads 175
19046 Performance Improvement of Cooperative Scheme in Wireless OFDM Systems
Authors: Ki-Ro Kim, Seung-Jun Yu, Hyoung-Kyu Song
Abstract:
Recently, wireless communication systems have been required to offer high quality and provide high bit rate data services. Researchers have studied various multiple-antenna schemes to meet this demand. In practical applications, it is difficult to deploy multiple antennas because of limited size and cost. Cooperative diversity techniques have been proposed to overcome these limitations, and cooperative communications have been widely investigated to improve the performance of wireless communication. Among diversity schemes, the space-time block code has been widely studied for cooperative communication systems. In this paper, we propose a new cooperative scheme using pre-coding and space-time block codes. The proposed cooperative scheme provides improved error performance compared with a conventional cooperative scheme based on space-time block coding alone.
Keywords: cooperative communication, space-time block coding, pre-coding
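The classical space-time block code in this setting is the Alamouti scheme; as a reference point (the abstract does not detail the authors' exact code or precoder), two symbols s1, s2 are transmitted over two antennas, or a source-relay pair, in two time slots as

```latex
\mathbf{S} = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},
```

whose orthogonal columns allow simple linear decoding with full transmit diversity; a pre-coder then multiplies S by a matrix matched to the channel statistics.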
Procedia PDF Downloads 360
19045 A Trapezoidal-Like Integrator for the Numerical Solution of One-Dimensional Time Dependent Schrödinger Equation
Authors: Johnson Oladele Fatokun, I. P. Akpan
Abstract:
In this paper, the one-dimensional time-dependent Schrödinger equation is discretized by the method of lines, using a second-order finite difference approximation to replace the second-order spatial derivative. The evolving system of stiff ordinary differential equations (ODEs) in time is solved numerically by an L-stable trapezoidal-like integrator. Results show accuracy with a relative maximum error of order 10⁻⁴ in the interval of consideration. The performance of the method compared to an existing scheme is considered favorable.
Keywords: Schrödinger's equation, partial differential equations, method of lines (MOL), stiff ODE, trapezoidal-like integrator
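A minimal method-of-lines sketch under simplifying assumptions (free particle, ħ = m = 1, homogeneous Dirichlet ends); the time step below uses the plain trapezoidal rule (Crank-Nicolson) as a stand-in for the paper's L-stable trapezoidal-like integrator:

```python
import numpy as np

# Spatial grid and second-order central-difference Laplacian
N, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / h**2

# Method of lines: i psi_t = -(1/2) psi_xx  =>  psi_t = A psi, A = (i/2) lap
A = 0.5j * lap

# Trapezoidal step: (I - dt/2 A) psi_new = (I + dt/2 A) psi_old
dt = 0.01
I = np.eye(N)
step = np.linalg.solve(I - (dt / 2) * A, I + (dt / 2) * A)

psi = np.exp(-x**2) * np.exp(2j * x)   # Gaussian wave packet, k = 2
for _ in range(200):
    psi = step @ psi
print(np.sum(np.abs(psi)**2) * h)      # norm is preserved by the scheme
```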
Procedia PDF Downloads 418
19044 Effect of Aging Time and Mass Concentration on the Rheological Behavior of Vase of Dam
Authors: Hammadi Larbi
Abstract:
Water erosion, the main cause of the siltation of dams, is a natural phenomenon governed by physical factors such as aggressiveness, climate change, topography, lithology, and vegetation cover. Currently, the vase (silt) of certain dams is released downstream of the dikes during desilting by hydraulic means. The vases are characterized by complex rheological behaviors: shear-thinning, yield stress, plasticity, and thixotropy. In this work, we studied the effects of the aging time of the vase in the dam and of the mass concentration of the vase on the flow behavior of a vase from the Fergoug dam, located in the Mascara region. In order to test the reproducibility of the results, two replicates were performed for most of the experiments. The flow behavior of the vase, studied as a function of storage time and mass concentration, is analyzed with the Herschel-Bulkley model. Increasing the aging time of the vase in the dam causes an increase in the yield stress and the consistency index of the vase; this phenomenon can be explained by the adsorption of water by the vase and the increase in volume by swelling, which modify the rheological parameters of the vase. Increasing the mass concentration of the vase leads to an increase in the yield stress and the consistency index as functions of concentration; this behavior could be explained by interactions between the granules of the vase suspension. On the other hand, increases in the aging time and the mass concentration of the vase in the dam cause a reduction in its flow index. The study also showed an exponential decrease in apparent viscosity with increasing aging time of the vase in the dam. If a vase is allowed to age long enough for the yield stress to approach infinity, its apparent viscosity also tends towards infinity; this can, for example, subsequently pose problems when dredging dams. For good dam management, it can therefore be deduced that the time before dredging the dams should be reduced as much as possible.
Keywords: vase of dam, aging time, rheological behavior, yield stress, apparent viscosity, thixotropy
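The Herschel-Bulkley model used in the analysis relates shear stress to shear rate as

```latex
\tau = \tau_0 + K\,\dot{\gamma}^{\,n},
```

where τ0 is the yield stress, K the consistency index, and n the flow index; the abstract reports τ0 and K rising, and n falling, with aging time and mass concentration.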
Procedia PDF Downloads 31
19043 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is a problem that assigns available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over past years, both for the static and the dynamic environment (denoted SWTA and DWTA, respectively). Because the problem must be solved in a relevant computational time, WTA has suffered from poor solution efficiency; as a result, SWTA and DWTA problems have been solved only in limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: the decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently when the problem is characterized by a large size, the decomposed opt-opt algorithm, based on the linearization and decomposition method, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve with the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm, but they need only a very short time to compute. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effective methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
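A toy version of the greedy baseline: repeatedly assign the launcher-target pair with the highest marginal expected value destroyed. The kill probabilities and target values are invented, and the time dimension of TWTA is omitted for brevity.

```python
def greedy_wta(kill_prob, target_value):
    """kill_prob[i][j]: P(launcher i destroys target j)."""
    survival = [1.0] * len(target_value)          # P(target j survives so far)
    free = set(range(len(kill_prob)))
    plan = {}
    while free:
        # marginal expected value of assigning launcher i to target j
        gain, i, j = max((target_value[j] * survival[j] * kill_prob[i][j], i, j)
                         for i in free for j in range(len(target_value)))
        if gain <= 0:
            break
        plan[i] = j
        survival[j] *= 1 - kill_prob[i][j]
        free.remove(i)
    return plan

kp = [[0.6, 0.3], [0.4, 0.7], [0.5, 0.5]]
print(greedy_wta(kp, target_value=[10.0, 8.0]))
```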
Procedia PDF Downloads 336
19042 Travel Delay and Modal Split Analysis: A Case Study
Authors: H. S. Sathish, H. S. Jagadeesh, Skanda Kumar
Abstract:
A journey time and delay study is used to evaluate the quality of service and the travel time; such a study can also be used to evaluate the quality of traffic movement along the route and to determine the locations, types, and extent of traffic delays. Components of delay are boarding and alighting, the issue of tickets, other causes, and the distance between stops. This study investigates the total journey time required to travel along the stretch and the influence of the delays. The route runs from Kempegowda Bus Station to Yelahanka Satellite Station in Bangalore City; the length of the stretch is 16.5 km, and it features an elevated highway connecting to Bangalore International Airport and the extension of the metro transit stretch. A modal split analysis has been done for this stretch. Regression analysis shows that total journey time is moderately affected by the delay due to boarding and alighting, while the delay due to the issue of tickets affects the journey time to a greater extent. Several of the delay factors significantly affecting journey time are evident from the F-test at the 10 percent level of confidence. Along this stretch, work trips are the most prevalent, as indicated by the O-D study. The modal shift analysis indicates that about 70 percent of commuters are ready to shift from the current system to the Metro Rail System, which carries the maximum number of trips compared to private modes. Hence, Metro is a highly viable mode choice for the Bangalore Metropolitan City.
Keywords: delay, journey time, modal choice, regression analysis
Procedia PDF Downloads 497
19041 Indoor Robot Positioning with Precise Correlation Computations over Walsh-Coded Lightwave Signal Sequences
Authors: Jen-Fa Huang, Yu-Wei Chiu, Jhe-Ren Cheng
Abstract:
The visible light communication (VLC) technique has become a useful method based on LED light blinking. Several issues in indoor mobile robot positioning with LED blinking are examined in this paper. At the transmitter, we control the transceiver's blinking message. Orthogonal Walsh codes are adopted for this purpose, so that signal sequences can be detected through their auto-correlation function (ACF). In the robot receiver, we set the time frame to 1 ns for the signal passing from the transceiver to the mobile robot. After many time periods, the peak value of the ACF is detected in the mobile robot; the transceiver then immediately transmits the signal again. By capturing three peak values, we can determine the time difference of arrival (TDOA) between two peak-value intervals and finally analyze the accuracy of the robot position.
Keywords: visible light communication, auto-correlation function (ACF), peak value of ACF, time difference of arrival (TDOA)
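A minimal sketch of the Walsh-code correlation step; the code order, the chosen code row, and the repetition count are arbitrary demo values:

```python
import numpy as np

def walsh(order: int) -> np.ndarray:
    """Hadamard-Walsh code matrix of size 2**order (rows are codes)."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(4)                 # 16 orthogonal codes of length 16
tx = np.tile(codes[5], 3)        # transceiver repeats one code
# correlate the received sequence against the known code to find peaks
acf = np.correlate(tx, codes[5], mode="full")
peaks = np.flatnonzero(acf == acf.max())
print(np.diff(peaks))            # peak spacing gives the timing reference
```

Because the codes are orthogonal, correlating against the wrong code row yields no comparable peak, which is what lets the receiver pick out its transceiver's sequence.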
Procedia PDF Downloads 326
19040 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model
Authors: Aminah Muchdar, Nuraeni, Eddy
Abstract:
The aims of the research are: (1) to verify the CropSyst crop model against experimental field data for soybean, and (2) to predict the planting time and potential yield of soybean with the use of the CropSyst model. This research is divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from the field calibration are entered into the CropSyst model. The required model inputs are climate data, soil data, and crop genetic data. The agreement between the results obtained in the field and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model is well suited for use. The calculated RRMSE of 1.922% shows that the error between the simulated predictions and the results obtained in the field is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate time for planting soybeans, mainly on rain-fed land, is at the end of the rainy season; in this study, the first planting time (June 2, 2015) gave the highest production, because at that time there was still some rain. The Tanggamus variety is more resistant to late planting, as its percentage decrease in yield per decade of delay is lower than the average of all varieties.
Keywords: soybean, CropSyst, calibration, efficiency index, RRMSE
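The two goodness-of-fit statistics quoted are conventionally defined as follows (standard definitions, assumed to match the authors' usage), with O_i the observed values, P_i the simulated values, and Ō the observed mean:

```latex
EF = 1 - \frac{\sum_{i=1}^{n}(O_i - P_i)^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2},
\qquad
RRMSE = \frac{100}{\bar{O}}\sqrt{\frac{1}{n}\sum_{i=1}^{n}(O_i - P_i)^2}.
```

EF close to 1 means the model explains nearly all observed variance, and RRMSE expresses the root-mean-square error as a percentage of the observed mean.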
Procedia PDF Downloads 182
19039 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain's subconscious and conscious processes function, we must conquer the physics of unity, which leads to duality's algorithm. The subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds: three different observers experiencing time differently. To bridge the observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So the eyes dump their sensory data into the thalamus every 33 milliseconds, all day long, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What's interesting is that time dilation is not the problem; it's the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 127
19038 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture
Authors: Alp Arda
Abstract:
Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.
Keywords: black-hole, timeline, urbanism, space and time, speculative architecture
Procedia PDF Downloads 73
19037 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin
Authors: Mikhail O. Eremin
Abstract:
The present work is devoted to the experimental study and numerical modelling of the failure of rocks typical for the Kuzbass coal basin (Russia). The main goal was to define the strength and deformation characteristics of the rocks on the basis of uniaxial compression and three-point bending loadings, and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc. of different series – Kolchuginsk, Tarbagansk, Balohonsk) manifest brittle and quasi-brittle failure. The strength characteristics for both tension and compression were found; other characteristics were obtained from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), the mathematical and structural models were built, and numerical modelling of failure under the different types of loading was carried out. The effective characteristics and character of failure obtained from the modelling correspond to the experiment, and thus the mathematical model was verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics – mass, momentum, and energy. Each rock has a significantly anisotropic structure; however, each crystallite might be considered isotropic, and then the whole rock model has a quasi-isotropic structure. This idea gives an opportunity to use Hooke's law inside each crystallite, thus explicitly accounting for the anisotropy of the rocks and the stress-strain state under loading. Inelastic behavior is described in the framework of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. Damage accumulation theory is also implemented in order to describe the failure process. The obtained effective characteristics of the rocks are then used for modelling rock mass evolution when mining is carried out, either by open-pit or by underground openings.
Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression
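For reference, the Drucker-Prager yield criterion named above is commonly written as (standard form; the paper's modification is not specified in the abstract):

```latex
f(\boldsymbol{\sigma}) = \alpha\, I_1 + \sqrt{J_2} - k = 0,
```

where I_1 = tr σ is the first stress invariant, J_2 the second invariant of the deviatoric stress, and α, k material constants calibrated from the measured tensile and compressive strengths; α = 0 recovers the von Mises criterion.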
Procedia PDF Downloads 176
19036 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death
Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh
Abstract:
All cases of stillbirth are undoubtedly subject to postmortem examination, since it is necessary to find out the cause of the stillbirth, as well as the prognosis for future pregnancies and their outcomes. Determination of the time of death, meaning the period from the moment of death until the birth of the fetus, is an important issue addressed during the examination of the body of a stillborn; it is based on the assessment of the severity of the processes of maceration. Our goal was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillborns (22-39 gestational weeks), and the bodies of 15 newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before the autopsy. The signal intensities of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) were determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and the time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases, the time of intrauterine death was up to 6 hours; in 2 cases, 6-12 hours; in 4, 12-24 hours; in 9, 2-3 days; in 3, 1 week; and in 2, 1.5-2 weeks. The dead newborns, naturally, showed no signs of maceration. Based on the SIK, SIF, and SIA values on the MR tomograms, we calculated the coefficient of MR maceration (M). The time of intrauterine death (MR-t, in hours) was calculated by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation of MR-t with the autopsy data was obtained for stillborns who died at 22-40 gestational weeks with a time of death of no more than 1 week. Maceration in antenatal fetal death is characterized by changes in the T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows the time of intrauterine death to be defined accurately within one week for stillborns who died at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for the postmortem study of bodies, in particular the bodies of stillborns, to determine the time of intrauterine death. Postmortem MRI allows an objective and sufficiently accurate analysis of pathological processes, with the possibility of documentation, storage, and analysis after the burial of the body.
Keywords: intrauterine death, maceration, postmortem MRI, stillborn
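The regression above is straightforward to apply; a small sketch using exactly the coefficients quoted in the abstract:

```python
def mr_t(m: float) -> float:
    """Estimated time of intrauterine death (hours) from the
    MR maceration coefficient M, per the abstract's regression."""
    return 16.87 + 95.38 * m**2 - 75.32 * m

print(mr_t(0.5))  # e.g. M = 0.5 gives about 3.1 hours
```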
Procedia PDF Downloads 126
19035 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the bag-of-words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last experiment used a combination of unigrams and bigrams to extract features for classification. To test the effectiveness of the extracted feature sets for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction. These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
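A compact version of the local alignment at the heart of the feature extraction (classic Smith-Waterman recursion; the scoring values are typical defaults, not necessarily those used in the paper):

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    """Score of the best local alignment between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# documents in the same class are expected to share high-scoring regions
print(smith_waterman("body mass index 35", "the body mass index was 35"))
```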
Procedia PDF Downloads 298
19034 Support for Planning of Mobile Personnel Tasks by Solving Time-Dependent Routing Problems
Authors: Wlodzimierz Ogryczak, Tomasz Sliwinski, Jaroslaw Hurkala, Mariusz Kaleta, Bartosz Kozlowski, Piotr Palka
Abstract:
Implementation concepts of a decision support system for the planning and management of mobile personnel tasks (sales representatives and others) are discussed. Large-scale periodic time-dependent vehicle routing and scheduling problems with complex constraints are solved for this purpose. Complex nonuniform constraints with respect to frequency, time windows, working time, etc. are taken into account, with additional fast adaptive procedures for the operational rescheduling of plans in the presence of various disturbances. Five individual solution quality indicators with respect to a single member of personnel are considered. This paper deals with the modeling issues corresponding to the problem and with general solution concepts. The research was supported by the European Union through the European Regional Development Fund under the Operational Programme ‘Innovative Economy’ for the years 2007-2013; Priority 1 Research and development of modern technologies under the project POIG.01.03.01-14-076/12: 'Decision Support System for Large-Scale Periodic Vehicle Routing and Scheduling Problems with Complex Constraints.'
Keywords: mobile personnel management, multiple criteria, time dependent, time windows, vehicle routing and scheduling
Procedia PDF Downloads 323