Search results for: stochastic signals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1438

388 Reallocation of Bed Capacity in a Hospital Combining Discrete Event Simulation and Integer Linear Programming

Authors: Muhammed Ordu, Eren Demir, Chris Tofallis

Abstract:

The number of inpatient admissions in the UK has been increasing significantly over the past decade. These increases cause bed occupancy rates to exceed the target level (85%) set by the Department of Health in England. Therefore, hospital service managers are struggling to better manage key resources such as beds. On the other hand, this severe demand pressure might lead to confusion in wards. For example, patients can be admitted to the ward of another inpatient specialty due to lack of resources (i.e., beds). This study aims to develop a simulation-optimization model to reallocate the available number of beds in a mid-sized hospital in the UK. A hospital simulation model was developed to capture the stochastic behaviours of the hospital by taking into account the accident and emergency department, all outpatient and inpatient services, and the interactions between them. Several outputs of the simulation model (e.g., average length of stay and revenue) were generated as inputs for the optimization model. An integer linear programming model was developed under a number of constraints (financial, demand, target level of bed occupancy rate and staffing level) with the aim of maximizing the number of admitted patients. In addition, a sensitivity analysis was carried out by taking into account unexpected increases in inpatient demand over the next 12 months. As a result, the approach proposed in this study optimally reallocates the available number of beds for each inpatient specialty and reveals that 74 beds are idle. In addition, the findings of the study indicate that the hospital wards will be able to cope with at most a 14% demand increase in the projected year. In conclusion, this paper sheds new light on how best to reallocate beds in order to cope with current and future demand for healthcare services.
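
To illustrate the optimization stage described above, here is a minimal sketch of a bed-reallocation integer linear program in PuLP. The specialty names, demand figures, lengths of stay, bed stock and occupancy target used below are illustrative assumptions, not values from the study.

```python
# Hypothetical bed-reallocation ILP sketch (illustrative data, not the paper's).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger, value

specialties = ["general_medicine", "surgery", "orthopaedics"]
demand = {"general_medicine": 120, "surgery": 90, "orthopaedics": 60}    # projected monthly admissions
los = {"general_medicine": 5.2, "surgery": 3.8, "orthopaedics": 4.5}     # avg length of stay (days), e.g. from simulation
total_beds = 300
occupancy_target = 0.85   # Department of Health target used as an upper bound
days = 30                 # planning horizon (days)

prob = LpProblem("bed_reallocation", LpMaximize)
beds = {s: LpVariable(f"beds_{s}", lowBound=0, cat=LpInteger) for s in specialties}
admits = {s: LpVariable(f"admits_{s}", lowBound=0, cat=LpInteger) for s in specialties}

# Objective: maximise the number of admitted patients across specialties.
prob += lpSum(admits[s] for s in specialties)

for s in specialties:
    prob += admits[s] <= demand[s]                                     # cannot admit more than demand
    prob += admits[s] * los[s] <= beds[s] * days * occupancy_target    # bed-day capacity at target occupancy
prob += lpSum(beds[s] for s in specialties) <= total_beds              # total bed stock

prob.solve()
for s in specialties:
    print(s, int(value(beds[s])), "beds,", int(value(admits[s])), "admissions")
print("idle beds:", total_beds - int(sum(value(beds[s]) for s in specialties)))
```

Solving the model returns an integer bed allocation per specialty and the corresponding admissions, from which idle beds can be read off in the spirit of the paper's analysis.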

Keywords: bed occupancy rate, bed reallocation, discrete event simulation, inpatient admissions, integer linear programming, projected usage

Procedia PDF Downloads 144
387 The Capital Expenditure Reputation from Investor Perspective: A Signal of Better Future Performance

Authors: Juniarti, Agus Arianto Toly

Abstract:

This study aims to examine the effect of capital expenditure on investors' responses. The respondents were companies with the best stock performance in each sector in 2017. The observation period is 2017 to 2019. The top 10 companies in each sector with the best stock performance among companies listed on the Indonesia Stock Exchange were selected. The main variables are a growth signal, proxied by growth in capital spending and capital expenditure; risk; and investor response, proxied by cumulative abnormal returns (CAR). Financial performance as measured by ROA is a control variable in this study. The results show that the growth signal measured by capital expenditures is responded to positively by the market, and that risk moderates this influence: companies with high risk are responded to negatively by investors, and vice versa. This finding corrects previous findings that only looked at the growth-signal aspect without linking it to risk. In addition, these findings reinforce the argument that investors buy the future of the company, not momentary financial performance. This can be seen from the absence of ROA influence on investor response. This study finds that companies need to manage risk appropriately, because the risk aspect of the company is a crucial factor for investors. High risk eliminates the benefits of strategic decisions, in this case in the form of capital expenditures.

Keywords: capital expenditure, growth signals, investor response, risk

Procedia PDF Downloads 141
386 A Distributed Mobile Agent-Based Intrusion Detection System for MANET

Authors: Maad Kamal Al-Anni

Abstract:

This study concerns an artificial neural network, specifically a multilayer perceptron (MLP), applied to the classification and clustering of Mobile Ad hoc Network vulnerabilities. A mobile ad hoc network (MANET) is a ubiquitous intelligent internetwork of devices that can sense their environment, built from an autonomous system of mobile nodes connected via wireless links. Security is the most important subject in a MANET due to the easy penetration scenarios that occur in such an auto-configuring network. One of the powerful techniques used for inspecting network packets is the Intrusion Detection System (IDS); in this article, we show the effectiveness of artificial neural networks used for machine learning, along with a stochastic approach (information gain), to classify malicious behaviours in a simulated network with respect to different IDS techniques. The monitoring agent is responsible for the detection inference engine; the audit data is collected from the collecting agent by simulating node attacks, and the outputs are contrasted with the normal behaviours of the framework. Whenever there is any deviation from the ordinary behaviours, the monitoring agent considers the event an attack. In this article we demonstrate a signature-based IDS approach in a MANET by implementing the back propagation algorithm over an ensemble-based Traffic Table (TT), so that the signatures of malicious behaviours or undesirable activities can be significantly prognosticated and efficiently figured out by tuning the parametric set-up of the back propagation algorithm. The experimental results empirically show its effectiveness, with a detection rate of up to 98.6 percent. Performance metrics are also included in this article, presented with Xgraph plots of different measures such as Packet Delivery Ratio (PDR), Throughput (TP), and Average Delay (AD).
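
As a rough illustration of the pipeline described (information-gain feature ranking followed by a back-propagation MLP classifier), here is a minimal sketch; the traffic-table features and attack labels are synthetic placeholders rather than the paper's simulated MANET data.

```python
# Hedged sketch: information gain (mutual information) feature selection + MLP classifier.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                       # stand-in traffic-table features (PDR, delay, ...)
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)       # stand-in labels: 1 = malicious behaviour

# Information gain ranks features; keep the most discriminative ones.
gain = mutual_info_classif(X, y, random_state=0)
top = np.argsort(gain)[-8:]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)  # back-propagation MLP
clf.fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```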

Keywords: Intrusion Detection System (IDS), Mobile Adhoc Networks (MANET), Back Propagation Algorithm (BPA), Neural Networks (NN)

Procedia PDF Downloads 194
385 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results prove that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.
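
For the feature-extraction step mentioned above, a minimal t-SNE sketch is shown below; the hybrid WLAN/LTE fingerprint matrix is synthetic and only illustrates the call pattern, not the paper's dataset.

```python
# Hedged sketch of t-SNE feature extraction on radio fingerprints (synthetic data).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
fingerprints = rng.normal(size=(500, 60))   # rows: reference points, cols: WLAN + LTE signal measurements

# t-SNE projects the noisy hybrid fingerprints to a low-dimensional embedding
# that preserves local neighbourhood structure (dominant features).
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(embedding.shape)   # (500, 2) feature vectors that could feed radio map construction
```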

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 74
384 Fault Detection and Isolation in Sensors and Actuators of Wind Turbines

Authors: Shahrokh Barati, Reza Ramezani

Abstract:

Due to countries' growing attention to renewable energy production, the demand for energy from renewable sources has gone up; among renewable energy sources, wind energy has shown the fastest growth in recent years. In this regard, in order to increase the availability of wind turbines, the use of a Fault Detection and Isolation (FDI) system is necessary. Wind turbines are subject to various faults such as sensor faults, actuator faults, network connection faults, mechanical faults, and faults in the generator subsystem. Although sensors and actuators account for a large number of faults in wind turbines, they have been discussed less in the literature. Therefore, in this work, we focus our attention on designing a sensor and actuator fault detection and isolation algorithm and a fault-tolerant control system (FTCS) for wind turbines. The aim of this research is to propose a comprehensive fault detection and isolation system for the sensors and actuators of a wind turbine based on data-driven approaches. To achieve this goal, the features of measurable signals in a real wind turbine are extracted under all operating conditions. The next step is feature selection among the extracted features. Features are selected that lead to maximum class separation; classifier networks are implemented in parallel, and the results of the classifiers are fused together. In order to maximize the reliability of the decision on a fault, the property of fault repeatability is used.

Keywords: FDI, wind turbines, sensors and actuators faults, renewable energy

Procedia PDF Downloads 400
383 Studying the Dynamical Response of Nano-Microelectromechanical Devices for Nanomechanical Testing of Nanostructures

Authors: Mohammad Reza Zamani Kouhpanji

Abstract:

Characterizing the fatigue and fracture properties of nanostructures is one of the most challenging tasks in nanoscience and nanotechnology due to the lack of a MEMS/NEMS device for generating uniform cyclic loadings at high frequencies. Here, the dynamic response of a recently proposed MEMS/NEMS device under different input signals is completely investigated. This MEMS/NEMS device is designed and modeled based on the electromagnetic force induced between paired parallel wires carrying electrical currents, known as Ampere's Force Law (AFL). Since this MEMS/NEMS device only uses two paired wires for the actuation part and the sensing part, it exhibits a highly sensitive and linear response for nanostructures of any stiffness and shape (single or arrays of nanowires, nanotubes, nanosheets or nanowalls). In addition to studying the maximum gains at different resonance frequencies of the MEMS/NEMS device, its dynamical responses are investigated for different inputs and nanostructure properties to demonstrate the capability, usability, and reliability of the device for a wide range of nanostructures. This MEMS/NEMS device can be readily integrated into SEM/TEM instruments to provide real-time study of the fatigue and fracture properties of nanostructures as well as their softening or hardening behaviors, and the initiation and/or propagation of nanocracks in them.

Keywords: MEMS/NEMS devices, paired wire actuators and sensors, dynamical response, fatigue and fracture characterization, Ampere’s force law

Procedia PDF Downloads 399
382 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments

Authors: Xiaoqin Wang, Li Yin

Abstract:

Background and purpose: In many practices, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest, and there exist time-dependent covariates between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of the work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates often have influences from earlier treatments as well as on subsequent treatments. Consequently, the standard parameters, i.e., the means of the outcome given all treatments and covariates, are essentially all different (null paradox). Furthermore, the dimension of the parameters is huge (curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead of standard parameters, we use point effects of treatments to develop a likelihood-based parametric approach to the modeling of these causal effects, and we are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects from a sequence of treatments in the familiar framework of single-point causal inference. The simulation shows that our method achieves not only an unbiased estimate for the causal effect but also the nominal level of type I error and a low level of type II error for hypothesis testing. We have applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than the other countries' approaches for COVID-19 mortality, and that the poor performance was largely due to its early measures during the initial period of the pandemic.

Keywords: causal effect, point effect, statistical modelling, sequential causal inference

Procedia PDF Downloads 205
381 The Disruptive Effect of COVID-19 on the Informativeness of Dividend Increases: Some Evidence from Johannesburg Stock Exchange-Listed Companies

Authors: Faustina Masocha

Abstract:

This study sought to determine whether the Covid-19 pandemic played a disruptive role in the signalling effect of dividend increases for the Top 40 companies listed on the Johannesburg Stock Exchange. With the use of event study methodologies, it was found that dividend increases announced in the 2018 and 2019 financial years resulted in Cumulative Abnormal Returns (CARs) that were significantly different from zero, as confirmed by a p-value of 0.0300. This led to the conclusion that, under normal circumstances, dividend increases follow the precepts outlined in signalling theories, which indicate that the announcement of dividend increases sends positive signals about the expected financial performance of a company. To test the notion that Covid-19 plays a disruptive role in the signalling hypothesis, it was found from both parametric and non-parametric tests of significance that CARs related to dividend increases announced during the 2020 and 2021 financial years, when the Covid-19 pandemic was at its peak, were not significantly different from zero. Therefore, although the dividend increases still resulted in some CARs, such CARs were not statistically different from zero to confirm the signalling hypothesis. A p-value of 0.9830 from parametric t-tests and a p-value of 0.8971 from the Wilcoxon signed-rank test were used as a gauge, leading to the conclusion that Covid-19 has a disruptive effect on the signalling process of dividend increases.
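
The significance tests described (a parametric t-test and the Wilcoxon signed-rank test on CARs) can be sketched as follows; the CAR values below are made-up placeholders, not the study's data.

```python
# Hedged sketch of the CAR significance tests (synthetic CAR values).
import numpy as np
from scipy import stats

car = np.array([0.012, -0.004, 0.021, 0.008, -0.010, 0.015, 0.006, -0.002])  # CAR per announcement

t_stat, p_param = stats.ttest_1samp(car, popmean=0.0)   # H0: mean CAR = 0
w_stat, p_nonpar = stats.wilcoxon(car)                  # non-parametric counterpart
print(f"t-test p = {p_param:.4f}, Wilcoxon p = {p_nonpar:.4f}")
```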

Keywords: cumulative abnormal returns, dividend increases, event study methodology, signalling

Procedia PDF Downloads 120
380 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev

Abstract:

A practical, efficient approach is suggested to estimate the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of the monitored object's coordinates requires some time for data collection by means of the C-OTDR system, and only once the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems we need data on the dynamic objects' localization in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). There are several SPs which carry information about the instantaneous localization of dynamic objects for each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are non-stable. On the other hand, if an SP is very stable, it becomes insensitive as a rule. This report describes a method for co-processing SPs which is designed to obtain the most effective dynamic object localization estimates in the C-OTDR monitoring system framework.

Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems

Procedia PDF Downloads 470
379 Prediction of Music Track Popularity: A Machine Learning Approach

Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan

Abstract:

Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals that capture information explaining the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year. Gaining insight into why a song becomes popular will result in tremendous benefits for the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of clear to ambiguous delineation in the typical sound of their genres. We feed these features into three different classifiers, namely an SVM with RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models and choose the best-performing model at the end. Predicting song popularity is particularly important for the music industry as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
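
As one example of the classifiers compared, here is a minimal sketch of the SVM with RBF kernel; the audio-feature matrix and popularity labels are synthetic placeholders, not the Spotify-derived dataset.

```python
# Hedged sketch of one of the compared classifiers: RBF-kernel SVM (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 30))                 # e.g. tempo, spectral, timbre features per track
y = (rng.random(800) > 0.5).astype(int)        # 1 = popular, 0 = not popular (stand-in labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```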

Keywords: classifier, machine learning, music tracks, popularity, prediction

Procedia PDF Downloads 663
378 Analysis of the Elastic Energy Released and Characterization of the Eruptive Episodes' Intensity during 2014-2015 at El Reventador Volcano, Ecuador

Authors: Paúl I. Cornejo

Abstract:

The elastic energy released through Strombolian explosions has been widely studied, detailing various processes, sources, and precursory events at several volcanoes. We performed an analysis based on the relative partitioning of the elastic energy radiated into the atmosphere and ground by Strombolian-type explosions recorded at El Reventador volcano, using infrasound and seismic signals during episodes of high and moderate seismicity within intense eruptive stages of explosive and effusive activity. Our results show that considerable values of the Volcano Acoustic-Seismic Ratio (VASR, or η) are obtained during high-seismicity stages. VASR is a physical diagnostic of explosive degassing that we used to compare eruption mechanisms at El Reventador volcano for two datasets of explosions recorded at a broad-band (BB) seismic and infrasonic station located ~5 kilometers from the vent. We conclude that the acoustic energy EA released during explosive activity (VASR η = 0.47, standard deviation σ = 0.8) is higher than the EA released during effusive activity, therefore producing the highest values of η. Furthermore, we analyzed and characterized the eruptive intensity for two episodes of high seismicity, calculating a η three times higher for an episode of effusive activity with an occasional explosive component (η = 0.32, σ = 0.42) than for an episode of only effusive activity (η = 0.11, σ = 0.18), though the latter was more energetic.
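
As a rough illustration of the η computation, the sketch below forms the ratio of acoustic to seismic signal energy from an infrasound and a seismic trace; it omits the physical scaling constants (densities, wave speeds, distance corrections) that enter the full VASR definition, and the waveforms are synthetic placeholders.

```python
# Hedged sketch of a Volcano Acoustic-Seismic Ratio (VASR) style computation.
import numpy as np

fs = 100.0                                           # sampling rate (Hz)
rng = np.random.default_rng(0)
infrasound = 0.5 * rng.standard_normal(int(60 * fs))  # Pa, stand-in explosion record
seismic = 1e-5 * rng.standard_normal(int(60 * fs))    # m/s, stand-in ground velocity record

E_acoustic = np.sum(infrasound**2) / fs   # proportional to radiated acoustic energy
E_seismic = np.sum(seismic**2) / fs       # proportional to radiated seismic energy
print("VASR (eta), up to scaling constants:", E_acoustic / E_seismic)
```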

Keywords: effusive, explosion quakes, explosive, Strombolian, VASR

Procedia PDF Downloads 184
377 Effect of Far Infrared and Endothelial Cell Growth Supplement on Human Umbilical Vascular Endothelial Cells

Authors: Ming-Tzu Tsai, Jui-Ting Hsu, Chia-Chieh Lin, Feng-Tsai Chiang, Cheng-Chin Huang

Abstract:

Far infrared (FIR) consists of invisible electromagnetic waves with wavelengths ranging from 6 to 14 μm, also described as "growth rays." Although the mechanism of FIR is still unknown, most data suggest that FIR could accelerate skin microcirculation by elevating blood flow and nitric oxide (NO) synthesis. In the present work, the effect of FIR irradiation and endothelial cell growth supplement (ECGS) on human umbilical vascular endothelial cells (HUVECs) was evaluated. To understand whether the cell viability and NO production of HUVECs were affected by NO, cells with/without ECGS were treated in the presence or absence of L-NAME, an eNOS inhibitor. For FIR exposure, FIR-emitting ceramic powders consisting of a variety of well-mixed metal oxides were developed. The results showed that L-NAME did have a strong effect on the inhibition of NO production, especially in the ECGS-treated group. However, the cell viability of each group was rarely affected in the presence of L-NAME. Cells incubated with ECGS showed much higher cell viability compared to the control. Moreover, NO production of HUVECs exposed to FIR irradiation was significantly inhibited in the presence of L-NAME. This suggests that NO could play a role in modulating the downstream signals of HUVECs during FIR exposure.

Keywords: far-infrared irradiation (FIR), nitric oxide (NO), endothelial nitric oxide synthase (eNOS), endothelial cell growth supplement (ECGS)

Procedia PDF Downloads 429
376 Lennox-Gastaut Syndrome Associated with Dysgenesis of Corpus Callosum

Authors: A. Bruce Janati, Muhammad Umair Khan, Naif Alghassab, Ibrahim Alzeir, Assem Mahmoud, M. Sammour

Abstract:

Rationale: Lennox-Gastaut syndrome (LGS) is an electro-clinical syndrome composed of the triad of mental retardation, multiple seizure types, and the characteristic generalized slow spike-wave complexes in the EEG. In this article, we report on two patients with LGS whose brain MRI showed dysgenesis of the corpus callosum (CC). We review the literature and stress the role of the CC in the genesis of secondary bilateral synchrony (SBS). Method: This was a clinical study conducted at King Khalid Hospital. Results: The EEG was consistent with LGS in patient 1 and showed unilateral slow spike-wave complexes in patient 2. The MRI showed hypoplasia of the splenium of the CC in patient 1, and global hypoplasia of the CC combined with Joubert syndrome in patient 2. Conclusion: Based on the data, we proffer the following hypotheses: 1- Hypoplasia of the CC interferes with the functional integrity of this structure. 2- The genu of the CC plays a pivotal role in the genesis of secondary bilateral synchrony. 3- Electrodecremental seizures in LGS emanate from pacemakers generated in the brain stem, in particular the mesencephalon, projecting abnormal signals to the cortex via thalamic nuclei. 4- Unilateral slow spike-wave complexes in the context of mental retardation and multiple seizure types may represent a variant of LGS, justifying neuroimaging studies.

Keywords: EEG, Lennox-Gastaut syndrome, corpus callosum, MRI

Procedia PDF Downloads 446
375 Implementation of Congestion Management Strategies on Arterial Roads: Case Study of Geelong

Authors: A. Das, L. Hitihamillage, S. Moridpour

Abstract:

Natural disasters are inevitable. Disasters such as floods, tsunamis, and tornadoes can be brutal, harsh, and devastating. In Australia, flooding is a major issue experienced by different parts of the country. In such crises, delays in evacuation could decide the life and death of the people living in those regions. Congestion management could become a mammoth task if no steps are taken before such situations arise. In the past, to manage congestion in such circumstances, many strategies were utilised, such as converting road shoulders to extra lanes or changing the road geometry by adding more lanes. However, road expansion is no longer considered a viable option for resolving congestion problems. The authorities avoid this option for many reasons, such as lack of financial support and land space. They tend to focus their attention on optimising the resources they currently possess and on using traffic signals to overcome congestion problems. A traffic signal management strategy was considered a viable option to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the road from the information gathered from VicRoads. In this paper, various signal parameters are utilised to assess and improve the corridor performance to achieve the best possible Level of Service (LOS) for the arterial road.

Keywords: congestion, constraints, management, LOS

Procedia PDF Downloads 397
374 Environmental and Socioeconomic Determinants of Climate Change Resilience in Rural Nigeria: Empirical Evidence towards Resilience Building

Authors: Ignatius Madu

Abstract:

The study aims at assessing the environmental and socioeconomic determinants of climate change resilience in rural Nigeria. This is necessary because research and development efforts on building the climate change resilience of rural areas in developing countries are usually made without knowledge of the impacts of the inherent rural characteristics that determine the resilient capacities of households. This has, in many cases, led to costly mistakes, delayed responses, inaccurate outcomes, and other difficulties. Consequently, this assessment becomes crucial not only to policymakers and people living in risk-prone environments in rural areas but also to fill the research gap. To achieve the aim, secondary data were obtained from the Annual Abstract of Statistics 2017, the LSMS-Integrated Surveys on Agriculture and General Household Survey Panel 2015/2016, and the National Agriculture Sample Survey (NASS), 2010/2011. Resilience was calculated by weighting and adding the adaptive, absorptive and anticipatory measures of household variables aggregated at the state level, and was then regressed against the rural environmental and socioeconomic characteristics influencing it. From the regression, the coefficients of the variables were used to compute the impacts of the variables using the Stochastic Impacts by Regression on Population, Affluence and Technology (STIRPAT) model. The results showed that the northern states are generally low in resilience indices and are impacted less by the development indicators. The major determining factors are the percentage of non-poor, environmental protection, road transport development, landholding, agricultural input, population density, dependency ratio (inverse), household assets, education and maternal care. The paper concludes that any effort towards successful resilience building in rural areas of the country should first address these key factors that enhance rural development and wellbeing, since it is better to take action before shocks take place.
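
The STIRPAT model referenced above is log-linear, ln(I) = a + b·ln(P) + c·ln(A) + d·ln(T), so the coefficients act as elasticities and can be estimated by ordinary least squares on logged variables; the state-level data in the sketch below are synthetic stand-ins, not the Nigerian survey data.

```python
# Hedged sketch of a STIRPAT-style elasticity estimation on synthetic state-level data.
import numpy as np

rng = np.random.default_rng(3)
n = 36                                        # e.g. number of states
P = rng.uniform(1e5, 5e6, n)                  # population proxy
A = rng.uniform(0.2, 0.9, n)                  # affluence proxy (e.g. share of non-poor)
T = rng.uniform(0.1, 1.0, n)                  # technology/infrastructure proxy
I = 0.5 * P**0.3 * A**0.8 * T**0.4 * rng.lognormal(0.0, 0.1, n)   # "impact" (here, a resilience-related index)

X = np.column_stack([np.ones(n), np.log(P), np.log(A), np.log(T)])
coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
print("estimated elasticities (b, c, d):", coef[1:])
```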

Keywords: climate change resilience, spatial impacts, STIRPAT model, Nigeria

Procedia PDF Downloads 150
373 A Micro-Scale Electromechanical System Micro-Sensor Resonator Based on a UNO Microcontroller for Low Magnetic Field Detection

Authors: Waddah Abdelbagi Talha, Mohammed Abdullah Elmaleeh, John Ojur Dennis

Abstract:

This paper focuses on the simulation and implementation of a resonator micro-sensor for low magnetic field sensing based on a U-shaped cantilever in a piezoresistive configuration, which works on the Lorentz force principle. The resonance frequency is an important parameter at which any vibrating micro-scale electromechanical system (MEMS) device shows its highest response and sensitivity across the frequency domain (frequency response), and it is important for determining the direction of the detected magnetic field. The deflection of the cantilever is considered in vibration mode at different frequencies in the range of 0 Hz to 7000 Hz, for the purpose of observing the frequency response. A simple electronic circuit based on polysilicon piezoresistors in a Wheatstone bridge configuration is used to transduce the response of the cantilever into electrical measurements at various voltages. An Arduino microcontroller program and the PROTEUS electronics software are used to analyze the output signals from the sensor. The highest output voltage amplitude, of about 4.7 mV, is observed at about 3 kHz in the frequency domain, indicating the highest sensitivity, which can be called the resonant sensitivity. Based on the resonant frequency value, the mode of vibration is determined (up-down vibration), and based on that, the vector of the magnetic field is also determined.

Keywords: resonant frequency, sensitivity, Wheatstone bridge, UNO-microcontroller

Procedia PDF Downloads 127
372 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning newsvendor model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the newsvendor model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical newsvendor model in that we incorporate the human factor (specifically worker learning) and its influence on the costs of processing units into the model. We describe this by using the well-known Wright's learning curve. Most of the assumptions of the classical newsvendor model are maintained in our work, such as the constant per-unit cost of leftover and shortage, the zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when adding the cost saving from worker learning to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics related to the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a uniform distribution; if the demand follows a beta distribution with some specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific level of lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
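
A minimal numerical sketch of the setting follows: expected cost as a function of the order quantity, with the per-unit processing cost following Wright's learning curve and uniformly distributed demand with lost sales. All parameter values are illustrative assumptions, and the brute-force scan stands in for the paper's analytical characterization of the optimum.

```python
# Hedged numerical sketch of the worker-learning newsvendor cost function.
import numpy as np

c1, b = 10.0, 0.3          # first-unit processing cost and learning exponent (Wright's curve: c(n) = c1 * n**(-b))
h, p = 2.0, 15.0           # per-unit leftover (holding) and shortage (lost-sales) costs
d_low, d_high = 50, 150    # demand ~ Uniform(d_low, d_high)

def processing_cost(Q):
    units = np.arange(1, int(Q) + 1)
    return np.sum(c1 * units**(-b))          # cumulative processing cost with learning

def expected_cost(Q, n_samples=20000, seed=0):
    d = np.random.default_rng(seed).uniform(d_low, d_high, n_samples)
    leftover = np.maximum(Q - d, 0.0)
    shortage = np.maximum(d - Q, 0.0)        # unmet demand is lost
    return processing_cost(Q) + h * leftover.mean() + p * shortage.mean()

Qs = np.arange(d_low, d_high + 1)
costs = [expected_cost(Q) for Q in Qs]
print("cost-minimising order quantity (by scan):", Qs[int(np.argmin(costs))])
```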

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 416
371 Heat Transfer Process Parameter Optimization in SI/Ge Using TAGUCHI Method

Authors: Evln Ranga Charyulu, S. P. Venu Madhavarao, S. Udaya kumar, S. V. S. S. N. V. G. Krishna Murthy

Abstract:

With the advent of new nanometer process technologies, it is possible to integrate a billion transistors on a single substrate. When more and more functionality is included, there is the possibility of multi-million transistors switching simultaneously, consuming more power and dissipating more power, along with more leakage of current into the substrate of porous silicon or germanium material. This results in substrate heating and thermal noise generation coupled to the signals of interest. The heating process is represented by coupled nonlinear partial differential equations in porous silicon and germanium. Identifying heat sources and heat fluxes may lead to the design of ultra-low-power circuits. The PDEs are solved by a finite difference scheme assuming boundary-layer equations in porous silicon and germanium. Local heat fluxes along a vertical isothermal surface immersed in porous Si/Ge are considered. The parameters considered for optimization are thermal diffusivity, thermal expansion coefficient, thermal diffusion ratio, permeability, specific heat at constant temperature, Rayleigh number, amplitude of the wavy surface, and mass expansion coefficient. The diffusion of heat is caused by the concentration gradient. Thermal physical properties are homogeneous and isotropic. The parameters are optimized using the Taguchi method with an L8 orthogonal array.

Keywords: heat transfer, PDE, Taguchi optimization, Si/Ge

Procedia PDF Downloads 337
370 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two partial, distinct states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure to take account of loss dependencies at the story level. This paper presents a method, called the "modal cost superposition method", for decoupling story damage costs under earthquake ground motions; it deals with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system considering all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with defined statistics (median and standard deviation) and a presumed probability distribution. To supplement the proposed procedure and to display the straightforwardness of its application, one benchmark study has been conducted. Acceptable compatibility has been shown between the damage costs estimated by the newly proposed modal approach and by the frequently used stochastic approach for the entire building; however, at the story level, the insufficiency of employing a modification factor for incorporating occurrence probability dependencies between stories has been revealed, due to the discrepant amounts of dependency between the damage costs of different stories. Also, a larger dependency contribution to the occurrence probability of loss can be concluded from the greater compatibility of the loss results in higher stories than in lower ones, whereas reducing the number of incorporated cost modes provides an acceptable level of accuracy and avoids the time-consuming calculations involved in including a large number of cost modes.

Keywords: dependency, story-cost, cost modes, engineering demand parameter

Procedia PDF Downloads 180
369 Continuous-Time Convertible Lease Pricing and Firm Value

Authors: Ons Triki, Fathi Abid

Abstract:

Along with the increase in the use of leasing contracts in corporate finance, multiple studies aim to model the credit risk of the lease in order to cover the losses of the lessor of the asset if the lessee goes bankrupt. In the current research paper, a convertible lease contract is elaborated in a continuous-time stochastic universe, aiming to ensure the financial stability of the firm and to quickly recover the losses of the counterparties to the lease in case of default. This work examines the term structure of lease rates, taking into account credit default risk and the capital structure of the firm. The interaction between the lessee's capital structure and the equilibrium lease rate is assessed by applying the competitive lease market argument developed by Grenadier (1996) and the endogenous structural default model set forward by Leland and Toft (1996). The cumulative probability of default is calculated by referring to Leland and Toft (1996) and Yildirim and Huan (2006). Additionally, the link between lessee credit risk and the lease rate is addressed so as to explore the impact of convertible lease financing on the term structure of the lease rate, the optimal leverage ratio, the cumulative default probability, and the optimal firm value by applying an endogenous conversion threshold. The numerical analysis suggests that the term structure of lease rates increases with the degree of the market price of risk. The maximal value of the firm decreases with the effect of the optimal leverage ratio. The results indicate that the cumulative probability of default increases with the maturity of the lease contract if the volatility of the asset service flows is significant. Introducing the convertible lease contract will increase the optimal value of the firm as a function of asset volatility for a high initial service flow level and a conversion ratio close to 1.

Keywords: convertible lease contract, lease rate, credit-risk, capital structure, default probability

Procedia PDF Downloads 98
368 Co-Articulation between Consonant and Vowel in Cantonese Syllables

Authors: Wai-Sum Lee

Abstract:

This study investigates C-V and V-C co-articulation in Cantonese monosyllables of the CV, VC or CVC structure, with C = one of the three stop consonants [p, t, k] and V = one of the three corner vowels [i, a, u]. Five repetitions of each test syllable on a randomized list were elicited from young adult Cantonese speakers in their early 20s. A research tool, the EMA AG500, was used to record the synchronized audio signals and articulatory data at three different locations of the tongue (tongue tip, tongue middle, and tongue back) and the positions of the upper and lower lips during the test syllables. The main findings based on the articulatory data collected from two male Cantonese speakers are as follows: (i) For the syllable-initial [p-], strong co-articulation is observed when [p-] precedes the high vowel [i] or [u], but not the low vowel [a]. As for the syllable-final [-p], it is strongly co-articulated with the preceding vowel, even when the vowel is [a]. (ii) The co-articulation between the initial [t-] and the following vowel of any type is weak. In the syllable-final position, the degree of co-articulatory resistance of [-t] is also large when following the vowel [u], but [-t] is largely co-articulated with the preceding vowel when the vowel is [i] or [a]. (iii) The strength of co-articulation differs when the initial [k-] precedes the different types of vowel: co-articulation is stronger between [k-] and [i] than between [k-] and [u], and the strength of co-articulation is much reduced between [k-] and [a]. However, in the syllable-final position, there is strong co-articulation between [-k] and the preceding vowel [a]. (iv) Among the three types of stop consonants in the syllable-initial position, the decreasing order of co-articulatory resistance (CR) is [t-] > [k-] > [p-], and the degree of CR is reduced for all three types of stop in the syllable-final position. In general, the data on co-articulation between consonant and vowel in Cantonese monosyllables are similar to those in other languages reported in previous studies.

Keywords: Cantonese, co-articulation, consonant, vowel

Procedia PDF Downloads 247
367 Dynamic Analysis of the Heat Transfer in the Magnetically Assisted Reactor

Authors: Tomasz Borowski, Dawid Sołoducha, Rafał Rakoczy, Marian Kordas

Abstract:

The application of a magnetic field is essential for a wide range of technologies and processes (e.g., magnetic hyperthermia, bioprocessing). From a practical point of view, bioprocess control is often limited to the regulation of temperature at constant values favourable to microbial growth. The main aim of this study is to determine the effect of various types of electromagnetic fields (i.e., static or alternating) on the heat transfer in a self-designed magnetically assisted reactor. The experimental set-up is equipped with a measuring instrument which controls the temperature of the liquid inside the container and supervises the real-time acquisition of all the experimental data coming from the sensors. Temperature signals are also sampled from the magnetic field generator. The obtained temperature profiles were mathematically described and analyzed. The parameters characterizing the response of a first-order dynamic system to a step input were obtained and discussed. For example, higher values of the time constant mean a slower increase of the signal (in this case, temperature). After a period equal to about five time constants, the sample temperature nearly reaches its asymptotic value. This dynamical analysis allowed us to understand the heating effect under the action of various types of electromagnetic fields. Moreover, the proposed mathematical description can be used to compare the influence of different types of magnetic fields on heat transfer operations.
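
The first-order step response referred to above can be written as T(t) = T0 + ΔT·(1 − exp(−t/τ)); the short sketch below evaluates it and checks the five-time-constant rule of thumb, with illustrative parameter values rather than the study's measurements.

```python
# Hedged sketch of a first-order step response and the five-time-constant rule.
import numpy as np

tau = 120.0            # time constant (s), illustrative
T0, dT = 25.0, 10.0    # initial temperature and asymptotic rise (deg C), illustrative
t = np.linspace(0.0, 10 * tau, 1000)
T = T0 + dT * (1.0 - np.exp(-t / tau))

idx = np.searchsorted(t, 5 * tau)
print("temperature at 5*tau:", round(T[idx], 2), "deg C; asymptote:", T0 + dT, "deg C")
```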

Keywords: heat transfer, magnetically assisted reactor, dynamical analysis, transient function

Procedia PDF Downloads 171
366 Chatter Prediction of Curved Thin-walled Parts Considering Variation of Dynamic Characteristics Based on Acoustic Signals Acquisition

Authors: Damous Mohamed, Zeroudi Nasredine

Abstract:

High-speed milling of thin-walled parts with complex curvilinear profiles often encounters machining instability, commonly referred to as chatter. This phenomenon arises due to the dynamic interaction between the cutting tool and the part, exacerbated by the part's low rigidity and varying dynamic characteristics along the tool path. This research presents a dynamic model specifically developed to predict machining stability for such curved thin-walled components. The model employs the semi-discretization method, segmenting the tool trajectory into small, straight elements to locally approximate the behavior of an inclined plane. Dynamic characteristics for each segment are extracted through experimental modal analysis and incorporated into the simulation model to generate global stability lobe diagrams. Validation of the model is conducted through cutting tests where acoustic intensity is measured to detect instabilities. The experimental data align closely with the predicted stability limits, confirming the model's accuracy and effectiveness. This work provides a comprehensive approach to enhancing machining stability predictions, thereby improving the efficiency and quality of high-speed milling operations for thin-walled parts.

Keywords: chatter, curved thin-walled part, semi-discretization method, stability lobe diagrams

Procedia PDF Downloads 26
365 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, attenuation of random noise is the basic step to improve the quality of data for further application of seismic data in exploration and development in the gas and oil industries. The signal-to-noise ratio of the data also strongly determines the quality of seismic data. This factor affects the reliability as well as the accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information about the seismic signals, we introduce an anisotropic total fractional order variation denoising algorithm. The anisotropic total fractional order variation model, defined in fractional order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and the non-local means denoising algorithm.

Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm

Procedia PDF Downloads 207
364 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of the patients who have Multiple Sclerosis (MS), the most common demyelinating disease that affects the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications which make non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre to take into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions each having random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius ranging from 5µm to 12µm was analyzed. It was interesting to note that there were no overlaps between the arrival time for action potentials traversing the demyelinated and normally myelinated nerve fibres even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it thus paving the way for wearable neuro-rehabilitative technologies.

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 128
363 Budd-Chiari Syndrome: Common Presentation, Rare Disease

Authors: Aadil Khan, Yasser Chomayil, P. P. Venugopalan

Abstract:

Background: Budd-Chiari syndrome is caused by thrombosis of the hepatic veins and/or thrombosis of the intrahepatic or suprahepatic IVC. The etiology remains idiopathic in 16%-35% of cases. Malignancy, rheumatological disorders, myeloproliferative disease, inheritable coagulopathy, infection or a hyperestrogenic state can be identified in many cases. Methodology: Review of the case records of a patient who presented to the Emergency Department of Aster Medcity, Cochin. Introduction: A 17-year-old female presented to the ED with fever, jaundice and abdominal distention of 1 week's duration. O/E: pallor+, icterus+. Abdomen: gross distension+, shifting dullness+, generalized anasarca+. USG abdomen showed hepatomegaly with mildly coarse echotexture and moderate to gross ascites. CT abdomen and chest showed hepatomegaly with thrombosis of all three hepatic veins and moderate ascites, suggestive of Budd-Chiari syndrome. The patient was taken for catheter-directed thrombolysis. A venogram done the next day revealed almost >50% opening of the right hepatic vein. Concurrent Doppler showed colour and Doppler signals in the middle hepatic vein. She gradually improved and was discharged home on anticoagulants and advised regular follow-up. Conclusion: Being a rare disease in this young population, high suspicion is required when evaluating young patients with abdominal pain and jaundice.

Keywords: Budd-Chiari syndrome, rare disease, abdominal pain, India

Procedia PDF Downloads 277
362 Coding and Decoding versus Space Diversity for Rayleigh Fading Radio Frequency Channels

Authors: Ahmed Mahmoud Ahmed Abouelmagd

Abstract:

Diversity is the usual remedy for transmitted signal level variations (fading phenomena) in radio frequency channels. Diversity techniques utilize two or more copies of a signal and combine those signals to combat fading. The basic concept of diversity is to transmit the signal via several independent diversity branches to get independent signal replicas in the time, frequency, space, and polarization diversity domains. Coding and decoding processes can be an alternative remedy for fading phenomena; they cannot increase the channel capacity, but they can improve the error performance. In this paper we propose the use of replication decoding with the BCH code class, and the Viterbi decoding algorithm with convolutional coding, as examples of coding and decoding processes. The results are compared to those obtained from two optimized selection space diversity techniques. The performance over a Rayleigh fading channel, as the model considered for radio frequency channels, is evaluated for each case. The evaluation results show that the coding and decoding approaches, especially the BCH coding approach with the replication decoding scheme, give better performance compared to that of the selection space diversity optimization approaches. Also, an approach combining the coding and decoding diversity as well as the space diversity is considered; the main disadvantage of this approach is its complexity, but it yields good performance results.
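
As a rough illustration of the space-diversity baseline discussed (not the paper's optimized schemes or its BCH/convolutional coding), the Monte Carlo sketch below compares BPSK over flat Rayleigh fading with and without two-branch selection diversity; the SNR and sample count are illustrative.

```python
# Hedged sketch: BPSK over flat Rayleigh fading, with and without two-branch selection diversity.
import numpy as np

rng = np.random.default_rng(5)
n, snr_db = 200000, 10.0
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n)
s = 2 * bits - 1                                                     # BPSK symbols
h1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
n1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * snr)
n2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * snr)

r1, r2 = h1 * s + n1, h2 * s + n2
stat1, stat2 = (np.conj(h1) * r1).real, (np.conj(h2) * r2).real      # coherent detection statistics

ber_single = np.mean((stat1 > 0).astype(int) != bits)                # no diversity
dec = np.where(np.abs(h2) > np.abs(h1), stat2, stat1)                # selection: pick the stronger branch
ber_select = np.mean((dec > 0).astype(int) != bits)
print(f"BER no diversity: {ber_single:.4f}, selection diversity: {ber_select:.4f}")
```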

Keywords: Rayleigh fading, diversity, BCH codes, replication decoding, convolutional coding, Viterbi decoding, space diversity

Procedia PDF Downloads 442
361 A Fast Calculation Approach for Position Identification in a Distance Space

Authors: Dawei Cai, Yuya Tokuda

Abstract:

The market for location-based services (LBS) is expanding. The acquisition of physical location is the fundamental basis for LBS. GPS, the de facto standard for outdoor localization, does not work well in indoor environments due to the blocking of signals by walls and ceilings. To acquire highly accurate localization in an indoor environment, many techniques have been developed. A triangulation approach is often used for identifying the location, but heavy and complex computation is necessary to calculate the location from the distances between the object and several source points. This computation is also time- and power-consuming, and not favorable to a mobile device that needs a long operating life on battery. To provide a low-power-consumption approach for a mobile device, this paper presents a fast calculation approach to identify the location of the object without solving simultaneous quadratic equations online. In our approach, we divide the location identification into two parts, one offline and the other online. In the offline mode, we run a mapping process that maps the location area to a distance space and find a simple formula that can be used to identify the location of the object online with very light computation. The characteristic of the approach is a good tradeoff between accuracy and computational load. Therefore, this approach can be used in smartphones and other mobile devices that need a long working time. To show the performance, some simulation results are also provided in the paper.
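
The paper's exact offline mapping is not given in the abstract, but one common way to realize an offline/online split of this kind is sketched below: differencing the range equations against a reference anchor removes the quadratic terms, so a pseudo-inverse can be precomputed offline and the online step reduces to a small matrix multiply. The anchor layout and distances are made up.

```python
# Hedged sketch of an offline/online localization split (linearized trilateration).
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # fixed source points

# --- offline: precompute everything that depends only on the anchor geometry ---
A = 2.0 * (anchors[1:] - anchors[0])
A_pinv = np.linalg.pinv(A)                                   # heavy step, done once offline
const = np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2)

# --- online: light computation from measured distances d_i ---
def locate(d):
    b = d[0]**2 - d[1:]**2 + const
    return A_pinv @ b                                        # just a small matrix multiply

true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)               # simulated distance measurements
print(locate(d))                                             # ~ [3. 4.]
```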

Keywords: indoor localization, location based service, triangulation, fast calculation, mobile device

Procedia PDF Downloads 174
360 Computer Network Applications, Practical Implementations and Structural Control System Representations

Authors: El Miloudi Djelloul

Abstract:

The computer network plays an important role in the practical implementation of different systems. To implement a system in a network, it is above all necessary to know all the configurations that are responsible for being part of the system, and to provide adequate information and solutions in real time. So, if we want to implement such a system, for example in a school or a relevant institution, the first step is to analyze the types of models that need to be configured, and another important step is to organize the work in the context of the devices that form part of the general system. Often, an important point before configuration is the description and documentation of all the work in the respective process, and then its organization from a problem-solving perspective. The computer network, as critical infrastructure, is very specific, so the paper presents effective solutions from a structural point of view on one side; on the other side, the paper reflects the positive aspects of modeling and block schema presentations as a better alternative for solving specific problems caused by continual distortions of the system along the chain of devices, programs and signals, or by packet collisions moving from one computer node to other nodes.

Keywords: local area networks, LANs, block schema presentations, computer network system, computer node, critical infrastructure, packet collisions, structural control system representations, computer network, implementations, modeling structural representations, companies, computers, context, control systems, internet, software

Procedia PDF Downloads 365
359 Analysis of Ionosphere Anomaly before the Great Earthquake in Java in 2009 Using GPS TEC Data

Authors: Aldilla Damayanti Purnama Ratri, Hendri Subakti, Buldan Muslim

Abstract:

Ionospheric anomalies as an effect of earthquake activity are a phenomenon now being studied in seismo-ionospheric coupling. Generally, the variation in the ionosphere caused by earthquake activity is weaker than the interference generated by other sources, such as geomagnetic storms. However, disturbances from geomagnetic storms show a more global behavior, while seismo-ionospheric anomalies occur only locally, in an area largely determined by the magnitude of the earthquake. This shows that earthquake activity is unique, and because of this uniqueness much research has been done on it, expected to give clues for early warning before an earthquake. One of the research directions being developed at this time is the seismo-ionospheric coupling approach, which relates the states of the lithosphere, atmosphere and ionosphere before and when an earthquake occurs. This paper chooses the vertical total electron content (VTEC) of the ionosphere as a parameter. Total Electron Content (TEC) is defined as the number of electrons in a vertical column (cylinder) with a cross-section of 1 m² along the GPS signal trajectory in the ionosphere at around 350 km height. Based on the analysis of data obtained from the LAPAN agency to identify abnormal signals by statistical methods, an anomaly in the ionosphere was found, characterized by a decrease of 1 TECU in the ionospheric electron content before the earthquake occurred. The decrease in VTEC is not associated with a magnetic storm and is therefore indicated as an earthquake precursor. This is supported by the Dst index, which showed no magnetic interference.
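
A common statistical bound test of this kind compares each day's VTEC with a running mean plus or minus a few standard deviations of the preceding window; the sketch below illustrates that idea on a synthetic series and is not the paper's exact procedure.

```python
# Hedged sketch of a running-bound anomaly test on a synthetic daily VTEC series.
import numpy as np

rng = np.random.default_rng(4)
vtec = 20.0 + 2.0 * rng.standard_normal(60)   # daily VTEC in TECU (synthetic)
vtec[45] -= 5.0                               # injected depletion, mimicking a pre-seismic anomaly

window, k = 15, 2.0
for day in range(window, vtec.size):
    ref = vtec[day - window:day]
    lower = ref.mean() - k * ref.std()
    if vtec[day] < lower:
        print(f"day {day}: VTEC {vtec[day]:.1f} TECU below lower bound {lower:.1f} TECU")
```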

Keywords: earthquake, DST Index, ionosphere, seismoionospheric coupling, VTEC

Procedia PDF Downloads 585