Search results for: time prediction algorithms

16595 CFD Simulation for Flow Behavior in Boiling Water Reactor Vessel and Upper Pool under Decommissioning Condition

Authors: Y. T. Ku, S. W. Chen, J. R. Wang, C. Shih, Y. F. Chang

Abstract:

In order to respond to the nuclear-free homeland policy decision, Taiwan Power Company (TPC) will carry out the decommissioning project of the Kuosheng Nuclear Power Plant (KSNPP) to meet regulatory requirements in the near future. In this study, computational fluid dynamics (CFD) methodology has been employed to develop a flow prediction model for a boiling water reactor (BWR) with an upper pool under the decommissioning stage. The model can be utilized to investigate the flow behavior when the vessel is combined with the upper pool and the continuity cooling system. Under normal operating conditions, different parameters are obtained for the full fluid domain, including velocity, mass flow, and the mixing phenomenon in the reactor pressure vessel (RPV) and the upper pool. Through the efforts of this study, an integrated simulation model will be developed for flow field analysis of the decommissioning KSNPP under normal operating conditions. It can be expected that this study provides a baseline result for future analysis applications by TPC.

Keywords: CFD, BWR, decommissioning, upper pool

Procedia PDF Downloads 245
16594 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients

Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg

Abstract:

Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g., because of some patients’ motor disabilities. The present study aimed to provide prognosis in coma patients using markers from the electroencephalogram (EEG), blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for multimodal integration of the markers. Methods: Approved by the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through intensive care units at the Klinikum rechts der Isar in Munich and at the Therapiezentrum Burgau (Germany). On the day of the EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were grouped into the minimally conscious state (MCS) or vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). Follow-up assessment (date II) was also based on the CRS-R in a period of 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and subsequently simultaneous FDG-PET/fMRI was acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE) and symbolic transfer entropy (STE) were calculated in/between frontal, temporal, parietal and occipital EEG channels. PE and STE are based on symbolic time series analysis and were already introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia. While PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), STE reflects information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Centre for Neuroimaging, University College London, UK). Functional images were realigned, segmented, normalized and smoothed. PET was acquired for 45 minutes in list mode. For absolute quantification of the brain’s glucose consumption rate in FDG-PET, kinetic modelling was performed with Patlak’s plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas. PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC) including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), showing a higher AUC compared to the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60). CRS-R did not show prediction (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are currently inadequate and could be supported by quantitative biomarkers from EEG, fMRI and PET. Due to the small sample size, further investigations are required, in particular allowing supervised learning instead of the basic approach of unsupervised PCA.
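
The multimodal integration described above can be sketched numerically: markers from EEG, fMRI and PET are stacked into one feature matrix, reduced with unsupervised PCA, and the first component is scored against outcome with an AUC. The snippet below is a minimal NumPy-only illustration of that pipeline; the marker values, group labels and the rank-based (Mann-Whitney) AUC estimate are placeholders, not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder marker matrix: 20 patients x (EEG, fMRI, PET) markers.
X = rng.normal(size=(20, 12))
# Placeholder outcome: 1 = recovery to MCS/awake, 0 = persistent VS/deceased.
y = rng.integers(0, 2, size=20)

# Unsupervised PCA: standardise, then project onto the leading eigenvector.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
pc1 = Xs @ eigvecs[:, -1]          # scores on the first principal component

# AUC via the rank (Mann-Whitney) formulation.
ranks = pc1.argsort().argsort() + 1
n_pos, n_neg = y.sum(), (1 - y).sum()
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"AUC of the first principal component: {auc:.2f}")
```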

Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis

Procedia PDF Downloads 326
16593 Hybrid Feature Selection Method for Sentiment Classification of Movie Reviews

Authors: Vishnu Goyal, Basant Agarwal

Abstract:

Sentiment analysis research provides methods for identifying people’s opinions written in blogs, reviews, social networking websites, etc. Sentiment analysis aims to understand what opinion people have about any given entity, object or thing. Sentiment analysis research can be broadly categorised into three types of approaches, i.e., semantic orientation, machine learning and lexicon-based approaches. Feature selection methods improve the performance of machine learning algorithms by eliminating irrelevant features. The information gain feature selection method has been considered the best method for sentiment analysis; however, it has the drawback of requiring the selection of a threshold. Therefore, in this paper, we propose a hybrid feature selection method comprising information gain and a proposed feature selection method. Initially, features are selected using Information Gain (IG), and further noisy features are then eliminated using the proposed feature selection method. Experimental results show the efficiency of the proposed feature selection method.
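
As a rough illustration of the first stage described above, information gain for a binary term-presence feature can be computed as the reduction in class entropy, IG(t) = H(C) - H(C|t), and the top-scoring terms retained. The sketch below assumes a toy bag-of-words matrix and labels; it is not the authors' implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (0 log 0 treated as 0)."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(term_present, labels):
    """IG(t) = H(C) - H(C | t) for a binary term-presence feature."""
    ig = entropy(np.bincount(labels) / len(labels))
    for value in (0, 1):
        mask = term_present == value
        if mask.any():
            ig -= mask.mean() * entropy(np.bincount(labels[mask]) / mask.sum())
    return ig

# Toy data: 6 reviews x 4 terms (1 = term occurs), labels 1 = positive review.
X = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

scores = [information_gain(X[:, j], y) for j in range(X.shape[1])]
top_k = np.argsort(scores)[::-1][:2]   # keep the 2 highest-IG terms
print("IG scores:", np.round(scores, 3), "selected term indices:", top_k)
```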

Keywords: feature selection, sentiment analysis, hybrid feature selection

Procedia PDF Downloads 314
16592 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These devices make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data. The data is also not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs to accomplish this, but sometimes neglect the effect some of these techniques may have on database performance. One of the techniques generally used is to pull data from the database server, process it and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period of time that the processing takes place. Because of this, it decreases the overall performance of the database server and therefore the system’s performance. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage and processing time.
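
A minimal sketch of the three-step pull-process-push idea discussed above is given below, using Python's built-in sqlite3 module and an in-memory list: the raw rows are pulled in one short read, decoded entirely in application memory while no long transaction is held open, and the results are pushed back in a single batched write. The table, columns and payload format are illustrative assumptions, not the authors' schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")      # illustrative; a file path in practice

# Toy schema and two raw rows standing in for undecoded GPS payloads.
conn.execute("CREATE TABLE gps_raw (id INTEGER PRIMARY KEY, raw_payload TEXT, "
             "lat REAL, lon REAL, speed REAL, processed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO gps_raw (raw_payload) VALUES (?)",
                 [("52.01;4.36;13.5",), ("52.02;4.37;14.1",)])
conn.commit()

# Step 1: pull -- read all unprocessed rows in one short query.
rows = conn.execute(
    "SELECT id, raw_payload FROM gps_raw WHERE processed = 0").fetchall()

# Step 2: process -- decode entirely in memory, no database lock held.
decoded = []
for row_id, payload in rows:
    lat, lon, speed = (float(x) for x in payload.split(";"))
    decoded.append((lat, lon, speed, row_id))

# Step 3: push -- write everything back in a single batched transaction.
with conn:
    conn.executemany(
        "UPDATE gps_raw SET lat = ?, lon = ?, speed = ?, processed = 1 "
        "WHERE id = ?", decoded)

print(conn.execute("SELECT id, lat, lon, speed FROM gps_raw").fetchall())
conn.close()
```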

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 229
16591 HTML5 Online Learning Application with Offline Web, Location Based, Animated Web, Multithread, and Real-Time Features

Authors: Sheetal R. Jadhwani, Daisy Sang, Chang-Shyh Peng

Abstract:

Web applications are an integral part of modern life. They are mostly based upon the HyperText Markup Language (HTML). While HTML meets basic needs, there are some shortcomings. For example, applications can cease to work once the user goes offline, real-time updates may lag, and the user interface can freeze on computationally intensive tasks. The latest language specification, HTML5, attempts to rectify the situation with new tools and protocols. This paper studies the new Web Storage, Geolocation, Web Worker, Canvas, and Web Socket APIs, and presents applications to test their features and efficiencies.

Keywords: HTML5, web worker, canvas, web socket

Procedia PDF Downloads 285
16590 Fabrication of Miniature Gear of Hastelloy X by WEDM Process

Authors: Bhupinder Singh, Joy Prakash Misra

Abstract:

This article provides information regarding the machining of Hastelloy-X by wire electrical discharge machining (WEDM). An experimental investigation has been carried out by varying pulse-on time (TON), pulse-off time (TOFF), peak current (IP) and spark gap voltage (SV). The effect of these parameters on the material removal rate (MRR) is studied. Experiments are designed as per the Box-Behnken design (BBD) technique of response surface methodology (RSM). Analysis of variance (ANOVA) results indicate that TON, TOFF, IP, SV and TON × IP are significant parameters that influence the MRR, and it is shown that the MRR is higher at high discharge energy (HDE) and lower at low discharge energy (LDE). Furthermore, a miniature impeller and a miniature gear (OD ≤ 10 mm) are fabricated by WEDM under the optimized condition.

Keywords: advanced manufacturing, WEDM, super alloy, gear

Procedia PDF Downloads 211
16589 Mutual Information Based Image Registration of Satellite Images Using PSO-GA Hybrid Algorithm

Authors: Dipti Patra, Guguloth Uma, Smita Pradhan

Abstract:

Registration is a fundamental task in image processing. It is used to transform different sets of data into one coordinate system, where the data are acquired at different times, from different viewing angles, and/or by different sensors. Registration geometrically aligns two images (the reference and target images). Registration techniques are used for satellite images, where they are important in order to be able to compare or integrate the data obtained from these different measurements. In this work, mutual information is considered as the similarity metric for registration of satellite images. The transformation is assumed to be a rigid transformation. An attempt has been made here to optimize the transformation function. The proposed image registration technique, hybrid PSO-GA, incorporates the notions of Particle Swarm Optimization and the Genetic Algorithm and is used for finding the best optimum values of the transformation parameters. The performance comparison obtained from experiments on satellite images shows that the proposed hybrid PSO-GA algorithm outperforms the other algorithms in terms of mutual information and registration accuracy.
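
Mutual information as used above can be estimated from the joint grey-level histogram of the reference and transformed target images; the hybrid PSO-GA then searches the rigid-transformation parameters that maximise it. The fragment below sketches only the MI similarity metric with NumPy; the images and bin count are placeholders and the optimiser is omitted.

```python
import numpy as np

def mutual_information(ref, tgt, bins=32):
    """Estimate MI between two equally sized grey-level images."""
    joint, _, _ = np.histogram2d(ref.ravel(), tgt.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

# Placeholder "satellite" images: the target is a noisy copy of the reference.
rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(128, 128)).astype(float)
target = reference + rng.normal(0, 10, size=reference.shape)

print(f"MI(reference, target)   = {mutual_information(reference, target):.3f}")
print(f"MI(reference, shuffled) = "
      f"{mutual_information(reference, rng.permutation(target.ravel())):.3f}")
```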

Keywords: image registration, genetic algorithm, particle swarm optimization, hybrid PSO-GA algorithm and mutual information

Procedia PDF Downloads 392
16588 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers

Authors: Onur Kaya, Halit Bayer

Abstract:

Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy and meat constitute the biggest section of these markets. Due to their deterioration over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreases in grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since it is observed that billions of dollars’ worth of food is expired and wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically-reviewed and continuously-reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision scenario has a monotone structure and that the optimal price decreases over time. However, the optimal price changes in a non-monotonic manner with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness and extract managerial insights. This study gives valuable insights about the management of perishable products in order to decrease wastage and increase profits.
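
The periodic-review version of the problem above can be sketched as a finite-horizon stochastic dynamic program whose state is the on-hand inventory of a batch with a fixed remaining shelf life and whose decision is the posted price, with demand a price- and freshness-dependent random variable. The toy value iteration below uses a truncated Poisson demand and invented parameters purely to show the recursion; it is not the paper's actual formulation.

```python
import numpy as np
from math import exp, factorial

T = 4                      # remaining shelf life (periods) of one batch
MAX_INV = 10               # units on hand at the start
PRICES = [2.0, 3.0, 4.0]   # admissible prices (invented)

def demand_pmf(mean, max_d):
    """Truncated Poisson demand distribution with the tail lumped at max_d."""
    p = np.array([mean**k * exp(-mean) / factorial(k) for k in range(max_d)])
    return np.append(p, 1.0 - p.sum())

# V[t, x]: expected revenue-to-go with x units and t periods of freshness left.
V = np.zeros((T + 1, MAX_INV + 1))
policy = np.zeros((T + 1, MAX_INV + 1))

for t in range(1, T + 1):
    for x in range(MAX_INV + 1):
        best = -np.inf
        for price in PRICES:
            mean = 5.0 * (t / T) / price     # fresher and cheaper -> more demand
            pmf = demand_pmf(mean, x)
            sales = np.arange(x + 1)
            value = (pmf * (price * sales + V[t - 1, x - sales])).sum()
            if value > best:
                best, policy[t, x] = value, price
        V[t, x] = best

print("Optimal first-period price with a full batch:", policy[T, MAX_INV])
print("Expected revenue:", round(V[T, MAX_INV], 2))
```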

Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing

Procedia PDF Downloads 239
16587 Integrated Formulation of Project Scheduling and Material Procurement Considering Different Discount Options

Authors: Babak H. Tabrizi, Seyed Farid Ghaderi

Abstract:

On-time availability of materials at construction sites plays an outstanding role in the successful achievement of a project’s deliverables. Thus, this paper investigates the simultaneous formulation of project scheduling and material procurement through a mixed-integer programming model, aiming to minimize the penalty/maximize the reward for delivering the project and to minimize material holding, ordering, and procurement costs. We have taken both all-units and incremental discount possibilities into consideration to provide more flexibility on the procurement side with regard to real-world conditions. Finally, the applicability and efficiency of the mathematical model are tested on different numerical examples.
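
To make the two discount schemes concrete, the sketch below contrasts the purchasing cost of a single material under an all-units discount (the price of the bracket reached applies to every unit) and an incremental discount (each price applies only to the units inside its bracket). The break points and prices are invented for illustration; in the paper they would enter the mixed-integer model as data.

```python
# Price breaks: an order of at least `q_min` units reaches `unit_price`.
BREAKS = [(0, 10.0), (100, 9.0), (500, 8.0)]   # illustrative numbers

def all_units_cost(q):
    """Every unit is bought at the price of the bracket that q falls into."""
    _, price = max((b for b in BREAKS if b[0] <= q), key=lambda b: b[0])
    return q * price

def incremental_cost(q):
    """Each bracket's price applies only to the units bought inside it."""
    cost = 0.0
    for i, (q_min, price) in enumerate(BREAKS):
        upper = BREAKS[i + 1][0] if i + 1 < len(BREAKS) else float("inf")
        cost += max(0, min(q, upper) - q_min) * price
    return cost

for q in (80, 150, 600):
    print(q, "units -> all-units:", all_units_cost(q),
          " incremental:", round(incremental_cost(q), 1))
```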

Keywords: discount strategies, material purchasing, project planning, project scheduling

Procedia PDF Downloads 246
16586 Perfectly Keyless Commercial Vehicle

Authors: Shubha T., Latha H. K. E., Yogananth Karuppiah

Abstract:

Accessing and sharing automobiles will become much simpler thanks to the wide range of automotive use cases made possible by digital keys. This study aims to provide digital keys to car owners and drivers so they can lock or unlock their automobiles and start the engine using a smartphone or other Bluetooth Low Energy-enabled mobile device. Private automobile owners can digitally lend their car keys to family members or friends without having to physically meet them, possibly for a certain period of time. Owners of company automobile fleets can electronically distribute car keys to staff members, possibly granting access for a given day or length of time. Customers no longer need to physically pick up car keys at a rental desk, because automobile owners can digitally transfer the keys to them.

Keywords: NFC, BLE, CCC, digital key, OEM

Procedia PDF Downloads 129
16585 Towards Reliable Mobile Cloud Computing

Authors: Khaled Darwish, Islam El Madahh, Hoda Mohamed, Hadia El Hennawy

Abstract:

Cloud computing has been one of the fastest growing parts of the IT industry, mainly in the context of the future of the web, where computing, communication, and storage services are the main services provided to Internet users. Mobile Cloud Computing (MCC) is gaining momentum; it can be used to extend cloud computing functions, services and results to the world of future mobile applications and enables delivery of a large variety of cloud applications to billions of smartphones and wearable devices. This paper addresses reliability for MCC, i.e., the ability of a system or component to function correctly under stated conditions for a specified period of time, in order to deal with the estimation and management of high levels of lifetime engineering uncertainty and risks of failure. The assessment procedure consists of determining the Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and availability percentages for the main components in both cloud computing and MCC structures, applied to a single-node OpenStack installation, to analyze its performance with different settings governing the behavior of participants. Additionally, we present several factors that have a significant impact on overall cloud system reliability and that should be taken into account in order to deliver highly available cloud computing services to mobile consumers.
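
The reliability figures named above follow directly from observed failure logs: MTBF is total operating time divided by the number of failures, MTTR is the mean repair time, and steady-state availability can be taken as MTBF / (MTBF + MTTR). The snippet below is a minimal illustration with made-up uptime/downtime records for a single OpenStack-style component; it is not tied to any real measurement.

```python
# Alternating uptime/downtime records (hours) for one component -- invented data.
uptimes = [310.0, 450.5, 120.0, 600.0]     # operating periods between failures
downtimes = [2.5, 4.0, 1.5, 3.0]           # repair periods after each failure

failures = len(downtimes)
mtbf = sum(uptimes) / failures                      # mean time between failures
mttr = sum(downtimes) / failures                    # mean time to repair
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h")
print(f"Availability = {availability:.4%}")
```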

Keywords: cloud computing, mobile cloud computing, reliability, availability, OpenStack

Procedia PDF Downloads 383
16584 Multi-Level Attentional Network for Aspect-Based Sentiment Analysis

Authors: Xinyuan Liu, Xiaojun Jing, Yuan He, Junsheng Mu

Abstract:

Aspect-based Sentiment Analysis (ABSA) has attracted much attention due to its capacity to determine the sentiment polarity of a certain aspect in a sentence. Previous work has shown the great significance of the interaction between the aspect and the sentence in ABSA. Consequently, a Multi-Level Attentional Network (MLAN) is proposed. MLAN consists of four parts: an Embedding Layer, an Encoding Layer, Multi-Level Attentional (MLA) Layers and a Final Prediction Layer. Among these parts, the MLA Layers, comprising an Aspect Level Attentional (ALA) Layer and an Interactive Attentional (ILA) Layer, are the innovation of MLAN; their function is to focus on the important information and obtain attention-weighted representations of the aspect and the sentence at multiple levels. In the experiments, MLAN is compared with the classical TD-LSTM, MemNet, RAM, ATAE-LSTM, IAN, AOA, LCR-Rot and AEN-GloVe on the SemEval 2014 dataset. The experimental results show that MLAN greatly outperforms these state-of-the-art models. In a case study, the ALA Layer and ILA Layer are shown to be effective and interpretable.
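
The aspect-level attention described above can be sketched in a few lines: each sentence word is scored against the aspect representation, the scores are soft-maxed into weights, and the weighted sum gives the attended sentence vector (the interactive layer repeats the idea in both directions). The NumPy fragment below uses random embeddings and a bilinear scoring matrix purely to show the mechanics; it is not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding size (illustrative)
sentence = rng.normal(size=(6, d))      # 6 word vectors from the encoding layer
aspect = rng.normal(size=(d,))          # pooled aspect representation

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Aspect-level attention: score each word against the aspect (bilinear form).
W = rng.normal(size=(d, d))
scores = sentence @ W @ aspect          # one score per word
alpha = softmax(scores)                 # attention weights over the words
attended_sentence = alpha @ sentence    # attention-weighted sentence vector

print("attention weights:", np.round(alpha, 3))
print("attended representation shape:", attended_sentence.shape)
```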

Keywords: deep learning, aspect-based sentiment analysis, attention, natural language processing

Procedia PDF Downloads 124
16583 Aerodynamic Brake Study of Reducing Braking Distance for High-Speed Trains

Authors: Phatthara Surachon, Tosaphol Ratniyomchai, Thanatchai Kulworawanichpong

Abstract:

This paper presents a study of reducing the braking distance of high-speed trains (HST) using aerodynamic brakes, inspired by their application on commercial aircraft wings. In an emergency, both the braking distance and the stopping time are longer than in the usual situation. Therefore, passenger safety and HST driving control management benefit definitively from reducing the braking time and distance during emergency situations. Because the aerodynamic brake has seen only limited study and implementation in HSTs, its feasibility and its effect on the train's dynamic movement during braking are analyzed and considered. By analogy with aircraft flaps, the area of the aerodynamic brake, which acts as an additional drag force during train braking, can be varied depending on the operating angle and the required dynamic braking force. An HST with a speed varying from 200 km/h to 350 km/h is taken as the case study of this paper. The results show that the stopping time and the braking distance are effectively reduced by the aerodynamic brakes. The mechanical brake and its maintenance also benefit, as its lifetime is extended for longer use.
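
The contribution of the brake panels can be sketched with the standard drag relation F_d = 0.5·ρ·C_d·A·v², added to a constant mechanical braking force and integrated over speed to obtain stopping time and distance. The numbers below (train mass, panel drag area, mechanical force) are invented for illustration and are not taken from the study.

```python
RHO = 1.225          # air density, kg/m^3
MASS = 4.0e5         # train mass, kg (illustrative)
F_MECH = 3.0e5       # constant mechanical braking force, N (illustrative)
CD_A = 8.0           # drag coefficient x deployed panel area, m^2 (illustrative)

def braking(v0_kmh, use_aero, dt=0.01):
    """Integrate the braking run from v0 down to standstill."""
    v = v0_kmh / 3.6
    t = s = 0.0
    while v > 0.0:
        f_aero = 0.5 * RHO * CD_A * v**2 if use_aero else 0.0
        a = (F_MECH + f_aero) / MASS          # deceleration, m/s^2
        v = max(0.0, v - a * dt)
        s += v * dt
        t += dt
    return t, s

for v0 in (200, 350):
    t0, s0 = braking(v0, use_aero=False)
    t1, s1 = braking(v0, use_aero=True)
    print(f"{v0} km/h: distance {s0:.0f} m -> {s1:.0f} m, "
          f"time {t0:.1f} s -> {t1:.1f} s")
```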

Keywords: high-speed train, aerodynamic brake, brake distance, drag force

Procedia PDF Downloads 181
16582 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Shared Channel Receiver under Fading Channel

Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis

Abstract:

Cellular Vehicle-to-Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Experience from previous generations has shown that establishing a simulator for C-V2X communications is an essential preliminary step to achieving reliable and stable communication links. This paper proposes a complete framework for a link-level simulator based on the 3GPP specifications for the Physical Sidelink Shared Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., a sliding window for channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
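
For a single subcarrier with estimated channel gain h and noise variance σ², the MMSE equalizer referred to above weights the received sample by h*/(|h|² + σ²) instead of the zero-forcing 1/h, which limits noise enhancement on deeply faded subcarriers. The NumPy sketch below compares the mean squared error of the two per-subcarrier equalizers on random QPSK symbols over a flat Rayleigh channel; it is only a toy link, not the full PSSCH simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048                                  # subcarriers / resource elements (toy)
snr_db = 10.0
sigma2 = 10 ** (-snr_db / 10)             # noise variance for unit-power symbols

# Random QPSK symbols over a flat Rayleigh-faded subcarrier plus AWGN.
bits = rng.integers(0, 2, size=(n, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = h * x + noise

# Per-subcarrier equalizers: zero forcing vs. MMSE (regularized by sigma^2).
x_zf = y / h
x_mmse = np.conj(h) / (np.abs(h) ** 2 + sigma2) * y

print(f"MSE zero-forcing: {np.mean(np.abs(x_zf - x) ** 2):.3f}")
print(f"MSE MMSE        : {np.mean(np.abs(x_mmse - x) ** 2):.3f}")
```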

Keywords: C-V2X, channel estimation, link-level simulator, sidelink, 3GPP

Procedia PDF Downloads 107
16581 SFO-ECRSEP: Sensor Field Optimization Based ECRSEP for Heterogeneous WSNs

Authors: Gagandeep Singh

Abstract:

Sensor field optimization is a serious issue in WSNs and has been ignored by many researchers. In numerous real-time sensing fields, the sensor nodes on the corners, i.e., on the segment boundaries, die early because no special protection is provided for them. Accordingly, the central objective of this research work is segment-based optimization, achieved by separating the sensor field into advanced and normal segments. The motivation behind this sensor field optimization is to extend the time span until the first sensor node dies, because normal sensor nodes located on the borders may die early: the distance between them and the base station is greater, so they consume more power and therefore fail sooner.

Keywords: WSNs, ECRSEP, SEP, field optimization, energy

Procedia PDF Downloads 284
16580 Time Pressure and Its Effect at Tactical Level of Disaster Management

Authors: Agoston Restas

Abstract:

Introduction: When managing disasters, decision makers can often face special situations in which any early sign of a drastic change is missing; therefore, improvised decision making may be required. The complexity, ambiguity, uncertainty or volatility of the situation can often require improvisation in decision making. Improvisation can occur at any level of management (strategic, operational and tactical), but at the tactical level the main reason for improvisation is surely time pressure, which is certainly the biggest problem during the management of an incident. Methods: The author used different tools and methods to achieve his goals; one of them was a study of the relevant literature, another was his own experience as a firefighting manager. Other results come from two surveys referred to here: one was an essay analysis, the second a word association test specially created for the research. Results and discussion: This article shows that, in certain situations, multi-criteria, evaluative decision-making processes simply cannot be used, or only in a limited manner. However, managers, directors or commanders often find themselves in situations that simply cannot be ignored and in which decisions must be made in a short time. The functional background of decisions made in a short time, and their mechanism, which differs from the conventional one, has been studied recently, and this special decision procedure was given the name recognition-primed decision. In the article, the author illustrates the limits of analytical decision-making, presents the general operating mechanism of recognition-primed decision-making, elaborates on a special model of it relevant to managers at the tactical level, and explores and systemizes the factors that facilitate (catalyze) these processes, with an example involving fire managers.

Keywords: decision making, disaster managers, recognition primed decision, model for making decisions in emergencies

Procedia PDF Downloads 242
16579 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. It involves participants seeking the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently by executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is also not best for the trader, since it was the reason for the outbreak of the ‘Hot-Potato Effect,’ which in turn demands a better and more efficient model. The characteristics of the model should be such that it is flexible and has diverse applications. Therefore, a model which has proven itself in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be further extended and adapted for future research, and be equipped with tools that allow it to be used readily in the field of finance. In this case, the Stochastic Pi Calculus model seems to be an ideal fit for financial applications, owing to its success in the field of biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided the application of this model is further extended. This model would focus on solving the problem which led to the ‘Flash Crash,’ namely the ‘Hot-Potato Effect.’ The model consists of small sub-systems which can be integrated to form a large system. It is designed in such a way that the behavior of ‘noise traders’ is considered as a random process or noise in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the ‘Flash Crash,’ the ‘Hot-Potato Effect,’ the evaluation of orders and time delay in further detail. For the future, there is a need to focus on the calibration of the modules so that they interact perfectly with other modules. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 62
16578 Influence of Various Disaster Scenario Assumptions on the Advance Creation of a Wide-Area Evacuation Plan Confronting Natural Disasters

Authors: Nemat Mohammadi, Yuki Nakayama

Abstract:

The Great East Japan earthquake, and as a consequence the invasion of an extremely large tsunami into the city, obligated many local governments to take these kinds of issues into account. The poor preparation of local governments to deal with such disasters at that time, and the consequent lack of assistance delivered to local residents, caused thousands of civilian casualties as well as billions of dollars of economic damage. The local governments responsible for governing such coastal areas have to consider countermeasures to deal with these natural disasters, prepare a comprehensive evacuation plan and devise feasible emergency plans with the purpose of reducing the number of victims as much as possible. Under such an evacuation plan, the local government should give more consideration to traffic congestion during a wide-area evacuation operation and estimate the minimum essential time to evacuate the whole city completely. This challenge becomes more complicated for the government when the people affected by the disaster are not limited to normal, informed citizens but also include pregnant women, physically handicapped persons, elderly citizens, and foreigners or tourists who are familiar with neither the conditions nor the local language. The first important issue in dealing with this challenge is how to inform these people so that they take proper action right away once it is known that a tsunami is coming. After overcoming this problem, the next significant challenge is even more considerable: to evacuate all residents in a short period of time from the threatened area to safer shelters. In fact, most citizens will use their own vehicles to evacuate to the designated shelters, and some of them will use the shuttle buses provided by local governments. A problem arises when all residents want to escape from the threatened area simultaneously, consequently creating traffic jams on evacuation routes that prolong the evacuation time. Hence, this research mainly aims to calculate the minimum essential time to evacuate each region inside the threatened area and to find the evacuation start point for each region separately. This result will help the local government to visualize the situations and conditions during disasters and assist it in reducing possible traffic jams on evacuation routes, consequently suggesting a comprehensive wide-area evacuation plan for natural disasters.
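
Consistent with the BPR formula listed in the keywords, the congested travel time on an evacuation link can be written as t = t0 · (1 + α(V/C)^β), with the customary α = 0.15 and β = 4; summing over the links of a route and adding the departure offset gives an estimate of a region's evacuation completion time. The figures below (free-flow times, capacities, demand, start offset) are invented for illustration only.

```python
def bpr_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Congested link travel time from the BPR volume-delay function."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

# One illustrative evacuation route: (free-flow minutes, capacity veh/h) links.
route = [(5.0, 1800), (8.0, 1200), (4.0, 2000)]
evacuating_vehicles_per_hour = 2500        # demand loaded onto the route

travel = sum(bpr_time(t0, evacuating_vehicles_per_hour, cap) for t0, cap in route)
start_offset = 15.0                        # staggered evacuation start, minutes
completion = start_offset + travel
print(f"route travel time {travel:.1f} min, completion at {completion:.1f} min")
```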

Keywords: BPR formula, disaster scenarios, evacuation completion time, wide-area evacuation

Procedia PDF Downloads 196
16577 Coupled Spacecraft Orbital and Attitude Modeling and Simulation in Multi-Complex Modes

Authors: Amr Abdel Azim Ali, G. A. Elsheikh, Moutaz Hegazy

Abstract:

This paper presents the verification of a modeling and simulation framework for a spacecraft (SC) attitude and orbit control system. A detailed formulation of the coupled SC orbital and attitude equations of motion is performed in order to achieve acceptable accuracy and meet the requirements of the multi-target tracking and orbit correction complex modes. Correction of the target parameters based on the estimated state vector during shooting time, to enhance pointing accuracy, is considered. A time-optimal nonlinear feedback control technique was used in order to take full advantage of the maximum torques that the controller can deliver. The simulation provides options for visualizing the SC trajectory and attitude in a 3D environment through an interface with V-Realm Builder and VR Sink in Simulink/MATLAB. Verification data confirm the simulation results, ensuring that the model and the proposed control law can be used successfully for large and fast tracking and are robust enough to keep the pointing accuracy within the desired limits despite considerable uncertainty in inertia and control torque.

Keywords: attitude and orbit control, time-optimal nonlinear feedback control, modeling and simulation, pointing accuracy, maximum torques

Procedia PDF Downloads 309
16576 Flexible Furniture in Urban Open Spaces: A Tool to Achieve Social Sustainability

Authors: Mahsa Ghafouri, Guita Farivarsadri

Abstract:

In urban open spaces, furniture plays a crucial role in meeting the various needs of users over time. Furniture consists of elements that not only facilitate physical needs individually but also fulfill social, psychological, and cultural demands on an urban scale. Creating adjustable urban spaces and using flexible furniture can make urban spaces usable for a wide range of uses and activities and allow the engagement of users with distinct abilities and limitations in these activities. Flexibility in urban furniture can be seen as designing a number of modular components that are movable, expandable, adjustable, and changeable to accommodate various functions. Although there is a great amount of research related to flexibility and its distinct insights into achieving spaces that can cope with changing demands, this fundamental issue is often neglected in the design of urban furniture. However, in the long term, to address changing public needs over time, it is logical to bring this quality into the design process to make spaces that can be sustained for a long time. This study aims first to introduce the diverse kinds of flexible furniture that can be designed for urban public spaces and then to establish how this flexible furniture can improve the quality of public open spaces and social interaction, make them more adaptable over time and, as a result, achieve social sustainability. This research is descriptive and is mainly based on an extensive literature review and the analysis and classification of existing examples around the world. It illustrates various approaches that can help designers create flexible furniture to enhance the sustainability and quality of urban open spaces and, in this way, acts as a guide for urban designers in this respect.

Keywords: flexible furniture, flexible design, urban open spaces, adaptability, moveability, social sustainability

Procedia PDF Downloads 40
16575 A Composite Beam Element Based on Global-Local Superposition Theory for Prediction of Delamination in Composite Laminates

Authors: Charles Mota Possatti Júnior, André Schwanz de Lima, Maurício Vicente Donadon, Alfredo Rocha de Faria

Abstract:

An interlaminar damage model is combined with a beam element formulation based on global-local superposition to assess delamination in composite laminates. The variations in the mechanical properties of the laminate generated by the presence of delamination are calculated as a function of the displacements in the interface layers. The global-local superposition of displacement fields ensures the zig-zag behaviour of stresses and displacements, and the number of degrees of freedom (DOFs) is independent of the number of layers. The displacements and stresses are calculated as a function of the DOFs commonly used in traditional beam elements. Finally, the finite element (FE) formulation is extended to handle cases of different thicknesses, and the FE model predictions are compared with results obtained from analytical solutions and commercial finite element codes.

Keywords: delamination, global-local superposition theory, single beam element, zig-zag, interlaminar damage model

Procedia PDF Downloads 104
16574 To Design an Architectural Model for On-Shore Oil Monitoring Using Wireless Sensor Network System

Authors: Saurabh Shukla, G. N. Pandey

Abstract:

In recent times, oil exploration and monitoring in on-shore areas have gained much importance, considering the fact that in India oil imports account for 62 percent of total imports. Thus, an architectural model such as a wireless sensor network to monitor deep on-shore oil wells is being developed to obtain a better estimate of the oil prospects. The problem we face nowadays is that very few unexplored oil-bearing areas are left today. Countries like India do not have large areas and resources for oil; this is the case for most countries, which is why it has become a major problem for oil exploration in on-shore areas, and the increase in oil prices has further aggravated the problem. For this purpose, the relative simplicity, small size and affordable cost of wireless sensor nodes permit heavy deployment in on-shore places for monitoring oil wells. Deployment of a wireless sensor network over large areas will surely reduce the cost and be very cost-effective. The objective of this system is to send real-time information on oil monitoring to the regulatory and welfare authorities so that suitable action can be taken. The system architecture is composed of a sensor network, a processing/transmission unit and a server. This wireless sensor network system can remotely monitor real-time data on oil exploration and monitoring conditions in the identified areas. Such systems are wireless, have scarce power, are real-time, utilize sensors and actuators as interfaces, and have dynamically changing sets of resources; their aggregate behaviour is important and location is critical. In this system, communication takes place between the server and the remotely placed sensors. The server provides the real-time oil exploration and monitoring conditions to the welfare authorities.

Keywords: sensor, wireless sensor network, oil, on-shore level

Procedia PDF Downloads 428
16573 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study

Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon

Abstract:

The effective management of wearing-off is a key driver of medication changes for patients with Parkinson’s disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line option is adjusting the daily L-DOPA dose, followed by adjunctive therapies, usually accounted for via the L-DOPA equivalent daily dose (LEDD). LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work is to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC was found to be well tolerated and efficacious in the advanced PD population. This work utilized patients' home diary data from a 4-week Phase 2 pharmacokinetics clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3–4 times daily. The main endpoint was the change from baseline in off time in the subgroup of patients receiving 300–400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and the LEDD were lower in the L-DOPA 300–400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, the LEDD was similar between the two groups. The mean (±standard error) reduction in off time was approximately three-fold greater for the OPC 50 mg group than for the L-DOPA 100 mg group, being -63.0 (14.6) minutes for patients treated with L-DOPA 300–400 mg/day plus OPC 50 mg, and -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite a similar LEDD, OPC demonstrated a significantly greater reduction in off time when compared to an additional 100 mg L-DOPA dose. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of the therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible, to avoid the development of motor complications, which are a significant source of disability.
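
Since LEDD is central to the comparison above, a worked miniature may help: the levodopa equivalent daily dose is the sum over a patient's antiparkinsonian drugs of (daily dose × conversion factor). The conversion factors in the sketch below are assumptions of this example (immediate-release levodopa taken as 1.0 and an adjunct COMT inhibitor modelled as a fractional multiplier of the concomitant levodopa dose); they are not figures reported in the study.

```python
# Illustrative LEDD bookkeeping. The conversion factors are assumptions of
# this sketch, not values taken from the ADOPTION study.
LD_IR_FACTOR = 1.0
COMT_MULTIPLIER = 0.5        # assumed multiplier for a COMT inhibitor adjunct

def ledd(levodopa_mg_per_day, comt_inhibitor=False):
    """L-DOPA equivalent daily dose = sum of (dose x conversion factor)."""
    total = levodopa_mg_per_day * LD_IR_FACTOR
    if comt_inhibitor:
        total += COMT_MULTIPLIER * levodopa_mg_per_day
    return total

# Two illustrative regimens mirroring the subgroup comparison above.
print("L-DOPA 400 mg/day + COMT adjunct:", ledd(400, comt_inhibitor=True), "mg LEDD")
print("L-DOPA 400 + extra 100 mg/day   :", ledd(500), "mg LEDD")
```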

Keywords: opicapone, levodopa, pharmacokinetics, off-time

Procedia PDF Downloads 50
16572 CFD Prediction of the Round Elbow Fitting Loss Coefficient

Authors: Ana Paula P. dos Santos, Claudia R. Andrade, Edson L. Zaparoli

Abstract:

Pressure loss in ductwork is an important factor to be considered in the design of engineering systems such as power plants, refineries, and HVAC systems, in order to reduce energy costs. Ductwork can be composed of straight ducts and different types of fittings (elbows, transitions, converging and diverging tees and wyes). Duct fittings are significant sources of pressure loss in fluid distribution systems. Fitting losses can be even more significant than equipment components such as coils, filters, and dampers. In the present work, a conventional 90° round elbow under turbulent incompressible airflow is studied. The mass, momentum, and k-ε turbulence model equations are solved employing the finite volume method. The SIMPLE algorithm is used for the pressure-velocity coupling. In order to validate the numerical tool, the elbow pressure loss coefficient is determined under the same conditions for comparison with the ASHRAE database. Furthermore, the effect of Reynolds number variation on the elbow pressure loss coefficient is investigated. These results can be useful for better preliminary design of air distribution ductwork in air conditioning systems.
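
For reference, the dimensionless loss coefficient extracted from such a simulation is C = Δp / (0.5·ρ·V²), the total pressure drop attributable to the fitting scaled by the dynamic pressure at the mean duct velocity, and the Reynolds number used in the sensitivity study is Re = ρVD/μ. The short sketch below evaluates both from invented post-processed values; it does not reproduce the paper's CFD results.

```python
RHO = 1.204      # air density at ~20 degC, kg/m^3
MU = 1.82e-5     # dynamic viscosity of air, Pa*s
D = 0.2          # duct diameter, m (illustrative)

def loss_coefficient(delta_p, velocity, rho=RHO):
    """Fitting loss coefficient C = dp / (0.5 * rho * V^2)."""
    return delta_p / (0.5 * rho * velocity ** 2)

def reynolds(velocity, diameter=D, rho=RHO, mu=MU):
    """Duct Reynolds number Re = rho * V * D / mu."""
    return rho * velocity * diameter / mu

# Invented (mean velocity m/s, fitting pressure drop Pa) pairs.
for v, dp in [(3.0, 1.2), (6.0, 4.6), (9.0, 10.1)]:
    print(f"V = {v} m/s  Re = {reynolds(v):.2e}  C = {loss_coefficient(dp, v):.3f}")
```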

Keywords: duct fitting, pressure loss, elbow, thermodynamics

Procedia PDF Downloads 377
16571 Performance of Heifer Camels (Camelus dromedarius) on Native Range Supplemented with Different Energy Levels

Authors: Shehu, B., Muhammad, B. F., Madigawa, I. L., Alkali, H. A.

Abstract:

The study was conducted to assess heifer camel behavior and live weight changes on native range supplemented with different energy levels. A total of nine camels aged between 2 and 3 years were randomly allotted into three groups, supplemented with 3400, 3600 and 3800 kcal and designated A, B and C, respectively. The data obtained were analyzed for variance in a completely randomized design. The heifers spent an average of 371.70 min/day (64% of daylight time) browsing on native pasture and 2.30 min/day (6%) sand bathing. A significantly higher mean time was spent by the heifers browsing Leptadenia hastata (P<0.001), Dichrostachys cinerea (P<0.01), Acacia nilotica (P<0.001) and Ziziphus spina-christi (P<0.05) in the early dry season (January). No significant difference was recorded in browsing time on Tamarindus indica, Adansonia digitata, Piliostigma reticulatum, Parkia biglobosa and Azadirachta indica. No significant (P<0.05) live weight change was recorded in the heifers due to the three energy levels. It was concluded that nutritive browse species in the study area could meet camel nutrient requirements, including energy. Further research on the effect of period on camel nutrient requirements under different physiological conditions is recommended.

Keywords: heifer, camel, grazing, pasture

Procedia PDF Downloads 531
16570 Energy-Efficient Clustering Protocol in Wireless Sensor Networks for Healthcare Monitoring

Authors: Ebrahim Farahmand, Ali Mahani

Abstract:

Wireless sensor networks (WSNs) can facilitate continuous monitoring of patients and increase the early detection of emergency conditions and diseases. High-density WSNs help us to accurately monitor a remote environment by intelligently combining the data from the individual nodes. Due to the limited energy capacity of sensors, enhancing the lifetime and the reliability of WSNs are important factors in the design of these networks. Clustering strategies have been verified as effective and practical algorithms for reducing energy consumption in WSNs and can tackle WSN limitations. In this paper, an Energy-efficient Weight-based Clustering Protocol (EWCP) is presented. An artificial retina is selected as a case study of WSNs applied to body sensors. Cluster head (CH) selection is equipped with energy-efficient parameters. Moreover, cluster members are selected based on their distance to the selected CHs. Compared with the other benchmark protocols, the lifetime of EWCP is improved significantly.
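
The two selection rules described above can be sketched directly: candidate cluster heads are ranked by an energy-aware weight (here, residual energy divided by distance to the sink, an invented weight for illustration), and every remaining node joins its nearest elected head. The NumPy toy below uses random node positions and energies; it is not the published EWCP formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_CH = 40, 4                              # nodes and cluster heads (illustrative)
pos = rng.uniform(0, 100, size=(N, 2))       # node positions in a 100 m field
energy = rng.uniform(0.5, 2.0, size=N)       # residual energy, J
sink = np.array([50.0, 120.0])               # base-station position

# Weight-based CH selection: prefer high residual energy close to the sink.
dist_to_sink = np.linalg.norm(pos - sink, axis=1)
weight = energy / dist_to_sink
cluster_heads = np.argsort(weight)[::-1][:N_CH]

# Cluster membership: each node joins the nearest elected CH.
d_to_ch = np.linalg.norm(pos[:, None, :] - pos[cluster_heads][None, :, :], axis=2)
membership = cluster_heads[np.argmin(d_to_ch, axis=1)]

print("cluster heads:", cluster_heads)
print("cluster sizes:",
      {int(ch): int((membership == ch).sum()) for ch in cluster_heads})
```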

Keywords: WSN, healthcare monitoring, weighted based clustering, lifetime

Procedia PDF Downloads 295
16569 Study of Seismic Damage Reinforced Concrete Frames in Variable Height with Logistic Statistic Function Distribution

Authors: P. Zarfam, M. Mansouri Baghbaderani

Abstract:

In seismic design, the proper reaction to the earthquake and the correct and accurate prediction of its subsequent effects on the structure are critical. Choosing a proper probability distribution, which gives a more realistic probability of the structure's damage rate, is essential in damage discussions. With the development of performance-based design, the analytical method of modal pushover, as an inexpensive, efficacious, and quick way to estimate the seismic response of structures, is broadly used in engineering contexts. In this research, three concrete frames of 3, 6, and 13 stories are analyzed by non-linear modal pushover with 30 different earthquake records in OpenSees software; then the damage indexes of roof displacement and relative displacement (drift) ratio of the stories are calculated against two parameters: peak ground acceleration and spectral acceleration. These indexes are used to establish damage relations with the log-normal distribution and the logistic distribution. Finally, the values of these relations are compared and the effect of height on the mentioned damage relations is studied as well.
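
The final step above amounts to fitting the two candidate distributions to a sample of damage-index values and comparing how well they describe it. The SciPy sketch below fits a log-normal and a logistic distribution by maximum likelihood to synthetic drift-ratio data and compares log-likelihoods; the data are generated stand-ins, not the frames' pushover results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic damage-index sample (e.g. relative displacement ratios), stand-in data.
drift = rng.lognormal(mean=np.log(0.01), sigma=0.4, size=30)

# Maximum-likelihood fits of the two candidate distributions.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(drift, floc=0)
lo_loc, lo_scale = stats.logistic.fit(drift)

ll_lognorm = stats.lognorm.logpdf(drift, ln_shape, ln_loc, ln_scale).sum()
ll_logistic = stats.logistic.logpdf(drift, lo_loc, lo_scale).sum()

print(f"log-likelihood  log-normal: {ll_lognorm:.2f}")
print(f"log-likelihood  logistic  : {ll_logistic:.2f}")
```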

Keywords: modal pushover analysis, concrete structure, seismic damage, log-normal distribution, logistic distribution

Procedia PDF Downloads 233
16568 Robust Medical Image Watermarking Using Frequency Domain and Least Significant Bits Algorithms

Authors: Volkan Kaya, Ersin Elbasi

Abstract:

Watermarking and steganography have been gaining importance recently because of copyright protection and authentication. In watermarking, we embed a stamp, logo, noise or image into multimedia elements such as images, video, audio, animation and text. Several works have been done on watermarking for different purposes. In this research work, we used watermarking techniques to embed patient information into medical magnetic resonance (MR) images. Two classes of methods have been used: frequency domain (Discrete Wavelet Transform, DWT; Discrete Cosine Transform, DCT; and Discrete Fourier Transform, DFT) and spatial domain (Least Significant Bits, LSB). Experimental results show that embedding in the frequency domain resists one group of attacks, while embedding in the spatial domain resists another group of attacks. Peak Signal-to-Noise Ratio (PSNR) and Similarity Ratio (SR) are the two measurement values used for testing. These two values give very promising results for information hiding in medical MR images.
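
The spatial-domain branch of the comparison is easy to show: the patient information is packed into bits and written into the least significant bit of each pixel, and the distortion is checked with PSNR. The NumPy sketch below does this for a random 8-bit image and an ASCII string; it is a generic LSB illustration, not the exact embedding used in the paper, and the frequency-domain (DWT/DCT/DFT) variants are omitted.

```python
import numpy as np

def embed_lsb(image, message):
    """Hide an ASCII message in the least significant bits of an 8-bit image."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = image.flatten()                       # copy of the cover pixels
    if bits.size > flat.size:
        raise ValueError("message does not fit in the cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_chars):
    bits = (image.flatten()[: n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode()

def psnr(original, marked):
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    return 10 * np.log10(255 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in MR slice
secret = "Patient 0042, T1w"                                  # hypothetical record
stego = embed_lsb(cover, secret)

print("recovered :", extract_lsb(stego, len(secret)))
print(f"PSNR      : {psnr(cover, stego):.2f} dB")
```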

Keywords: watermarking, medical image, frequency domain, least significant bits, security

Procedia PDF Downloads 276
16567 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on test at time T0 = 0, with k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in the different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
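
The censoring scheme described above is simple to simulate, which is also how the performance of the MLEs and tolerance intervals is typically studied: Weibull lifetimes are generated, failures are counted inside each inspection interval, and a binomial number of the survivors is withdrawn at every inspection time. The sketch below implements that data-generation step with NumPy under invented parameter values; the estimation step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100                                  # items placed on test at T0 = 0
inspections = [0.5, 1.0, 1.5, 2.0]       # T1 < ... < Tk (Tk ends the experiment)
removal_p = [0.2, 0.2, 0.3, 1.0]         # binomial removal probabilities, pk = 1
shape, scale = 1.8, 1.6                  # Weibull lifetime parameters (invented)

lifetimes = scale * rng.weibull(shape, size=n)
active = lifetimes.copy()                # units still on test
record = []

for t_i, p_i in zip(inspections, removal_p):
    failed = int((active <= t_i).sum())            # failures in (T_{i-1}, T_i]
    active = active[active > t_i]                  # S_i surviving units
    removed = int(rng.binomial(active.size, p_i))  # R_i ~ Binomial(S_i, p_i)
    keep = rng.permutation(active.size) >= removed
    active = active[keep]                          # withdraw R_i survivors at random
    record.append((t_i, failed, removed))

print("inspection time, failures, removals:")
for row in record:
    print(row)
```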

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 232
16566 Maintenance Wrench Time Improvement Project

Authors: Awadh O. Al-Anazi

Abstract:

As part of the organizational needs for successful maintenance activities, a proper management system needs to be put in place to ensure the effectiveness of maintenance activities. The management system shall clearly describe the process of identifying, prioritizing, planning, scheduling, executing, and providing valuable feedback for all maintenance activities. Completeness and accuracy of the system, with proper implementation, shall provide the organization with a strong platform for effective maintenance activities that result in efficient outcomes toward business success. The purpose of this research was to introduce a practical tool for measuring the maintenance efficiency level within Saudi organizations. A comprehensive study was launched across many maintenance professionals throughout leading Saudi organizations. The study covered five main categories: work process, identification, planning and scheduling, execution, and performance monitoring. Each category was evaluated across many dimensions to determine its current effectiveness on a five-level scale from 'process is not there' to 'mature implementation'. Wide participation was received, responses were analyzed, and the study concluded by highlighting major gaps and improvement opportunities within Saudi organizations. One effective implementation of the efficiency enhancement efforts was deployed in Saudi Kayan (one of the SABIC affiliates). The details below describe the project outcomes: SK's overall maintenance wrench time was measured at 20% (on average) of the total daily working time. The assessment indicated several organizational gaps, such as a high amount of reactive work, poor coordination and teamwork, unclear roles and responsibilities, as well as underutilization of resources. A multidisciplinary team was assigned to design and implement an appropriate work process capable of governing the execution process, improving the efficiency of the maintenance workforce, and maximizing wrench time (targeting > 50%). The enhanced work process was introduced through brainstorming and wide benchmarking, incorporated with a proper change management plan and leadership sponsorship. The project was completed in 2018. Achieved results: SK wrench time was improved to 50%, which resulted in 1) reducing the average notification completion time and 2) reducing maintenance expenses on overtime and manpower support (3.6 MSAR actual savings from budget within 6 months).

Keywords: efficiency, enhancement, maintenance, work force, wrench time

Procedia PDF Downloads 124