Search results for: nonlinear dynamic model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19930

17530 A Generalized Model for Performance Analysis of Airborne Radar in Clutter Scenario

Authors: Vinod Kumar Jaysaval, Prateek Agarwal

Abstract:

Performance prediction of airborne radar in a clutter scenario is a challenging and cumbersome task for different types of targets. A generalized model is required to predict the performance of radar for air targets as well as ground moving targets. In this paper, we propose a generalized model to characterize the performance of airborne radar for different Pulse Repetition Frequencies (PRF) as well as different types of targets. The model provides a platform to derive subsystem parameters for different applications and performance requirements under different types of clutter terrain.

Keywords: airborne radar, blind zone, clutter, probability of detection

Procedia PDF Downloads 470
17529 Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags

Authors: Zhang Shuqi, Liu Dan

Abstract:

Intelligent network anomaly traffic detection models suffer from low detection accuracy caused by a lack of training samples and from poor performance on small-sample attack detection. To address these problems, a classification model enhancement method, F-ACGAN (Flow Auxiliary Classifier Generative Adversarial Network), which introduces a generative adversarial network and adversarial training, is proposed. Generating adversarial data with category labels enhances the training effect and improves classification accuracy and model robustness. F-ACGAN consists of three steps. First, feature preprocessing, which includes data type conversion, dimensionality reduction, and normalization. Second, a generative adversarial network model with feature learning ability is designed, and the sample generation quality of the model is improved through adversarial iterations between the generator and the discriminator. Third, an adversarial disturbance factor along the gradient direction of the classification model is added to increase the diversity and antagonism of the generated data and to encourage the model to learn adversarial classification features. Experiments constructing a classification model on the UNSW-NB15 dataset show that with F-ACGAN enhancement of the base model, classification accuracy improves by 8.09% and the F1 score improves by 6.94%.
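The "adversarial disturbance factor of the gradient direction" can be sketched in miniature as a gradient-sign perturbation of an input feature vector against a fixed logistic unit; the weights, bias, and step size below are invented for illustration and are not from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def perturb(x, w, b, y, eps=0.1):
    """Shift x along the sign of the loss gradient w.r.t. the input
    (FGSM-style), which is the kind of disturbance used to diversify
    generated training data."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # d(cross-entropy)/dx_i = (p - y) * w_i for a single logistic unit
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

x = [0.5, -1.2, 0.3]          # illustrative flow features
w = [1.0, -0.5, 2.0]          # illustrative classifier weights
x_adv = perturb(x, w, b=0.0, y=1)
```

In the full method this perturbation is applied to GAN-generated samples, not raw inputs, before they are fed back into classifier training.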

Keywords: data imbalance, GAN, ACGAN, anomaly detection, adversarial training, data augmentation

Procedia PDF Downloads 105
17528 A Topological Approach for Motion Track Discrimination

Authors: Tegan H. Emerson, Colin C. Olson, George Stantchev, Jason A. Edelberg, Michael Wilson

Abstract:

Detecting small targets at range is difficult because there is not enough spatial information present in an image sub-region containing the target to use correlation-based methods to differentiate it from dynamic confusers present in the scene. Moreover, this lack of spatial information also disqualifies the use of most state-of-the-art deep learning image-based classifiers. Here, we use characteristics of target tracks extracted from video sequences as data from which to derive distinguishing topological features that help robustly differentiate targets of interest from confusers. In particular, we calculate persistent homology from time-delayed embeddings of dynamic statistics calculated from motion tracks extracted from a wide field-of-view video stream. In short, we use topological methods to extract features related to target motion dynamics that are useful for classification and disambiguation and show that small targets can be detected at range with high probability.
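The time-delayed embedding step the abstract describes can be sketched as follows; the sample speed series, embedding dimension, and lag are illustrative, and computing persistent homology on the resulting point cloud would require a TDA library, so it is omitted:

```python
def delay_embed(series, dim, tau):
    """Map a scalar time series to a point cloud in R^dim using lag tau;
    persistent homology is then computed on such clouds."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

# illustrative per-frame track statistic (e.g. speed) from a motion track
speeds = [0.1, 0.4, 0.9, 0.4, 0.1, 0.4, 0.9, 0.4]
cloud = delay_embed(speeds, dim=3, tau=2)
```

A periodic confuser (foliage, waves) traces a loop in the embedded space, while a target of interest typically does not, which is what the topological features pick up.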

Keywords: motion tracks, persistence images, time-delay embedding, topological data analysis

Procedia PDF Downloads 114
17527 A Hybrid Model Tree and Logistic Regression Model for Prediction of Soil Shear Strength in Clay

Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari

Abstract:

Without a doubt, shear strength is the most important property of soil. The majority of fatal and catastrophic geological accidents are related to shear strength failure of the soil, so its prediction is a matter of high importance. However, acquiring the shear strength is usually a cumbersome task that may require complicated laboratory testing, so predicting it from common, easy-to-obtain soil properties can simplify projects substantially. In this paper, a hybrid model based on the classification and regression tree (CART) algorithm and logistic regression is proposed, where each leaf of the tree is an independent regression model. A database of 189 points for clay soil, including moisture content, liquid limit, plastic limit, clay content, and shear strength, is collected. The performance of the developed model is compared to existing models and equations using root mean squared error and the coefficient of correlation.
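A toy depth-1 analogue of the tree-with-regression-leaves idea can be sketched as below; the paper combines CART with logistic regression on 189 clay samples, while this sketch uses one split with a linear model per leaf, and the piecewise-linear data are purely illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line, returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def model_tree_1split(xs, ys):
    """Depth-1 model tree: pick the split minimizing total SSE of a
    separate linear model fitted on each side."""
    best = None
    for s in sorted(set(xs))[1:-1]:
        left = [(x, y) for x, y in zip(xs, ys) if x < s]
        right = [(x, y) for x, y in zip(xs, ys) if x >= s]
        if len(left) < 2 or len(right) < 2:
            continue
        models = [fit_line(*zip(*part)) for part in (left, right)]
        sse = sum((y - (a + b * x)) ** 2
                  for part, (a, b) in zip((left, right), models)
                  for x, y in part)
        if best is None or sse < best[0]:
            best = (sse, s, models)
    return best[1], best[2]

# illustrative piecewise data: slope 1 below x=6, slope -2 above
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [1, 2, 3, 4, -12, -14, -16, -18]
split, (left_m, right_m) = model_tree_1split(xs, ys)
```

The hybrid in the paper generalizes this: the tree partitions the soil-property space, and a regression is fitted independently in each leaf.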

Keywords: model tree, CART, logistic regression, soil shear strength

Procedia PDF Downloads 197
17526 Comparison of Seismic Response for Two RC Curved Bridges with Different Column Shapes

Authors: Nina N. Serdar, Jelena R. Pejović

Abstract:

This paper presents a seismic risk assessment of two bridge structures, based on the probabilistic performance-based seismic assessment methodology. Both investigated bridges are three-span continuous RC curved bridges that differ in column shape. The first bridge (type A) has a wall-type pier and the second (type B) has a two-column bent with circular columns. The bridges are designed according to the European standards EN 1991-2, EN 1992-1-1 and EN 1998-2. The aim of the analysis is to compare the seismic behavior of these two structures and to detect the influence of column shape on the seismic response. Seismic risk assessment is carried out by obtaining demand fragility curves. A nonlinear model was constructed and time-history analysis was performed using thirty-five pairs of horizontal ground motions selected to match the site-specific hazard. In the performance-based analysis, peak column drift ratio (CDR) was selected as the engineering demand parameter (EDP), and spectral displacement was selected as the seismic intensity measure (IM). Demand fragility curves, which give the probability of exceedance of a certain value of the chosen EDP, were constructed, and conclusions were drawn from them.
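Demand fragility curves of this kind are commonly expressed as a lognormal CDF in the intensity measure. A minimal sketch, with the median and dispersion values invented for illustration (the paper estimates them from the thirty-five time-history analyses):

```python
import math

def fragility(im, median, beta):
    """P(EDP exceeds the chosen limit | IM = im) under the standard
    lognormal fragility model with median capacity and dispersion beta."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

# illustrative parameters: median spectral displacement 0.2 (arbitrary units)
p_low = fragility(0.1, median=0.2, beta=0.5)
p_med = fragility(0.2, median=0.2, beta=0.5)
p_high = fragility(0.4, median=0.2, beta=0.5)
```

By construction the exceedance probability is 0.5 exactly at the median IM, which is how such curves are usually reported.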

Keywords: RC curved bridge, demand fragility curve, wall type column, nonlinear time-history analysis, circular column

Procedia PDF Downloads 341
17525 Parametric Study on the Behavior of Reinforced Concrete Continuous Beams Flexurally Strengthened with FRP Plates

Authors: Mohammed A. Sakr, Tarek M. Khalifa, Walid N. Mansour

Abstract:

External bonding of fiber reinforced polymer (FRP) plates to reinforced concrete (RC) beams is an effective technique for flexural strengthening. This paper presents an analytical parametric study on the behavior of RC continuous beams flexurally strengthened with externally bonded FRP plates on the upper and lower fibers, conducted using a simple uniaxial nonlinear finite element model (UNFEM). UNFEM is able to estimate the load-carrying capacity, the different failure modes, and the interfacial stresses of such beams. The study investigated the effect of five key parameters on the behavior and moment redistribution of FRP-strengthened continuous beams: the length of the FRP plate; its width and thickness; the ratio of the FRP plate area to the concrete area; the cohesive shear strength of the adhesive layer; and the concrete compressive strength. The investigation resulted in a number of important conclusions reflecting the effects of the studied parameters on the behavior of RC continuous beams flexurally strengthened with externally bonded FRP plates.

Keywords: continuous beams, parametric study, finite element, fiber reinforced polymer

Procedia PDF Downloads 371
17524 Broadband Optical Plasmonic Antennas Using Fano Resonance Effects

Authors: Siamak Dawazdah Emami, Amin Khodaei, Harith Bin Ahmad, Hairul A. Adbul-Rashid

Abstract:

The Fano resonance effect in plasmonic nanoparticle materials gives such materials a number of unique optical properties and potential applicability for sensing, nonlinear devices, and slow-light devices. A Fano resonance is a consequence of coherent interference between superradiant and subradiant hybridized plasmon modes. Incident light excites the superradiant modes, whereas the subradiant modes possess zero or negligible net dipole moments and comparably negligible coupling with light. This research work details the derivation of an electrodynamic coupling model for the interaction of dipolar transitions and radiation in plasmonic nanoclusters such as quadrimers, pentamers, and heptamers. The directivity is calculated in order to quantify the redirection of emission. The geometry of the configured array of nanostructures strongly influences the transmission and reflection properties; consequently, the directivity of each antenna is related to the nanosphere size and the gap distances between the nanospheres in each model's structure. A well-separated configuration of nanospheres results in the structure behaving similarly to monomers, with the spectral peak of a broad superradiant mode centered in the vicinity of a 560 nm wavelength. Reducing the distance between ring nanospheres in pentamers and heptamers to 20-60 nm causes the coupling factor and charge distributions to increase and invokes a subradiant mode centered in the vicinity of 690 nm. Increasing the distance of the outer-ring nanospheres from the central nanospheres causes the coupling factor to decrease, the coupling factor being inversely proportional to the cube of the distance between nanospheres. This phenomenon leads to a dramatic decrease of the superradiant mode at a 200 nm distance between the central nanosphere and the outer ring. Effects from the superradiant mode vanish beyond a 240 nm distance between the central and outer-ring nanospheres.

Keywords: fano resonance, optical antenna, plasmonic, nano-clusters

Procedia PDF Downloads 429
17523 Load Maximization of Two-Link Flexible Manipulator Using Suppression Vibration with Piezoelectric Transducer

Authors: Hamidreza Heidari, Abdollah Malmir Nasab

Abstract:

In this paper, the energy equations of a two-link flexible manipulator were derived using the Euler-Bernoulli beam hypotheses. Applying the assumed mode method and considering a finite number of degrees of freedom, the dynamic equations of motion of the manipulator were obtained using the Euler-Lagrange equations. With its claw, the robot can carry a certain load while vibrations of its flexible links are actively suppressed along the traveling path using piezoceramic transducers, thereby increasing the dynamic load-carrying capacity. The traveling path of the flexible robot's claw is taken from that of an equivalent rigid manipulator; to remain within the Euler-Bernoulli beam assumptions and linear strains, the material and physical characteristics of the robot are selected so that the deflection of the link ends does not exceed 5% of the link length. On this basis, the maximum load-carrying capacity of the robot is calculated in the horizontal plane. With vibration control, the robot's load-carrying capacity increases by 53%.

Keywords: flexible link, DLCC, active control vibration, assumed mode method

Procedia PDF Downloads 397
17522 Prediction on Housing Price Based on Deep Learning

Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang

Abstract:

In order to study the impact of various factors on housing prices, we propose different prediction models based on deep learning, applied to existing real-estate data, in order to more accurately predict housing prices and their future trends. Considering that the factors affecting housing prices vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate: we built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among the three. The second is a time series model: based on deep learning, we propose an LSTM-1 model built purely on the time series, and implement and compare it with the Auto-Regressive Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted in three stages: data crawling and analysis, housing price prediction, and result comparison. Ultimately the best model was identified, which is of great significance for evaluation and prediction of housing prices in the real estate industry.
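As a minimal stand-in for the time-series branch (the paper itself uses LSTM and ARMA models), an AR(1) fit by least squares can be sketched as follows; the toy price series is invented for illustration:

```python
def fit_ar1(series):
    """Least-squares AR(1): y_t = c + phi * y_{t-1}.
    A toy stand-in for the autoregressive part of an ARMA model."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    return my - phi * mx, phi   # (intercept c, coefficient phi)

# illustrative noiseless series generated by y_t = 2 + 0.5 * y_{t-1}
prices = [1.0]
for _ in range(7):
    prices.append(2.0 + 0.5 * prices[-1])
c, phi = fit_ar1(prices)
```

On noiseless data the fit recovers the generating coefficients exactly; real ARMA fitting adds a moving-average term and maximum-likelihood estimation.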

Keywords: deep learning, convolutional neural network, LSTM, housing prediction

Procedia PDF Downloads 306
17521 Detecting Paraphrases in Arabic Text

Authors: Amal Alshahrani, Allan Ramsay

Abstract:

Paraphrasing, i.e. expressing the same concept in alternative ways by using different words or phrases, is one of the important tasks in natural language processing. Paraphrases can be used in many natural language applications, such as information retrieval, machine translation, question answering, text summarization, and information extraction. To obtain pairs of sentences that are paraphrases, we created a system that automatically extracts paraphrases from a corpus built from different sources of news articles, since these are likely to contain paraphrases when they report the same event on the same day. There are existing simple standard approaches (e.g. TF-IDF vector space with cosine similarity) and alignment techniques (e.g. Dynamic Time Warping (DTW)) for extracting paraphrases which have been applied to English. However, the performance of these approaches may suffer when they are applied to another language, for instance Arabic, due to phenomena which are not present in English, such as free word order, the zero copula, and pro-dropping. If we can analyze how the existing algorithms for English fail for Arabic, then we can find a solution for Arabic. The results are promising.
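The baseline TF-IDF/cosine-similarity approach mentioned above can be sketched as follows; the toy sentences are invented for illustration, and no Arabic-specific normalization (stemming, clitic segmentation) is attempted, which is precisely where the abstract expects English-oriented methods to struggle:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy TF-IDF over whitespace tokens: tf * log((1+N)/(1+df))."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["the president visited the city",
        "the president toured the city",
        "stocks fell sharply today"]
v = tfidf_vectors(docs)
```

A candidate sentence pair is flagged as a paraphrase when its cosine similarity exceeds a tuned threshold; the near-paraphrase pair above scores higher than the unrelated pair.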

Keywords: natural language processing, TF-IDF, cosine similarity, dynamic time warping (DTW)

Procedia PDF Downloads 386
17520 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process

Authors: Stehr Melanie

Abstract:

The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and usage of digitally driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases has been studied regularly in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as its theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. By focusing on an industry with traditional communication structures, a shift in information behavior induced by an exogenous shock is considered a ripe research setting. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory grid method, through which 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process. In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. Especially the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners, especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry, the study shows wide-ranging starting points for a future-oriented segmentation of their marketing program by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.

Keywords: B2B buying process, crisis, economics of information theory, information channel

Procedia PDF Downloads 184
17519 An Extended Inverse Pareto Distribution, with Applications

Authors: Abdel Hadi Ebraheim

Abstract:

This paper introduces a new extension of the inverse Pareto distribution within the Marshall-Olkin (1997) family of distributions. The model is capable of describing various shapes of aging and failure data. The statistical properties of the new model are discussed, and several methods are used to estimate the parameters involved. Explicit expressions are derived for the different types of moments that are of value in reliability analysis. In addition, the order statistics of samples from the newly proposed model are studied. Finally, the usefulness of the new model for fitting reliability data is illustrated using two real data sets together with a simulation study.
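A sketch of the Marshall-Olkin construction applied to one common inverse-Pareto form; the exact parameterization used in the paper may differ, so the CDF below is an assumption for illustration only:

```python
def inv_pareto_cdf(x, theta):
    """One common inverse-Pareto form, F(x) = (x / (1 + x))**theta;
    the paper's parameterization may differ."""
    return (x / (1.0 + x)) ** theta

def mo_cdf(x, theta, alpha):
    """Marshall-Olkin (1997) extension of a baseline CDF F:
    G(x) = F(x) / (1 - (1 - alpha) * (1 - F(x))), alpha > 0.
    alpha = 1 recovers the baseline distribution."""
    F = inv_pareto_cdf(x, theta)
    return F / (1.0 - (1.0 - alpha) * (1.0 - F))
```

The extra tilt parameter alpha is what gives the extended family its additional flexibility in hazard-rate shapes while keeping the baseline as a special case.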

Keywords: pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation

Procedia PDF Downloads 82
17518 Development of Advanced Linear Calibration Technique for Air Flow Sensing by Using CTA-Based Hot Wire Anemometry

Authors: Ming-Jong Tsai, T. M. Wu, R. C. Chu

Abstract:

The purpose of this study is to develop an advanced linear calibration technique for air flow sensing using CTA-based hot-wire anemometry. The system comprises a host PC with a human-machine interface, a wind tunnel, a wind speed controller, an automatic data acquisition module, and a nonlinear calibration model. To reduce the fitting error of a single fitting polynomial, this study proposes a Multiple three-order Polynomial Fitting Method (MPFM) for fitting the nonlinear output of a CTA-based hot-wire anemometer. The anemometer, with built-in fitting parameters, is installed in the wind tunnel, and the wind speed is controlled by the PC-based controller. The hot-wire anemometer's thermistor resistance change is converted into a voltage signal or temperature difference and sent to the PC through a DAQ card. After measurement of the original signal is complete, the multiple-polynomial coefficients are calculated automatically and then loaded into the microprocessor in the hot-wire anemometer. Finally, the corrected anemometer is verified for linearity, repeatability, and error percentage, and the system outputs quality control reports.
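The per-segment cubic fitting at the heart of MPFM can be sketched as a least-squares cubic solved via the normal equations; MPFM would fit one such cubic per velocity segment, and the sample data below are illustrative:

```python
def solve_small(A, b):
    """Gauss-Jordan solve of a small dense linear system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def cubic_fit(xs, ys):
    """Least-squares cubic via the 4x4 normal equations;
    returns [a0, a1, a2, a3] for a0 + a1*x + a2*x^2 + a3*x^3."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(4)] for i in range(4)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(4)]
    return solve_small(A, b)

# MPFM idea: split the velocity range into segments, fit one cubic each.
# Here a single segment with an exactly cubic response is fitted.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
coeffs = cubic_fit(xs, [x ** 3 for x in xs])
```

Using several short cubics instead of one high-order polynomial keeps each fit well conditioned over its own segment, which is the motivation stated for MPFM.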

Keywords: flow rate sensing, hot wire, constant temperature anemometry (CTA), linear calibration, multiple three-order polynomial fitting method (MPFM), temperature compensation

Procedia PDF Downloads 416
17517 Study of Sub-Surface Flow in an Unconfined Carbonate Aquifer in a Tropical Karst Area in Indonesia: A Modeling Approach Using Finite Difference Groundwater Model

Authors: Dua K. S. Y. Klaas, Monzur A. Imteaz, Ika Sudiayem, Elkan M. E. Klaas, Eldav C. M. Klaas

Abstract:

Due to its porous nature, karst terrain, geomorphologically developed from dissolved formations, is vulnerable to water shortage and deteriorating water quality. A solid comprehension of sub-surface flow in karst landscapes is therefore essential to assess the long-term availability of groundwater resources. In this paper, a single-continuum model using the finite difference model MODFLOW was constructed to represent an unconfined carbonate aquifer on the tropical karst island of Rote in Indonesia. The model, spatially discretized into 20 x 20 m grid cells, was calibrated and validated using available groundwater levels and atmospheric variables. In the calibration and validation steps, Parameter Estimation (PEST) and geostatistical pilot-point methods were employed to estimate hydraulic conductivity and specific yield values. The results show that the model is able to represent the sub-surface flow, as indicated by good model performance in both the calibration and validation steps. The final model can be used as a robust representation of the system for future studies of climate and land use scenarios.
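A toy single-layer stand-in for the finite-difference scheme underlying MODFLOW: Jacobi iteration for steady-state head on a uniform grid with homogeneous conductivity and fixed (Dirichlet) boundary heads. The grid size and boundary values are illustrative, not from the Rote model:

```python
def steady_head(head, iters=2000):
    """Relax interior cells toward the average of their four neighbors,
    the steady-state finite-difference condition for homogeneous,
    isotropic conductivity on a uniform grid; boundary cells stay fixed."""
    rows, cols = len(head), len(head[0])
    for _ in range(iters):
        new = [row[:] for row in head]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (head[i - 1][j] + head[i + 1][j] +
                                    head[i][j - 1] + head[i][j + 1])
        head = new
    return head

# illustrative 5x5 grid: head 10 m on the left boundary, 0 m on the right
grid = [[10.0] + [5.0] * 3 + [0.0] for _ in range(5)]
h = steady_head(grid)
```

MODFLOW solves the same balance with cell-by-cell conductances, storage terms, and implicit solvers; this sketch only shows the neighbor-averaging structure of the discretization.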

Keywords: carbonate aquifer, karst, sub-surface flow, groundwater model

Procedia PDF Downloads 148
17516 Social Media Retailing in the Creator Economy

Authors: Julianne Cai, Weili Xue, Yibin Wu

Abstract:

Social media retailing (SMR) platforms have become popular nowadays. SMR is characterized by a creative combination of content creation and product selling, which differs from traditional e-tailing (TE) with product selling alone. Motivated by real-world practices such as the social media platforms TikTok and douyin.com, we study whether the SMR model performs better than the TE model in a monopoly setting. By building a stylized economic model, we find that the SMR model does not always outperform the TE model. Specifically, when the SMR platform collects less commission from the seller than the TE platform, the seller, consumers, and social welfare all benefit more from the SMR model. In contrast, the platform benefits more from the SMR model if and only if the creator's social influence is high enough or the cost of content creation is small enough. Regarding the incentive structure of the content rewards in the SMR model, we find that a strong incentive mechanism (e.g., a quadratic form) is more powerful than a weak one (e.g., a linear form). The former encourages the creator to choose a much higher quality level of content creation while allowing the platform, consumers, and social welfare to become better off. Counterintuitively, providing more generous content rewards is not always helpful for the creator (seller) and may reduce her profit. Our findings can guide platforms in designing incentive mechanisms that boost content creation and retailing in the SMR model, and help influencers efficiently create content, engage their followers (fans), and price the products they sell on the SMR platform.

Keywords: content creation, creator economy, incentive strategy, platform retailing

Procedia PDF Downloads 114
17515 The Effects of Placement and Cross-Section Shape of Shear Walls in Multi-Story RC Buildings with Plan Irregularity on Their Seismic Behavior by Using Nonlinear Time History Analyses

Authors: Mohammad Aminnia, Mahmood Hosseini

Abstract:

Environmental and functional conditions sometimes necessitate the architectural plan of a building to be asymmetric, and this results in an asymmetric structure. In such cases, finding an optimal pattern for locating the components of the lateral load-bearing system, including shear walls, in the building's plan is desired. In the case of shear walls, in addition to location, the shape of the wall cross-section is also an effective factor. Various types of shear walls and their proper layout can be effective in achieving a better stiffness distribution and a more appropriate seismic response of the building. Several studies have been conducted on the analysis and design of shear walls; however, few studies have addressed the choice of location and form of shear walls in multi-story buildings, especially those with irregular plans. In this study, an attempt has been made to obtain the most reliable seismic behavior of multi-story reinforced concrete vertically chamfered buildings by using more appropriate shear wall forms and arrangements in 7-, 10-, 12-, and 15-story buildings. The considered forms and arrangements include common rectangular walls and walls with L-, T-, U- and Z-shaped cross-sections, located either at the core or in the outer frames of the building structure. Comparison of the seismic behaviors of the buildings, including maximum roof displacement and, particularly, the formation of plastic hinges and their distribution in the buildings' structures, has been performed based on the results of a series of nonlinear time history analyses using a set of selected earthquake records. Results show that shear walls with a U-shaped cross-section, placed as the building's central core, and also walls with a Z-shaped cross-section, placed at the corners, give the building the most reliable seismic behavior.

Keywords: vertically chamfered buildings, non-linear time history analyses, l-, t-, u- and z-shaped plan walls

Procedia PDF Downloads 258
17514 Moving beyond the Social Model of Disability by Engaging in Anti-Oppressive Social Work Practice

Authors: Irene Carter, Roy Hanes, Judy MacDonald

Abstract:

Considering that disability is universal and people with disabilities are part of all societies; that there is a connection between the disabled individual and the societal; and that it is society and social arrangements that disable people with impairments, contemporary disability discourse emphasizes the social model of disability to counter medical and rehabilitative models of disability. However, the social model does not go far enough in addressing the issues of oppression and inclusion. The authors argue that the social model does not specifically or adequately address the oppression of persons with disabilities, which is a central component of progressive social work practice with people with disabilities. Nor does it go far enough in deconstructing disability and offering social workers, as well as people with disabilities, a way forward in terms of practice anchored in individual, familial, and societal change. The social model of disability is therefore expanded by incorporating principles of anti-oppressive social work practice. Although the contextual analysis of the social model of disability is an important component, there remains a need for social workers to provide service to individuals and their families, which will be illustrated through anti-oppressive practice (AOP). By applying an anti-oppressive model of practice to the above definitions, the authors not only deconstruct disability paradigms but illustrate how AOP offers a framework for social workers to engage with people with disabilities at the individual, familial, and community levels of practice, promoting an emancipatory focus in working with people with disabilities. An anti-oppression social work model of disability connects the day-to-day hardships of people with disabilities to the direct consequences of oppression in the form of ableism. AOP theory finds many of its basic concepts within social-oppression theory and the social model of disability. It is often the case that practitioners, including social workers and psychologists, define people with disabilities as having or being a problem, with the focus placed upon adjustment and coping. A case example is used to illustrate how an AOP paradigm offers social work a more comprehensive and critical analysis and practice model for working with and for people with disabilities than the traditional medical, rehabilitative, and social model approaches.

Keywords: anti-oppressive practice, disability, people with disabilities, social model of disability

Procedia PDF Downloads 1083
17513 Evolving Software Assessment and Certification Models Using Ant Colony Optimization Algorithm

Authors: Saad M. Darwish

Abstract:

Recently, software quality has come to be seen as an important subject, given the enormous growth of agencies involved in software industries. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainty. Software certification extends quality assurance in the sense that quality must be measured prior to the granting of certification. This research contributes to solving the problem of software assessment by proposing a model for the assessment and certification of software products that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the proposed model is to improve the compactness and interpretability of the model's fuzzy rules by employing an ant colony optimization (ACO) algorithm, which searches for good rule descriptions by building compound rules from rules initially expressed as traditional single rules. The model has been tested on a case study, and the results have demonstrated its feasibility and practicability in a real environment.
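A miniature sketch of the kind of fuzzy inference step such a model relies on: triangular memberships over two quality scores, a small rule base, and a centroid of singleton outputs. The rule set, breakpoints, and score scales are invented for illustration, and the ACO rule-compaction step is omitted:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def certify(process_score, product_score):
    """Three illustrative rules combined by weighted average of
    singleton outputs (1.0 = certify, 0.5 = borderline, 0.0 = reject)."""
    low_p = tri(process_score, -0.5, 0.0, 0.6)
    high_p = tri(process_score, 0.4, 1.0, 1.5)
    low_a = tri(product_score, -0.5, 0.0, 0.6)
    high_a = tri(product_score, 0.4, 1.0, 1.5)
    rules = [(min(high_p, high_a), 1.0),   # both strategies score high
             (min(high_p, low_a), 0.5),    # mixed evidence
             (max(low_p, low_a), 0.0)]     # either strategy scores low
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

In the paper's setting, ACO would then search for compound rules that reproduce the behavior of many such single rules with a smaller, more interpretable rule base.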

Keywords: software quality, quality assurance, software certification model, software assessment

Procedia PDF Downloads 524
17512 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network

Authors: Hui Wei, Zheng Dong

Abstract:

Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model.

Keywords: biological model, feature extraction, multi-layer neural network, object recognition

Procedia PDF Downloads 542
17511 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (the 0.5-2.5 minute SPECT image minus the 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5-10 minute SPECT image minus the liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, apparently higher accumulation in the myocardium due to overlap of the liver was un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
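The subtraction step itself can be sketched as a clamped frame difference; the 2x2 count values and unit scale factor below are illustrative, not the clinical protocol, and real processing operates on reconstructed 3-D volumes:

```python
def time_subtract(early, late, scale=1.0):
    """Subtract a liver-dominated early frame from a late frame,
    clamping at zero since counts cannot be negative."""
    return [[max(l - scale * e, 0.0) for e, l in zip(er, lr)]
            for er, lr in zip(early, late)]

early = [[50.0, 0.0], [5.0, 0.0]]     # liver dominates the upper-left cell
late  = [[60.0, 30.0], [25.0, 20.0]]  # liver plus myocardial uptake
cleaned = time_subtract(early, late)
```

After subtraction the liver-dominated cell is suppressed while the myocardial counts are preserved, which is what improves visualization of the inferior wall.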

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 335
17510 Design and Control of a Knee Rehabilitation Device Using an MR-Fluid Brake

Authors: Mina Beheshti, Vida Shams, Mojtaba Esfandiari, Farzaneh Abdollahi, Abdolreza Ohadi

Abstract:

Most people who survive a stroke need rehabilitation tools to regain their mobility. The core function of these devices is a brake actuator. The goal of this study is to design and control a magnetorheological brake which can be used as a rehabilitation tool. The fluid used in this brake, magnetorheological (MR) fluid, has properties that change with the applied magnetic field, and this feature allows the braking behavior to be controlled. In this research, different MR brake designs are first introduced; for each design, the dimensions of the brake are determined based on the torque required for foot movement. To calculate the brake dimensions, it is assumed that the shear stress distribution in the fluid is uniform and the fluid is in its saturated state. After designing the rehabilitation brake, a mathematical model of the movement of a healthy person is extracted. Due to the nonlinear nature of the system and its variability, various adaptive, neural network, and robust controllers have been implemented to estimate the parameters and control the system. After calculating torque and control current, the best type of controller in terms of error and control current has been selected. Finally, this controller is implemented on experimental data of the patient's movements, and the control current is calculated to achieve the desired torque and motion.
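To make the sizing step concrete, here is a rough sketch of how the coil current for a desired braking torque might be computed for a disc-type MR brake under the abstract's assumptions of uniform shear stress and a saturated fluid. The geometry, the linear yield-stress law `tau_y = k_tau * I`, and all numeric defaults are hypothetical, not the paper's values.

```python
import math

def required_current(target_torque, r_o=0.05, r_i=0.02, k_tau=40e3, n_faces=2):
    """Invert a simplified disc-type MR brake model to find the coil
    current [A] producing a desired braking torque [N*m].

    With uniform shear stress across the gap, the field-dependent torque
    on n_faces annular faces is
        T = n_faces * (2*pi/3) * tau_y * (r_o**3 - r_i**3),
    and we assume a hypothetical linear law tau_y = k_tau * I.
    r_o, r_i: outer/inner disc radii [m]; k_tau: illustrative Pa/A gain.
    """
    geom = n_faces * (2.0 * math.pi / 3.0) * (r_o**3 - r_i**3)
    tau_y = target_torque / geom   # yield stress needed [Pa]
    return tau_y / k_tau           # coil current [A]
```

In a controller, this inverse model would convert the torque demanded by the adaptive or robust control law into a current command for the brake coil.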

Keywords: rehabilitation, magnetorheological fluid, knee, brake, adaptive control, robust control, neural network control, torque control

Procedia PDF Downloads 151
17509 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred more frequently due to climate change, and the resulting damage is growing. Therefore, more accurate prediction of rainfall and runoff is needed. However, gauge rainfall has limited spatial accuracy. Radar rainfall explains the spatial variability of rainfall better than gauge rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure to overcome this uncertainty relative to gauge rainfall. The simulated ensemble was used as the input data of rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, different models can give different results because of the uncertainty involved in the models themselves. Therefore, we used two models, the lumped SSARR model and the distributed Vflo model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han river basin, and we obtained one integrated runoff hydrograph, an optimum runoff hydrograph, using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using the ensemble scenario and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
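The blending idea above can be sketched in a few lines. The following shows a simple model average (SMA) and one plausible reading of MSE-based blending, where each model's hydrograph is weighted inversely to its mean squared error against observations; the paper's exact MMSE formulation may differ.

```python
import numpy as np

def blend_hydrographs(q_models, q_obs=None, method="SMA"):
    """Blend runoff hydrographs from several rainfall-runoff models.

    q_models: 2-D array (n_models, n_timesteps) of simulated flows,
    e.g. one row each for the lumped SSARR and distributed Vflo runs.
    'SMA' is a plain average; 'MSE' weights each model inversely to its
    mean squared error against an observed hydrograph q_obs.
    """
    q_models = np.asarray(q_models, dtype=float)
    if method == "SMA":
        return q_models.mean(axis=0)
    if method == "MSE":
        mse = ((q_models - np.asarray(q_obs)) ** 2).mean(axis=1)  # per-model MSE
        w = (1.0 / mse) / (1.0 / mse).sum()                       # inverse-MSE weights
        return w @ q_models                                       # weighted blend
    raise ValueError(method)
```

With inverse-MSE weights, the model that historically tracked observations better dominates the blended (optimum) hydrograph.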

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 280
17508 Existence Solutions for Three Point Boundary Value Problem for Differential Equations

Authors: Mohamed Houas, Maamar Benbachir

Abstract:

In this paper, under weak assumptions, we study the existence and uniqueness of solutions for a nonlinear fractional boundary value problem. New existence and uniqueness results are established using the Banach contraction principle. Other existence results are obtained using Schaefer's and Krasnoselskii's fixed point theorems. At the end, some illustrative examples are presented.

Keywords: caputo derivative, boundary value problem, fixed point theorem, local conditions

Procedia PDF Downloads 428
17507 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic studies of risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event when censored cases exist, whereas the logistic regression model applies when the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data from the SPSS exercise dataset on breast cancer, with a sample size of 1121 women; the main objective is to show the difference in application between the Cox and logistic regression models based on factors that cause women to die of breast cancer. Some analysis (e.g., on lymph node status) was done manually, and SPSS software was used to analyze the rest of the data. This study found that there is a difference in application between the two models: the Cox regression model is used when one wishes to analyze data that also include follow-up time, whereas the logistic regression model analyzes data without follow-up time. Their measures of association also differ: the hazard ratio for the Cox model and the odds ratio for the logistic model. A similarity between the two models is that both can be applied to predict the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories.
In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data and can be applied in many other studies, but the Cox regression model is the more recommended.
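The rate-versus-proportion distinction can be illustrated with a toy calculation: the odds ratio (the logistic-style measure) uses counts alone, while a rate ratio (the kind of quantity a Cox model targets) divides events by follow-up time. All numbers below are illustrative, not from the breast cancer dataset.

```python
def odds_ratio(d_exp, n_exp, d_unexp, n_unexp):
    """Odds ratio from counts alone (logistic-regression-style measure):
    deaths d out of n subjects in each group; follow-up time is ignored."""
    return (d_exp / (n_exp - d_exp)) / (d_unexp / (n_unexp - d_unexp))

def rate_ratio(d_exp, pt_exp, d_unexp, pt_unexp):
    """Incidence rate ratio (Cox-style measure): deaths per unit of
    person-time, so differing follow-up between groups changes the answer."""
    return (d_exp / pt_exp) / (d_unexp / pt_unexp)
```

With 30 deaths among 100 exposed women and 10 among 100 unexposed, the odds ratio is about 3.9; but if the exposed group accumulated only 300 person-years versus 500 for the unexposed, the rate ratio is 5.0, showing how follow-up time shifts the measure of association.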

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 455
17506 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares the semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures. These models include those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. They combine a wake oscillator model resembling the Van der Pol oscillator with a single degree of freedom (SDOF) oscillation model. In order to use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; the modal equation of the chimney constitutes the SDOF model. The equations of the wake oscillator model and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the predicted response agrees reasonably well with experimental data. The iterative solution is carried out with the ode solver of MATLAB. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, top diameter of 20 m, and thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and by the Skop and Griffin model are nearly the same, while the Farshidian and Dolatabadi model predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, gives a lower response than the nonlinear models. Further, for large damping, the Eurocode prediction compares relatively well with those of the nonlinear models.
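The coupled structure-wake system has the general form used by Facchinetti-type models: a damped SDOF modal equation forced by the wake variable, and a Van der Pol wake equation forced by the structural acceleration. The study uses MATLAB's ode solver; the sketch below uses SciPy instead, and all parameter values (damping, frequencies, coupling gains) are illustrative defaults, not the calibrated values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def facchinetti_rhs(t, y, zeta=0.01, w_s=1.0, w_f=1.0, eps=0.3, A=12.0, M=0.05):
    """Facchinetti-style coupling of a 1-DOF structure (first chimney mode)
    with a Van der Pol wake oscillator.

    y = [x, dx, q, dq]: x is the modal displacement, q the wake variable.
    zeta: structural damping ratio; w_s, w_f: structural and shedding
    frequencies (lock-in when equal); eps, A, M: Van der Pol and coupling
    parameters (hypothetical values).
    """
    x, dx, q, dq = y
    ddx = -2.0 * zeta * w_s * dx - w_s**2 * x + M * q            # structure
    ddq = -eps * w_f * (q**2 - 1.0) * dq - w_f**2 * q + A * ddx  # wake (Van der Pol)
    return [dx, ddx, dq, ddq]

# Integrate from rest with a small wake perturbation and read off the
# steady-state amplitude after the transient has decayed.
sol = solve_ivp(facchinetti_rhs, (0.0, 100.0), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
x_amp = np.abs(sol.y[0, sol.t > 50.0]).max()   # steady-state modal amplitude
```

The wake variable grows onto the Van der Pol limit cycle and drives the structure into a self-limited oscillation, the aero-elastic effect that the linear models omit.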

Keywords: chimney, deterministic model, van der pol, vortex-induced vibration

Procedia PDF Downloads 221
17505 A ‘Just and Loving Gaze’ on Sexuality and Attachment: Why I Think (Not) All Homosexual Relationships are Borne Out of an Abandonment and Attachment Crisis

Authors: Victor Counted

Abstract:

John Bowlby's attachment theory is a framework often used by researchers to understand human relationship experiences with close 'others'. In this short brief on sexuality, I discuss homosexual relationships from three attachment positions, or, if you like, conditions, in relation to the compensation and correspondence hypotheses used to understand an individual's attachment orientation toward an attachment figure who is seen as a secure base, a safe haven, and a target for proximity seeking. Drawing from the springs of virtue and hope in light of Murdock's 'just and loving gaze' model, I allow myself to see the homosexual cases cited in positive terms, relating to the situations and experiences of our homosexual 'others' from the guiding herald of Moltmann's theology of hope. This approach allows me to invite readers to engage sexuality with a tolerant tendency of hope in our thinking toward the actions and conditions of our dynamic world, which is always plunging toward the future.

Keywords: attachment, wellbeing, sexuality, homosexuality, abandonment, tolerance of hope, wise fool

Procedia PDF Downloads 412
17504 Load Balancing Technique for Energy - Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This motivates energy consumption and carbon emission as new metrics for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead and response time and improving performance, but none of them have considered energy consumption and carbon emission. Therefore, in this paper we introduce a load balancing technique for energy efficiency. This technique can improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.
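One simple way to realize the balancing step described above is a greedy placement that keeps every node's projected utilization low, so that no node overloads and lightly used nodes can idle to save energy. The sketch below is illustrative only; the abstract does not fix a specific algorithm, and the node names and cost rule are assumptions.

```python
def assign_tasks(tasks, nodes):
    """Greedy energy-aware load balancing sketch.

    tasks: list of CPU demands; nodes: dict mapping node name -> capacity.
    Each task is placed on the node whose utilization after placement is
    lowest, which evens out the workload across all nodes.
    Returns (placement: task index -> node, load: node -> assigned demand).
    """
    load = {n: 0.0 for n in nodes}
    placement = {}
    for i, demand in enumerate(tasks):
        # Choose the node minimizing post-placement utilization.
        best = min(nodes, key=lambda n: (load[n] + demand) / nodes[n])
        load[best] += demand
        placement[i] = best
    return placement, load
```

Because the cost is utilization rather than raw load, heterogeneous nodes with different capacities are balanced proportionally, which is one way to keep total energy use low.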

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 449
17503 Strengthening Evaluation of Steel Girder Bridge under Load Rating Analysis: Case Study

Authors: Qudama Albu-Jasim, Majdi Kanaan

Abstract:

A case study of the load rating and strengthening evaluation of a six-span steel girder bridge in the city of Colton, California, is investigated. To simulate the load rating and strengthening assessment for the Colton Overhead bridge, a three-dimensional finite element model is built in the CSiBridge program. The model considers the nonlinear behavior of critical bridge components to determine the feasibility and strengthening capacity under load rating analysis. The bridge was evaluated according to the Caltrans Bridge Load Rating Manual, 1st edition, rating the superstructure using the Load and Resistance Factor Rating (LRFR) method. The analysis was based on load rating to determine the largest loads that can be safely placed on the existing steel I-girder members and permitted to pass over the bridge. Through extensive numerical simulations, the bridge is identified to be deficient in flexural and shear capacities, and therefore strengthening is needed to reduce the risk. An in-depth parametric study is conducted to evaluate the sensitivity of the bridge's load rating response to variations in its structural parameters. The parametric analysis showed that uncertainties associated with the steel's yield strength, the superstructure's weight, and the diaphragm configurations should be considered during the fragility analysis of the bridge system.
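The LRFR check underlying such an evaluation reduces, for each girder section, to a rating factor of the standard form RF = (C - γ_DC·DC - γ_DW·DW) / (γ_LL·(LL + IM)). The sketch below uses typical Strength I inventory load factors for illustration; the exact factors and capacity definitions from the Caltrans manual should be used in practice.

```python
def rating_factor(capacity, dc, dw, ll_im,
                  gamma_dc=1.25, gamma_dw=1.5, gamma_ll=1.75):
    """LRFR rating factor for one girder section (sketch).

    capacity: factored member resistance C (e.g., flexural or shear);
    dc, dw: structural and wearing-surface dead-load effects;
    ll_im: live-load effect including dynamic load allowance.
    Default load factors are typical inventory-level values, shown for
    illustration only. RF >= 1.0 means the member can safely carry the
    rating load; RF < 1.0 flags a deficiency needing strengthening.
    """
    return (capacity - gamma_dc * dc - gamma_dw * dw) / (gamma_ll * ll_im)
```

Running this check in both flexure and shear across all girders is how deficient members, like those identified in the study, are flagged for strengthening.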

Keywords: load rating, CSIBridge, strengthening, uncertainties, case study

Procedia PDF Downloads 211
17502 Convertible Lease, Risky Debt and Financial Structure with Growth Option

Authors: Ons Triki, Fathi Abid

Abstract:

The basic objective of this paper is twofold. The first aim is to design a model for a contingent convertible lease contract that can ensure the financial stability of a company and recover the losses of the parties to the lease in the event of default. The second is to compare the convertible lease contract with other financing policies with respect to the inefficiencies resulting from the debt-overhang problem and asset substitution. From this perspective, this paper highlights the interaction between investment and financing policies in a dynamic model with existing assets and a growth option, where the investment cost is financed by a contingent convertible lease and equity. We explore the impact of the contingent convertible lease on the capital structure and check the reliability and effectiveness of the convertible lease contract as a means of financing. Findings show that a convertible lease contract with a sufficiently high conversion ratio entails less severe inefficiencies from risk-shifting and debt overhang than risky debt and pure-equity financing. The underinvestment problem pointed out by Mauer and Ott (2000) and the overinvestment problem mentioned by Hackbarth and Mauer (2012) may be reduced under contingent convertible lease financing. Our findings predict that the firm value under contingent convertible lease financing increases with asset volatility instead of decreasing with business risk. The study reveals that convertible lease contracts can be a reliable solution for protecting the lessee and quickly compensating the counterparties of the lease upon default.

Keywords: contingent convertible lease, growth option, debt overhang, risk-shifting, capital structure

Procedia PDF Downloads 72
17501 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt and thus may ultimately cause a loss of property. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economy, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% in the past decades. However, potential risks exist behind the appearance of prosperity, and among them, the credit system is the most significant. Because mortgages have long terms and large balances, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data are cleaned, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data; the KS is 33, indicating the model remains stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. The data from 2005 to 2010 are used for model development and validation.
Compared with the base model (without macroeconomic variables), the KS increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
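The KS values quoted above are the standard scorecard separation measure: the maximum gap between the cumulative score distributions of non-delinquent and delinquent accounts, reported on a 0-100 scale. A minimal pure-Python computation (the function name and toy scores are illustrative):

```python
def ks_statistic(scores_good, scores_bad):
    """Two-sample KS statistic for a credit scorecard.

    scores_good: model scores of non-delinquent accounts;
    scores_bad: scores of delinquent accounts. Returns the maximum gap
    between the two empirical CDFs, scaled to 0-100 as in scorecard
    practice; higher KS means better separation power.
    """
    thresholds = sorted(set(scores_good) | set(scores_bad))
    ks = 0.0
    for t in thresholds:
        cdf_good = sum(s <= t for s in scores_good) / len(scores_good)
        cdf_bad = sum(s <= t for s in scores_bad) / len(scores_bad)
        ks = max(ks, abs(cdf_good - cdf_bad))
    return 100.0 * ks
```

Comparing this statistic across the development, in-time, and out-of-time samples, as the project does, checks that the separation power holds up outside the training window.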

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 228