Search results for: linear mixed-effects model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18627

17397 Discrete State Prediction Algorithm Design with Self Performance Enhancement Capacity

Authors: Smail Tigani, Mohamed Ouzzif

Abstract:

This work presents a discrete quantitative state prediction algorithm with intelligent behavior that makes it able to self-improve some performance aspects. The specificity of this algorithm is its capacity for self-rectification of the prediction strategy before the final decision. The auto-rectification mechanism is based on two parallel mathematical models. On the one hand, the algorithm predicts the next state based on an event transition matrix updated after each observation. On the other hand, the algorithm extracts the trend of its residues with a linear regression over the historical residue data-points in order to rectify the first decision if needed. For a normal distribution, the interaction between the two models allows the algorithm to self-optimize its performance and thus make better predictions. A designed key performance indicator, computed during a Monte Carlo simulation, shows the advantages of the proposed approach compared with the traditional one.
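The abstract does not give implementation details, so the following is only a minimal sketch of the two interacting models it describes, assuming integer-coded states, a frequency-based transition matrix, and a least-squares trend over stored residues (all names and data are hypothetical):

```python
import numpy as np

def predict_next_state(history, n_states, residues):
    """Toy two-model predictor: transition-matrix forecast plus a
    linear-regression correction on past residues (illustrative sketch only)."""
    # Model 1: empirical transition matrix updated from the observed history.
    counts = np.ones((n_states, n_states))          # Laplace smoothing
    for a, b in zip(history[:-1], history[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    current = history[-1]
    first_guess = float(np.dot(P[current], np.arange(n_states)))  # expected next state

    # Model 2: linear regression over historical residues (observed - predicted),
    # used to rectify the first decision when a systematic trend exists.
    if len(residues) >= 2:
        t = np.arange(len(residues))
        slope, intercept = np.polyfit(t, residues, 1)
        trend = slope * len(residues) + intercept    # extrapolated residue
        first_guess += trend
    return int(np.clip(round(first_guess), 0, n_states - 1))

# Example usage with a synthetic 3-state sequence and hypothetical residues.
rng = np.random.default_rng(0)
hist = list(rng.integers(0, 3, size=50))
res = [0.2, 0.1, 0.15, 0.05]
print(predict_next_state(hist, n_states=3, residues=res))
```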

Keywords: discrete state, Markov Chains, linear regression, auto-adaptive systems, decision making, Monte Carlo Simulation

Procedia PDF Downloads 492
17396 Reducing Hazardous Materials Releases from Railroad Freights through Dynamic Trip Plan Policy

Authors: Omar A. Abuobidalla, Mingyuan Chen, Satyaveer S. Chauhan

Abstract:

Railroad transportation of hazardous materials freights is important to the North American economy and supports the national supply chain. This paper introduces various extensions of the dynamic hazardous materials trip plan problem. The problem captures most of the operational features of real-world railroad transportation systems that dynamically initiate a set of blocks and assign each shipment to a single block path or multiple block paths. The dynamic hazardous materials trip plan policies have the distinguishing feature of integrating the blocking plan and the block activation decisions. We also present a non-linear mixed integer programming formulation for each variant and provide managerial insights based on a hypothetical railroad network. The computational results reveal that the dynamic car scheduling policies are not only able to take advantage of the capacity of the network but are also capable of diminishing the population and environmental risks by rerouting the active blocks along the least risky train services without sacrificing the cost advantage of the railroad. The empirical results of this research illustrate that the issue of integrating the blocking plan and the train makeup of hazardous materials freights must receive closer attention.

Keywords: dynamic car scheduling, planning and scheduling hazardous materials freights, airborne hazardous materials, Gaussian plume model, integrated blocking and routing plans, box model

Procedia PDF Downloads 203
17395 Speech Recognition Performance by Adults: A Proposal for a Battery for Marathi

Authors: S. B. Rathna Kumar, Pranjali A Ujwane, Panchanan Mohanty

Abstract:

The present study aimed to develop a battery for assessing speech recognition performance by adults in Marathi. A total of four word lists were developed by considering word frequency, word familiarity, words in common use, and phonemic balance. Each word list consists of 25 words (15 monosyllabic words in CVC structure and 10 monosyllabic words in CVCV structure). Equivalence analysis and performance-intensity function testing were carried out using the four word lists on a total of 150 native speakers of Marathi belonging to different regions of Maharashtra (Vidarbha, Marathwada, Khandesh and Northern Maharashtra, Pune, and Konkan). The subjects were further equally divided into five groups based on the above-mentioned regions. It was found that there was no significant difference (p > 0.05) in speech recognition performance between groups for each word list or between word lists for each group. Hence, the four word lists developed were equally difficult for all the groups and can be used interchangeably. The performance-intensity (PI) function curve showed a semi-linear function, and the mean slopes of the linear portions of the curve indicated an average increase in word recognition score of 4.64%, 4.73%, 4.68%, and 4.85% per dB for list 1, list 2, list 3, and list 4, respectively. Although there are no data available on speech recognition tests for adults in Marathi, most of the findings of the study are in line with the findings of research reports on other languages. The four word lists, thus developed, were found to have sufficient reliability and validity in assessing speech recognition performance by adults in Marathi.
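To illustrate how a slope such as the 4.64-4.85% per dB reported above can be extracted, a small sketch fitting the linear portion of a performance-intensity function to hypothetical data points:

```python
import numpy as np

# Hypothetical presentation levels (dB HL) and word recognition scores (%)
# lying on the roughly linear portion of the PI curve.
levels = np.array([20, 25, 30, 35, 40])
scores = np.array([22, 45, 68, 88, 96])

# Least-squares fit; the slope is the increase in recognition score per dB.
slope, intercept = np.polyfit(levels, scores, 1)
print(f"slope = {slope:.2f} % per dB")   # ~3.8 % per dB for this toy data
```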

Keywords: speech recognition performance, phonemic balance, equivalence analysis, performance-intensity function testing, reliability, validity

Procedia PDF Downloads 353
17394 Development of an Image-Based Biomechanical Model for Assessment of Hip Fracture Risk

Authors: Masoud Nasiri Sarvi, Yunhua Luo

Abstract:

Low-trauma hip fracture, usually caused by a fall from standing height, has become a main source of morbidity and mortality for the elderly. Factors affecting hip fracture include sex, race, age, body weight, height, body mass distribution, etc., and thus hip fracture risk in a fall differs widely from subject to subject. It is therefore necessary to develop a subject-specific biomechanical model to predict hip fracture risk. The objective of this study is to develop a two-level, image-based, subject-specific biomechanical model consisting of a whole-body dynamics model and a proximal-femur finite element (FE) model for more accurately assessing the risk of hip fracture in lateral falls. The required information for constructing the model is extracted from a whole-body and a hip DXA (Dual Energy X-ray Absorptiometry) image of the subject. The proposed model treats all parameters subject-specifically, which will provide a fast, accurate, and inexpensive method for predicting hip fracture risk.

Keywords: bone mineral density, hip fracture risk, impact force, sideways falls

Procedia PDF Downloads 533
17393 Linear Frequency Modulation-Frequency Shift Keying Radar with Compressive Sensing

Authors: Ho Jeong Jin, Chang Won Seo, Choon Sik Cho, Bong Yong Choi, Kwang Kyun Na, Sang Rok Lee

Abstract:

In this paper, a radar signal processing technique using LFM-FSK (Linear Frequency Modulation-Frequency Shift Keying) is proposed for reducing the false alarm rate, based on compressive sensing. The LFM-FSK method combines an FMCW (Frequency Modulation Continuous Wave) signal with FSK (Frequency Shift Keying). This offers the advantage of suppressing the ghost phenomenon without a complicated CFAR (Constant False Alarm Rate) algorithm. Moreover, a parametric sparse algorithm based on compressive sensing, which efficiently reconstructs signals from incomplete data samples, is also integrated, reducing the ADC burden in the radar receiver. A 24 GHz FMCW signal with FSK-modulated data is applied and tested in a real environment to verify the proposed algorithm together with compressive sensing.
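As a rough illustration only (the waveform parameters are invented, not those of the 24 GHz system), a baseband sketch of an LFM-FSK signal built from two interleaved stepped-frequency sequences, the second offset by a fixed frequency shift:

```python
import numpy as np

fs = 1.0e6            # sample rate (Hz), hypothetical
n_steps = 64          # frequency steps per sequence
step_len = 64         # samples per step
bandwidth = 150e3     # total linear sweep (Hz)
f_shift = 1.0e3       # FSK offset between the two interleaved sequences (Hz)

t = np.arange(step_len) / fs
freqs_a = np.linspace(0.0, bandwidth, n_steps)    # sequence A: linear frequency ramp
freqs_b = freqs_a + f_shift                       # sequence B: same ramp, FSK-shifted

chips = []
for fa, fb in zip(freqs_a, freqs_b):              # interleave A and B chips
    chips.append(np.exp(2j * np.pi * fa * t))
    chips.append(np.exp(2j * np.pi * fb * t))
signal = np.concatenate(chips)
print(signal.shape)   # (n_steps * 2 * step_len,) complex baseband samples
```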

Keywords: compressive sensing, LFM-FSK radar, radar signal processing, sparse algorithm

Procedia PDF Downloads 470
17392 Physical Education Teacher's Interpretation toward Teaching Games for Understanding Model

Authors: Soni Nopembri

Abstract:

The objective of this research is to evaluate the implementation of the Teaching Games for Understanding (TGfU) model by conducting action research with physical education teachers who have long teaching experience. The research applied Participatory Action Research. The subjects of this research were 19 physical education teachers who had received training in Teaching Games for Understanding. Data collection was conducted intensively through a questionnaire, in-depth interviews, Focus Group Discussion (FGD), observation, and documentation. The collected data were analyzed qualitatively and quantitatively. The results showed that the physical education teachers had an appropriate interpretation of the TGfU model. The indicators that were the focus of this research point to the following: (1) physical education teachers had a good understanding of the TGfU model; (2) PE teachers' competence in applying the TGfU model in physical education at school was adequate, though some improvement was needed; (3) the influencing factors in the implementation of the TGfU model were, in order, teacher, facilities, environment, and student factors; (4) PE teachers' perspectives toward the TGfU model were positive, although some teachers were less optimistic about the development of the TGfU model in the future.

Keywords: TGfU, physical education teacher, teaching games, FGD

Procedia PDF Downloads 541
17391 VTOL-FW Mode-Transitioning UAV Design and Analysis

Authors: Ferit Çakici, M. Kemal Leblebicioğlu

Abstract:

In this study, an unmanned aerial vehicle (UAV) with level flight, vertical take-off and landing (VTOL), and mode-transitioning capability is designed and analyzed. The platform design combines both multirotor and fixed-wing (FW) conventional airplane structures and control surfaces and is therefore named VTOL-FW. The aircraft is modeled using aerodynamic principles, and linear models are constructed utilizing small perturbation theory for trim conditions. The proposed method of control includes the implementation of multirotor and airplane mode controllers and the design of an algorithm to transition between modes, achieving smooth switching maneuvers between VTOL and FW flight. Thus, the VTOL-FW UAV's flight characteristics are expected to be improved by enlarging the operational flight envelope through mode-transitioning, agile maneuvers, and increased survivability. Experiments conducted in simulation and real-world environments show that the VTOL-FW UAV has both multirotor and airplane characteristics, with the extra benefit of an enlarged flight envelope.
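A generic sketch of how small-perturbation linearization about a trim point can be carried out numerically with finite-difference Jacobians; the toy dynamics below are placeholders, not the VTOL-FW model:

```python
import numpy as np

def dynamics(x, u):
    """Placeholder nonlinear dynamics x_dot = f(x, u); not the actual UAV model."""
    v, gamma = x                      # airspeed, flight-path angle (toy states)
    thrust, elev = u
    return np.array([thrust - 0.05 * v**2 - 9.81 * np.sin(gamma),
                     0.002 * v**2 + elev - 9.81 * np.cos(gamma) / max(v, 1e-3)])

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx, B = df/du at the reference (trim) point."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

x_trim = np.array([20.0, 0.0]); u_trim = np.array([20.0, 0.0])
A, B = linearize(dynamics, x_trim, u_trim)
print(A); print(B)
```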

Keywords: aircraft design, linear analysis, mode transitioning control, UAV

Procedia PDF Downloads 390
17390 Interpretation of Ultrasonic Backscatter of Linear FM Chirp Pulses from Targets Having Frequency-Dependent Scattering

Authors: Stuart Bradley, Mathew Legg, Lilyan Panton

Abstract:

Ultrasonic remote sensing is a useful tool for assessing the interior structure of complex targets. For these methods, significantly enhanced spatial resolution is obtained if the pulse is coded, for example using a linearly changing frequency during the pulse duration. Such pulses have a time-dependent spectral structure. Interpretation of the backscatter from targets is, therefore, complicated if the scattering is frequency-dependent. While analytic models are well established for steady sinusoidal excitations applied to simple shapes such as spheres, such models do not generally exist for temporally evolving excitations. Therefore, models are developed in the current paper for handling such signals so that the properties of the targets can be quantitatively evaluated while maintaining very high spatial resolution. Laboratory measurements on simple shapes are used to confirm the validity of the models.
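A minimal sketch of the pulse-compression (matched-filter) processing that gives coded linear FM pulses their enhanced spatial resolution; all parameters are hypothetical and unrelated to the laboratory setup:

```python
import numpy as np

fs = 200e3                      # sample rate (Hz)
T = 2e-3                        # pulse duration (s)
f0, f1 = 20e3, 60e3             # chirp start/end frequencies (Hz)

t = np.arange(0, T, 1 / fs)
k = (f1 - f0) / T               # chirp rate (Hz/s)
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Simulated echo: the chirp delayed inside a longer noisy record.
echo = np.zeros(4000)
delay = 1500
echo[delay:delay + chirp.size] += chirp
echo += 0.5 * np.random.default_rng(1).standard_normal(echo.size)

# Matched filter = correlation with the transmitted chirp; the output peak
# marks the target range bin with much finer resolution than the raw pulse.
compressed = np.correlate(echo, chirp, mode="valid")
print(int(np.argmax(np.abs(compressed))))   # ~1500, the injected delay
```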

Keywords: linear FM chirp, time-dependent acoustic scattering, ultrasonic remote sensing, ultrasonic scattering

Procedia PDF Downloads 312
17389 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
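A heavily simplified sketch of the kind of measure-selection MILP the single-stage approach solves, written with the open-source PuLP package; the buildings, measures, costs, savings, and objective weighting are all invented:

```python
import pulp

# Hypothetical portfolio data: (building, measure) -> (investment cost, annual CO2 saving)
measures = {
    ("B1", "insulation"): (40, 6.0), ("B1", "heat_pump"): (60, 14.0),
    ("B2", "insulation"): (55, 8.0), ("B2", "heat_pump"): (70, 16.0),
    ("B3", "pv"):         (30, 5.0),
}
years = [2025, 2026, 2027]
budget_per_year = 80            # limited yearly budget (same cost units)

prob = pulp.LpProblem("modernization_pathway", pulp.LpMaximize)
x = {(b, m, y): pulp.LpVariable(f"x_{b}_{m}_{y}", cat="Binary")
     for (b, m) in measures for y in years}

# Each measure may be implemented at most once over the planning horizon.
for (b, m) in measures:
    prob += pulp.lpSum(x[b, m, y] for y in years) <= 1

# Yearly budget constraint.
for y in years:
    prob += pulp.lpSum(measures[b, m][0] * x[b, m, y] for (b, m) in measures) <= budget_per_year

# Objective: maximize total CO2 savings, weighted by remaining years of operation.
prob += pulp.lpSum(measures[b, m][1] * (years[-1] - y + 1) * x[b, m, y]
                   for (b, m) in measures for y in years)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for key, var in x.items():
    if var.value() == 1:
        print(key)     # selected (building, measure, year) decisions
```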

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 24
17388 Influence of the Low Frequency Ultrasound on the Cadmium (II) Biosorption by an Ecofriendly Biocomposite (Extraction Solid Waste of Ammi visnaga / Calcium Alginate): Kinetic Modeling

Authors: L. Nouri Taiba, Y. Bouhamidi, F. Kaouah, Z. Bendjama, M. Trari

Abstract:

In the present study, an ecofriendly biocomposite, namely calcium alginate-immobilized Ammi visnaga (Khella) extraction waste (SWAV/CA), was prepared by the electrostatic extrusion method and used for cadmium biosorption from the aqueous phase, with and without the assistance of ultrasound, in batch conditions. The influence of low frequency ultrasound (37 and 80 kHz) on the cadmium biosorption kinetics was studied. The obtained results show that ultrasonic irradiation significantly enhances and improves the efficiency of cadmium removal. The pseudo-first-order, pseudo-second-order, intraparticle diffusion, and Elovich models were evaluated using the non-linear curve fitting analysis method. Modeling of the kinetic results shows that the biosorption process is best described by the pseudo-second-order and Elovich models, in both the absence and presence of ultrasound.
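As an illustration of the non-linear curve-fitting step, a small sketch fitting the pseudo-second-order model q(t) = k*qe^2*t / (1 + k*qe*t) to made-up uptake data:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """Pseudo-second-order kinetic model for the sorbed amount q(t)."""
    return (k * qe**2 * t) / (1.0 + k * qe * t)

# Hypothetical kinetic data: contact time (min) and Cd(II) uptake (mg/g).
t_obs = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
q_obs = np.array([8.2, 13.5, 19.8, 25.1, 27.3, 28.9, 29.6])

popt, _ = curve_fit(pseudo_second_order, t_obs, q_obs, p0=[30.0, 0.01])
qe_fit, k_fit = popt
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k_fit:.4f} g/(mg*min)")
```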

Keywords: biocomposite, biosorption, cadmium, non-linear analysis, ultrasound

Procedia PDF Downloads 274
17387 Geomechanical Numerical Modeling of Well Wall in Drilling with Finite Difference Method

Authors: Marzieh Zarei

Abstract:

Well instability is one of the most fundamental challenges faced by the oil and gas industry, and well wall stability analysis is a gap to be filled. The collection of static data such as well logs leads to the construction of a geomechanical numerical model, which helps in assessing the probable risks in future drilling. In this paper, a geomechanical model was designed, and the mechanical properties of the rock were determined at all points of the model. The safe mud window was determined, with the minimum and maximum mud pressures found to lie in the ranges of 60-70 MPa and 100-110 MPa, respectively.

Keywords: geomechanics, numerical model, well stability, in-situ stress, underbalanced drilling

Procedia PDF Downloads 120
17386 Design and Analysis of Shielding Magnetic Field for Active Space Radiation Protection

Authors: Chaoyan Huang, Hongxia Zheng

Abstract:

For deep space exploration and long-duration interplanetary manned missions, protection of astronauts from cosmic radiation is an unavoidable problem. However, passive shielding is of little effectiveness in protecting against particles whose energies are greater than 1 GeV/nucleon. In this study, an active magnetic protection method is adopted. Taking into account the structure and size of the end-cap, eight shielding magnetic field configurations are designed based on the Hoffman configuration. The shielding effects of the shielding magnetic field structure, intensity B, and thickness L on H particles with 2 GeV energy are compared by test particle simulation. The results show that the shielding effect is better with the linear-type magnetic field structure in the end-cap region. Furthermore, two magnetic field configurations with better shielding effects are investigated using the H and He galactic cosmic ray spectra, and the shielding effect of the linear-type configuration adopted in both the barrel and end-cap regions is found to be the best.

Keywords: galactic cosmic rays, active protection, shielding magnetic field configuration, shielding effect

Procedia PDF Downloads 140
17385 Study of the Protection of Induction Motors

Authors: Bencheikh Abdellah

Abstract:

In this paper, we present a mathematical model dedicated to the simulation of broken bars in a three-phase cage induction motor. This model is based on a mesh circuit representing the rotor cage. The simulations carried out demonstrate the effectiveness of this model in describing the behavior of the machine in both healthy and faulty states.

Keywords: AC motors, squirrel cage, diagnostics, MATLAB, SIMULINK

Procedia PDF Downloads 432
17384 Dynamic Model of Heterogeneous Markets with Imperfect Information for the Optimization of Company's Long-Time Strategy

Authors: Oleg Oborin

Abstract:

This paper is dedicated to the development of a model which can be used to evaluate the effectiveness of long-term corporate strategies and identify the best strategies. A theoretical model of a relatively homogeneous product market (such as the iron and steel industry, mobile services, or road transport) has been developed. In the model, the market consists of a large number of companies with different internal characteristics and objectives. The companies can perform mergers and acquisitions in order to increase their market share. The model allows the simulation of the long-term dynamics of the market (for a period longer than 20 years). A large number of simulations on random input data were conducted within the framework of the model. After that, the results of the model were compared with the dynamics of real markets, such as the US steel industry from the beginning of the XX century to the present day, and the market for mobile services in Germany for the period between 1990 and 2015.

Keywords: economic modelling, long-time strategy, mergers and acquisitions, simulation

Procedia PDF Downloads 364
17383 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines

Authors: Xiaogang Li, Jieqiong Miao

Abstract:

As the prediction accuracy of the grey forecasting model is low, an improved grey prediction model is put forward. First, a trigonometric function is used to transform the original data sequence in order to improve the smoothness of the data; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine, giving the grey support vector machine model (SGM-SVM). Before the establishment of the model, trigonometric functions and the accumulated generating operation are used to preprocess the data in order to enhance its smoothness and weaken its randomness; a support vector machine (SVM) is then used to establish a prediction model for the preprocessed data, and the model parameters are selected using genetic algorithms to obtain the global optimum. Finally, the data are restored through the "regressive generation" operation to obtain the forecast. In order to show that the SGM-SVM model is superior to other models, battery life data from CALCE were selected. The presented model was used to predict battery life, and the predicted results were compared with those of the grey model and the support vector machine. For a more intuitive comparison of the three models, this paper presents the root mean square error of the three models. The results show that the grey support vector machine (SGM-SVM) gives the best life prediction, with a root mean square error of only 3.18%.
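A rough sketch of the preprocessing-plus-SVR idea (trigonometric smoothing, accumulated generating operation, SVR fit, then inverse restoration); the data, kernel, and hyperparameters are placeholders, and the genetic-algorithm parameter search is omitted:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical capacity-fade sequence (e.g. normalized battery capacity per cycle block).
x0 = np.array([1.00, 0.97, 0.95, 0.92, 0.90, 0.87, 0.85, 0.82])

# Step 1: trigonometric transform to improve smoothness (invertible since values <= 1).
y = np.sin(x0)
# Step 2: first-order accumulated generating operation (1-AGO).
y_ago = np.cumsum(y)

# Step 3: fit an SVR to the accumulated sequence indexed by time.
t = np.arange(len(y_ago)).reshape(-1, 1)
model = SVR(kernel="linear", C=100.0, epsilon=0.001).fit(t, y_ago)

# Step 4: forecast the next accumulated value, then restore via inverse AGO and arcsin.
next_ago = model.predict(np.array([[len(y_ago)]]))[0]
next_y = next_ago - y_ago[-1]              # inverse accumulation ("regressive generation")
next_x = np.arcsin(np.clip(next_y, -1, 1)) # inverse trigonometric transform
print(f"next restored value: {next_x:.3f}")
```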

Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error

Procedia PDF Downloads 456
17382 The Role of Team Efficacy and Coaching on the Relationships between Distributive and Procedural Justice and Job Engagement

Authors: Yoonhee Cho, Gye-Hoon Hong

Abstract:

This study focuses on the roles of distributive and procedural justice in job engagement. Additionally, the study examines whether situational factors such as team efficacy and team leaders' coaching moderate the relationship between distributive and procedural justice and job engagement. Ordinary linear regression was used to analyze data from seven South Korean companies (total N=346). The results confirmed the hypothesized model, indicating that both distributive and procedural justice were positively related to the job engagement of employees. Team efficacy and team leaders' coaching moderated the relationship between distributive justice and job engagement, whereas a non-significant result was found for procedural justice. The fact that the effects of the two types of justice and the interactive effects of the two situational variables differed implies that different managerial strategies should be used when job engagement is to be enhanced.
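A sketch of the moderated (interaction) regression used to test such hypotheses, on a synthetic dataset standing in for the survey responses; variable names and effect sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 346
df = pd.DataFrame({
    "distributive_justice": rng.normal(3.5, 0.8, n),
    "team_efficacy": rng.normal(3.8, 0.7, n),
})
# Synthetic outcome with a built-in interaction effect, for illustration only.
df["job_engagement"] = (0.4 * df["distributive_justice"]
                        + 0.3 * df["team_efficacy"]
                        + 0.2 * df["distributive_justice"] * df["team_efficacy"]
                        + rng.normal(0, 0.5, n))

# The '*' in the formula expands to both main effects plus their interaction term;
# a significant interaction coefficient indicates moderation by team efficacy.
model = smf.ols("job_engagement ~ distributive_justice * team_efficacy", data=df).fit()
print(model.summary().tables[1])
```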

Keywords: coaching, distributive justice, job engagement, procedural justice, team efficacy

Procedia PDF Downloads 549
17381 Performance Evaluation of Task Scheduling Algorithm on LCQ Network

Authors: Zaki Ahmad Khan, Jamshed Siddiqui, Abdus Samad

Abstract:

The scheduling and mapping of tasks on a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. The proposed multiprocessor is a hybrid network which combines the features of both linear-type and cube-based architectures. Two standard dynamic scheduling schemes, namely the Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS) schemes, are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and a thorough analysis of the results is carried out to obtain the best solution for the given network in terms of residual load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated with the given dynamic algorithms.
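For illustration only (the actual MDS and TRS rules and the LCQ topology are defined in the paper), a toy load-balancing round in the spirit of minimum-distance migration, where each node donates a task to its least-loaded directly connected neighbour:

```python
# Adjacency of a small hypothetical processor network (stand-in for the LCQ topology).
neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
load = {0: 9, 1: 2, 2: 4, 3: 1}          # number of tasks currently on each processor

def balance_step(load, neighbours):
    """One round: every node donates a task to its least-loaded neighbour
    whenever the load gap exceeds one task (minimum-distance style migration)."""
    new_load = dict(load)
    for node in load:
        target = min(neighbours[node], key=lambda n: new_load[n])
        if new_load[node] - new_load[target] > 1:
            new_load[node] -= 1
            new_load[target] += 1
    return new_load

for _ in range(5):                         # iterate a few rounds
    load = balance_step(load, neighbours)
print(load, "imbalance:", max(load.values()) - min(load.values()))
```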

Keywords: dynamic algorithm, load imbalance, mapping, task scheduling

Procedia PDF Downloads 447
17380 Modelling Export Dynamics in the CSEE Countries Using GVAR Model

Authors: S. Jakšić, B. Žmuk

Abstract:

The paper investigates the key factors of export dynamics for a set of Central and Southeast European (CSEE) countries in the context of the current economic and financial crisis. In order to model the export dynamics, a Global Vector Autoregressive (GVAR) model is defined. As opposed to models which treat each country separately, the GVAR combines all country models into a global model, which enables obtaining important information on spill-over effects in the context of globalization and rising international linkages. The results of the study indicate that for most of the CSEE countries, exports are mainly driven by domestic shocks, both in the short run and in the long run. This study is the first application of the GVAR model to the export dynamics of the CSEE countries, and the results therefore present an important empirical contribution.

Keywords: export, GFEVD, global VAR, international trade, weak exogeneity

Procedia PDF Downloads 295
17379 Control of Spherical Robot with Sliding Mode

Authors: Roya Khajepour, Alireza B. Novinzadeh

Abstract:

A major issue with a spherical robot is the surface shape, which is not always predictable. This means that, given only the dynamic model of the robot, it is not possible to control the robot. Since in certain conditions it is not possible to measure surface friction, control methods must be prepared for these conditions. Moreover, although a spherical robot never becomes unstable or topples thanks to its special shape, since it moves by rolling it has a non-holonomic constraint at the point of contact and is therefore considered a non-holonomic system. The existence of such a point leads to complexity and non-linearity of the robot's kinematic equations and makes the control problem difficult. Due to the non-linear dynamics and the presence of uncertainty, sliding-mode control is employed. The proposed method is based on Lyapunov theory and guarantees system stability. This controller is insensitive to external disturbances and unmodeled dynamics.
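A generic sliding-mode tracking sketch for a second-order system with bounded disturbance, illustrating the Lyapunov-motivated sliding surface and switching term; the toy dynamics and gains below are placeholders, not the spherical-robot model:

```python
import numpy as np

# Toy uncertain double integrator: x_ddot = u + d(t), with |d| <= d_max.
dt, steps = 0.001, 5000
lam, k, d_max = 5.0, 3.0, 1.0             # surface slope and switching gain (k > d_max)

x, x_dot = 1.0, 0.0                        # initial tracking error and its rate
log = []
for i in range(steps):
    s = x_dot + lam * x                    # sliding surface s = e_dot + lambda * e
    u = -lam * x_dot - k * np.tanh(s / 0.05)   # equivalent control + smoothed switching term
    d = d_max * np.sin(0.01 * i)           # unknown bounded disturbance
    x_ddot = u + d
    x_dot += x_ddot * dt
    x += x_dot * dt
    log.append(x)
print(f"final tracking error: {log[-1]:.4f}")   # driven close to zero despite d(t)
```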

Keywords: sliding mode, spherical robot, non-holonomic constraint, system stability

Procedia PDF Downloads 385
17378 Graphene-Based Nanocomposites for Glucose and Ethanol Enzymatic Biosensor Fabrication

Authors: Tesfaye Alamirew, Delele Worku, Solomon W. Fanta, Nigus Gabbiye

Abstract:

Recently, graphene-based nanocomposites have become an emerging research area for the fabrication of enzymatic biosensors due to their large surface area, conductivity, and biocompatibility. This review summarizes recent research reports on graphene-based nanocomposites for the fabrication of glucose and ethanol enzymatic biosensors. The newly fabricated enzyme-free microwave-treated nitrogen-doped graphene (MN-d-GR) provided the highest sensitivity towards glucose, and the GCE/rGO/AuNPs/ADH composite provided by far the highest sensitivity towards ethanol compared with other reported graphene-based nanocomposites. The MWCNT/GO/GOx and GCE/ErGO/PTH/ADH nanocomposites also offered wide linear ranges for glucose and ethanol detection, respectively. In general, graphene-based nanocomposite enzymatic biosensors exhibit fast direct electron transfer rates, high sensitivity, and wide linear detection ranges for glucose and ethanol sensing.

Keywords: glucose, ethanol, enzymatic biosensor, graphene, nanocomposite

Procedia PDF Downloads 121
17377 Simplified 3R2C Building Thermal Network Model: A Case Study

Authors: S. M. Mahbobur Rahman

Abstract:

Whole building energy simulation models are widely used for predicting future energy consumption, performance diagnosis, and optimum control. The black-box building energy modeling approach has been heavily studied in the past decade. The thermal response of a building can also be modeled using a network of interconnected resistors (R) and capacitors (C) at each node, called an R-C network. In this study, a model building, Case 600, as described in the "Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs", ASHRAE Standard 140, is studied along with a 3R2C thermal network model and the ASHRAE clear sky solar radiation model. Although building an energy model involves two important building components, i.e., the envelope and the internal mass, the effect of the building's internal mass is not considered in this study. All the characteristic parameters of the building envelope are evaluated as in Case 600. Finally, the monthly building energy consumption from the thermal network model is compared with a simple-box energy model and agrees within reasonable accuracy. From the results, a 0.6-9.4% variation in monthly energy consumption is observed because of the south-facing windows.
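A compact sketch of one possible 3R2C arrangement (a wall node and an indoor-air node, with resistances to outdoor air and between them) integrated with SciPy; the parameter values and gain schedule are illustrative assumptions, not those of Case 600:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 3R2C parameters (not Case 600 values).
R1, R2, R3 = 0.010, 0.005, 0.030   # K/W: outdoor-wall, wall-indoor, outdoor-indoor (window) paths
C1, C2 = 5.0e6, 8.0e5              # J/K: wall mass, indoor air and furnishings

def outdoor_temp(t):
    """Sinusoidal outdoor temperature over the day (deg C)."""
    return 10.0 + 8.0 * np.sin(2 * np.pi * t / 86400.0)

def gains(t):
    """Hypothetical solar plus internal gains on the indoor node (W)."""
    return 800.0 if 8 * 3600 < (t % 86400) < 18 * 3600 else 100.0

def rc_network(t, T):
    T1, T2 = T                      # wall node, indoor node
    To = outdoor_temp(t)
    dT1 = ((To - T1) / R1 + (T2 - T1) / R2) / C1
    dT2 = ((T1 - T2) / R2 + (To - T2) / R3 + gains(t)) / C2
    return [dT1, dT2]

sol = solve_ivp(rc_network, (0, 3 * 86400), [15.0, 20.0], max_step=300.0)
print(f"indoor temperature range: {sol.y[1].min():.1f} to {sol.y[1].max():.1f} deg C")
```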

Keywords: ASHRAE case study, clear sky solar radiation model, energy modeling, thermal network model

Procedia PDF Downloads 140
17376 Development of a Model for Predicting Radiological Risks in Interventional Cardiology

Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, a prediction of the patient's peak skin dose is important so that the post-operative care given to the patient can be improved. The objective of this study is to estimate, before the intervention, the patient dose for 'Chronic Total Occlusion (CTO)' procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the 'Interventional Cardiology (IC)' department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The Peak Skin Dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators allowed their influence on the dose to be specified. A predictive model of the dose was created using multiple linear regression. Results: Out of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field. 96 2D dose maps were finally used. The influencing factors having the highest correlation with the PSD are the patient's diameter and the J-CTO score. The predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy. The mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimation is more accurate than that provided by the air kerma.
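A minimal sketch of the multiple-linear-regression step, with synthetic stand-ins for the recorded patient diameter and J-CTO score (coefficients and units are invented, not the study's fitted model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 96
diameter = rng.normal(28.0, 4.0, n)            # patient diameter (cm), synthetic
jcto = rng.integers(0, 4, n).astype(float)     # J-CTO complexity score, synthetic
# Synthetic peak skin dose (Gy) loosely increasing with both indicators.
psd = 0.08 * diameter + 0.6 * jcto + rng.normal(0, 0.5, n)

X = np.column_stack([diameter, jcto])
model = LinearRegression().fit(X, psd)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Pre-procedure estimate for a new hypothetical patient.
print("estimated PSD:", model.predict([[30.0, 2.0]])[0], "Gy")
```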

Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose

Procedia PDF Downloads 132
17375 Maturity Model for Agro-Industrial Logistics

Authors: Erika Tatiana Ruiz, Wilson Adarme Jaimes

Abstract:

This abstract presents a methodology for improving the logistics processes of agricultural production units belonging to the coffee, cocoa, and fruit sectors, starting from the fundamental concepts and detailing each of the phases needed to carry out the diagnosis, which will be the basis for the formulation of the action plan and implementation of the maturity model. As a result of this work, a maturity model is formulated to improve logistics processes. This model seeks to: generate a progressive model that is useful for all productive units belonging to these sectors at the national level, regardless of their initial conditions; focus on the improvement of logistics processes as a strategy that contributes to improving the competitiveness of the agricultural sector in Colombia; and spread the implementation of good postharvest logistics practices in all departments of the country through autonomous tools. The model has been built through a series of steps that allow the evaluation and improvement of the logistics dimensions or indicators. The potential improvements for each dimension provide the foundation on which to advance to the next level. Within the maturity model, a methodology is indicated for the design and execution of strategies to improve logistics processes, taking into account the current state of each production unit.

Keywords: agroindustrial, characterization, logistics, maturity model, processes

Procedia PDF Downloads 135
17374 Thermal and Geometric Effects on Nonlinear Response of Incompressible Hyperelastic Cylindrical Shells

Authors: Morteza Shayan Arani, Mohammadamin Esmailzadehazimi, Mohammadreza Moeini, Mohammad Toorani, Aouni A. Lakis

Abstract:

This paper investigates the nonlinear response of thin, incompressible, hyperelastic cylindrical shells in the presence of a time-varying temperature field while considering initial geometric imperfections. The governing equations of motion are derived using an improved Donnell's shallow shell theory. The hyperelastic material is modeled using the Mooney-Rivlin model with two parameters, incorporating temperature-dependent terms. The Lagrangian method is applied to obtain the equation of motion. The resulting governing equation is addressed through the Lindstedt-Poincaré and Multiple Scale methods. The linear and nonlinear models presented in this study are verified against existing open literature, demonstrating the accuracy and reliability of the presented model. The study focuses on understanding the influence of temperature variations and geometrical imperfections on the natural frequency and amplitude-frequency response of the systems. Notably, the investigation reveals the coexistence of hardening and softening peaks in the amplitude-frequency response, which vary in magnitude depending on these parameters. Additionally, resonance peaks exhibit changes as a result of temperature and geometric imperfections.

Keywords: hyperelastic material, cylindrical shell, geometrical nonlinearity, material nonlinearity, initial geometric imperfection, temperature gradient, hardening and softening

Procedia PDF Downloads 68
17373 Approximation of Convex Set by Compactly Semidefinite Representable Set

Authors: Anusuya Ghosh, Vishnu Narayanan

Abstract:

The approximation of a convex set by a semidefinite representable set plays an important role in semidefinite programming, especially in modern convex optimization. Optimizing a linear function over a general convex set is a hard problem, but optimizing the linear function over a semidefinite representable set which approximates the convex set is easy, as numerous efficient algorithms exist for solving semidefinite programming problems. Our approximation technique is therefore significant in optimization. We develop a technique to approximate any closed convex set, say K, by a compactly semidefinite representable set. Further, we prove that there exists a sequence of compactly semidefinite representable sets which gives progressively tighter approximations of the closed convex set K. We discuss the convergence of the sequence of compactly semidefinite representable sets to the closed convex set K. The recession cone of K and the recession cone of the compactly semidefinite representable set are equal, so we say that the sequence of compactly semidefinite representable sets converges strongly to the closed convex set. Thus, this approximation technique is a very useful development in semidefinite programming.

Keywords: semidefinite programming, semidefinite representable set, compactly semidefinite representable set, approximation

Procedia PDF Downloads 377
17372 Starlink Satellite Collision Probability Simulation Based on Simplified Geometry Model

Authors: Toby Li, Julian Zhu

Abstract:

In this paper, a model based on a simplified geometry is introduced to give a very conservative collision probability prediction for Starlink satellites in their most densely clustered region. Under the model in this paper, the probability of collision for a Starlink satellite in the most densely clustered region is found to be 8.484 × 10^-4. It is found that the predicted collision probability increases nonlinearly with the safety distance set. This simple model provides evidence that the continuous development of maneuver avoidance systems is necessary for the future orbital safety of satellites under the harsher Low Earth Orbit environment.

Keywords: Starlink, collision probability, debris, geometry model

Procedia PDF Downloads 74
17371 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model

Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han

Abstract:

Due to climate change, coastal urban areas repeatedly suffer losses of property and life from flooding. There are three main causes of inland submergence. First, when high-intensity heavy rain occurs, the inland water cannot be drained into rivers because of the increase in impervious surfaces from land development and defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising sea water. However, previous studies ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of separate analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. A Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate were added to the shallow water equations to include the inland flooding analysis module. The application of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the coastal inundation – inland and river flow interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where complex flooding occurs and to assist in analyzing damage to coastal cities. Acknowledgements: This research was supported by the grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.

Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model

Procedia PDF Downloads 359
17370 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises the model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as the number of observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive additional computations. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are directly used. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three which are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
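A toy numpy illustration (a simple normal model, not the stutter-ratio models of the study) of the raw and truncated importance weights described above; the truncation rule shown, capping weights at sqrt(S) times their mean, is one common choice and an assumption here:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, size=20)                 # observed data (toy normal model)
mu_draws = rng.normal(y.mean(), 0.2, size=4000)   # stand-in posterior draws for the mean

loo_is, loo_tis = [], []
for yi in y:
    dens = norm.pdf(yi, loc=mu_draws, scale=1.0)  # p(y_i | theta_s) for each posterior draw
    w = 1.0 / dens                                # raw importance weights (reciprocal densities)
    # IS-LOO: weighted average of predictive densities with raw weights.
    loo_is.append(np.log(np.sum(w * dens) / np.sum(w)))
    # TIS-LOO: truncate large weights at sqrt(S) * mean(w) before averaging.
    w_t = np.minimum(w, np.sqrt(w.size) * w.mean())
    loo_tis.append(np.log(np.sum(w_t * dens) / np.sum(w_t)))

print("IS-LOO elpd:", sum(loo_is))
print("TIS-LOO elpd:", sum(loo_tis))
```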

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 388
17369 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated "good" model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resulting skewness and kurtosis worsened compared to the raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
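For illustration only, a compact numpy sketch of the linear-equating adjustment described above (a mean-slope rescaling of new-form scores onto the reference-form scale); the score data are synthetic, and the mean-sigma method proper applies the same kind of mean-slope adjustment to anchor-item parameters rather than raw scores:

```python
import numpy as np

rng = np.random.default_rng(11)
form_q = rng.normal(70, 10, 134)        # reference form (Form Q) scores, synthetic
form_r = rng.normal(65, 12, 134)        # new form (Form R) scores, synthetic

def linear_equate(x, ref, new):
    """Map a new-form score x onto the reference-form scale via a mean-slope adjustment."""
    slope = ref.std(ddof=1) / new.std(ddof=1)
    return ref.mean() + slope * (x - new.mean())

equated = linear_equate(form_r, form_q, form_r)
print("reference mean/sd:", form_q.mean().round(2), form_q.std(ddof=1).round(2))
print("equated   mean/sd:", equated.mean().round(2), equated.std(ddof=1).round(2))
```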

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 191
17368 Modeling and Validation of Microspheres Generation in the Modified T-Junction Device

Authors: Lei Lei, Hongbo Zhang, Donald J. Bergstrom, Bing Zhang, K. Y. Song, W. J. Zhang

Abstract:

This paper presents a model of a modified T-junction device for microsphere generation. The numerical model is developed using a commercial software package, COMSOL Multiphysics. In order to test the accuracy of the numerical model, multiple variables, such as the cross-flow rate, the fluid properties, and the structure and geometry of the microdevice, are applied. The results from the model are compared with the experimental results in terms of the diameter of the generated microspheres. The comparison shows good agreement; therefore, the model is useful for further optimization of the device and for feedback control of microsphere generation, if needed.

Keywords: CFD modeling, validation, microsphere generation, modified T-junction

Procedia PDF Downloads 699