Search results for: optimized summarization models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8070

7200 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach

Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani

Abstract:

This study focuses on the development of prediction models for ozone concentration time series. The prediction model is built on a chaotic approach. First, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. The prediction model is then built, and the local linear approximation method is used for forecasting. A traditional autoregressive linear prediction model is also built for comparison, and an improvement to the local linear approximation method is introduced. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the mean absolute error, root mean squared error, and correlation coefficient shows that the improved prediction method performs best. The chaotic approach is therefore well suited to developing prediction models for ozone concentration time series.
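
As a rough illustration of the forecasting step described above, the sketch below reconstructs the phase space by delay embedding and predicts one step ahead with a local linear map fitted to nearest neighbours. The embedding dimension, delay, and neighbour count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase-space reconstruction: row j is the delay vector
    [x(j), x(j+tau), ..., x(j+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_linear_predict(x, dim=4, tau=1, k=20):
    """One-step-ahead forecast from a local linear (affine) map fitted to the
    k nearest neighbours of the latest delay vector."""
    X = delay_embed(x, dim, tau)
    targets = x[(dim - 1) * tau + 1 :]        # value following each delay vector
    hist, y = X[: len(targets)], targets      # vectors that have a known "next value"
    query = X[-1]                             # most recent state, to be advanced

    idx = np.argsort(np.linalg.norm(hist - query, axis=1))[:k]
    A = np.column_stack([hist[idx], np.ones(k)])          # affine local model
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(query, 1.0) @ coef

x = np.sin(0.2 * np.arange(500)) + 0.05 * np.random.default_rng(0).normal(size=500)
print(local_linear_predict(x))
```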

Keywords: chaotic approach, phase space, Cao method, local linear approximation method

Procedia PDF Downloads 325
7199 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model, known as the Signal-to-Interference-plus-Noise Ratio (SINR) model. The main issue is to compute schedules with the minimum number of timeslots, that is, minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem under unit-sized and unbounded-sized message models, we investigate the problem under the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, this is the first result for the data collection problem with the bounded-sized message model under both interference models.
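
For intuition, a set of simultaneous transmissions is feasible under the physical interference model when every receiver's SINR clears a threshold β. A minimal sketch of that feasibility check follows; the power, path-loss exponent, noise, and threshold values are illustrative assumptions.

```python
import numpy as np

def sinr_feasible(senders, receivers, power=1.0, alpha=3.0, noise=1e-9, beta=1.5):
    """True if every concurrent link i (senders[i] -> receivers[i]) satisfies
    SINR_i = P*d_ii^-alpha / (noise + sum_{j != i} P*d_ji^-alpha) >= beta."""
    senders = np.asarray(senders, float)
    for i, rx in enumerate(np.asarray(receivers, float)):
        gains = power * np.linalg.norm(senders - rx, axis=1) ** -alpha
        signal = gains[i]
        interference = gains.sum() - signal
        if signal / (noise + interference) < beta:
            return False
    return True

# two well-separated links can share a timeslot; a schedule is a partition of
# links into such feasible sets, and its length is the number of timeslots
print(sinr_feasible(senders=[(0, 0), (10, 0)], receivers=[(1, 0), (9, 0)]))
```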

Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks

Procedia PDF Downloads 216
7198 Switched System Diagnosis Based on Intelligent State Filtering with Unknown Models

Authors: Nada Slimane, Foued Theljani, Faouzi Bouani

Abstract:

The paper addresses the problem of fault diagnosis for systems operating in several modes (normal or faulty) based on state assessment. For this purpose, we use a methodology consisting of three main processes: 1) sequential data clustering, 2) linear model regression, and 3) state filtering. Typically, the Kalman Filter (KF) is an algorithm that provides estimates of unknown states from a sequence of I/O measurements. Although it is an efficient technique for state estimation, it presents two main weaknesses. First, it merely predicts states without being able to isolate or classify them according to their operating modes, whether normal or faulty. To deal with this, the KF is endowed with an extra clustering step based on a fully sequential version of the k-means algorithm. Second, to provide state estimates, the KF requires state-space models, which can be unknown; a linear regularized regression is used to identify the required models. To prove its effectiveness, the proposed approach is assessed on a simulated benchmark.
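
A minimal sketch of the two supporting steps named above, a sequential (online) version of k-means and a regularized (ridge) regression for identifying an unknown linear model, is given below; dimensions and the regularization weight are illustrative assumptions.

```python
import numpy as np

def sequential_kmeans(points, k):
    """Online k-means: each incoming sample pulls its nearest centroid toward
    itself with a decaying step (a running mean per cluster)."""
    centroids = points[:k].copy()
    counts = np.ones(k)
    labels = np.empty(len(points), dtype=int)
    for t, p in enumerate(points):
        j = np.argmin(np.linalg.norm(centroids - p, axis=1))
        counts[j] += 1
        centroids[j] += (p - centroids[j]) / counts[j]
        labels[t] = j
    return centroids, labels

def ridge_identify(X, Y, lam=1e-2):
    """Regularized least squares for an unknown linear model Y ~ X @ Theta,
    usable per cluster to obtain one state-space model per operating mode."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
```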

Keywords: clustering, diagnosis, Kalman Filtering, k-means, regularized regression

Procedia PDF Downloads 177
7197 Application Methodology for the Generation of 3D Thermal Models Using UAV Photogrammetry and Dual Sensors for Mining/Industrial Facilities Inspection

Authors: Javier Sedano-Cibrián, Julio Manuel de Luis-Ruiz, Rubén Pérez-Álvarez, Raúl Pereda-García, Beatriz Malagón-Picón

Abstract:

Structural inspection activities are necessary to ensure the correct functioning of infrastructures. Unmanned Aerial Vehicle (UAV) techniques have become more popular than traditional techniques; in particular, UAV photogrammetry allows time and cost savings, and the development of this technology has permitted the use of low-cost thermal sensors on UAVs. The representation of 3D thermal models with this type of equipment is in continuous evolution, and the direct processing of thermal images usually leads to errors and inaccurate results. A methodology is proposed for the generation of 3D thermal models using dual sensors, which involves the application of visible Red-Green-Blue (RGB) and thermal images in parallel. The RGB images are used as the basis for the generation of the model geometry, and the thermal images are the source of the surface temperature information that is projected onto the model. The resulting representations of mining/industrial facilities can be used for inspection activities.
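
The core of the dual-sensor idea is projecting each 3D point of the RGB-derived model into the thermal image to sample a surface temperature. A minimal pinhole-camera sketch follows; the intrinsics, pose, and image used in the demo lines are invented placeholders.

```python
import numpy as np

def project_temperatures(points, K, R, t, thermal_img):
    """Sample a temperature for each 3D point of the RGB-derived model.
    points: (N, 3) world coordinates; K: (3, 3) thermal-camera intrinsics;
    R, t: world-to-camera rotation and translation; thermal_img: (H, W) in deg C."""
    cam = R @ points.T + t.reshape(3, 1)          # world -> camera frame
    uvw = K @ cam
    u = np.round(uvw[0] / uvw[2]).astype(int)     # pixel columns
    v = np.round(uvw[1] / uvw[2]).astype(int)     # pixel rows
    h, w = thermal_img.shape
    temps = np.full(len(points), np.nan)          # NaN marks points not seen
    ok = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temps[ok] = thermal_img[v[ok], u[ok]]
    return temps

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
print(project_temperatures(pts, K, np.eye(3), np.zeros(3), np.full((480, 640), 21.5)))
```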

Keywords: aerial thermography, data processing, drone, low-cost, point cloud

Procedia PDF Downloads 139
7196 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, the prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In the first stage, the operation of different energy systems is simulated to obtain the resulting final energy demands; these results then serve as input for a second-stage MILP optimization in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but it necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures the design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results for the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
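
A toy MILP in the spirit of the single-stage approach described above: choose modernization measures per building and year so as to maximize emission savings under a yearly budget. The data, weights, and constraint set are invented placeholders, and PuLP is assumed as the solver interface.

```python
import pulp

buildings = ["A", "B"]
years = [2025, 2026]
measures = {"insulation": (40, 8), "heat_pump": (60, 15)}  # (cost, CO2 saving), invented
budget = 70                                                # yearly budget, invented

prob = pulp.LpProblem("modernization_pathway", pulp.LpMaximize)
x = pulp.LpVariable.dicts("do", (buildings, years, list(measures)), cat="Binary")

# objective: total emission savings over the planning horizon
prob += pulp.lpSum(x[b][y][m] * measures[m][1]
                   for b in buildings for y in years for m in measures)
# limited yearly budget
for y in years:
    prob += pulp.lpSum(x[b][y][m] * measures[m][0]
                       for b in buildings for m in measures) <= budget
# each measure applied to a building at most once over the horizon
for b in buildings:
    for m in measures:
        prob += pulp.lpSum(x[b][y][m] for y in years) <= 1

prob.solve()
print([(b, y, m) for b in buildings for y in years for m in measures
       if x[b][y][m].value() == 1])
```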

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 24
7195 Microvoid Growth in the Interfaces during Aging

Authors: Jae-Yong Park, Gwancheol Seo, Young-Ho Kim

Abstract:

Microvoids, sometimes called Kirkendall voids, generally form at the interfaces between Sn-based solders and Cu and degrade the mechanical and electrical properties of the solder joints. Microvoid formation is attributed to the rapid interdiffusion between Sn and Cu and to the impurity content of the Cu. Cu electroplating from acid solutions has been widely used by the microelectronic packaging industry for both printed circuit board (PCB) and integrated circuit (IC) applications. The quality of the electroplated Cu, which can be optimized through the electroplating conditions, is critical for solder joint reliability. In this paper, the influence of the electroplating conditions on microvoid growth at the interfaces between Sn-3.0Ag-0.5Cu (SAC) solder and a Cu layer was investigated during isothermal aging. The Cu layers were electroplated by controlling the additives of the electroplating bath and the current density to induce various microvoid densities. The electroplating baths consisted of sulfate, sulfuric acid, and additives, and a current density of 5-15 mA/cm² was used for each bath. After aging at 180 °C for up to 250 h, a typical bi-layer of Cu6Sn5 and Cu3Sn intermetallic compounds (IMCs) grew gradually at the SAC/Cu interface, and the microvoid density in the Cu3Sn varied with the electroplating conditions. As the current density increased, microvoid formation was accelerated in all electroplating baths: the higher the current density, the higher the impurity content in the electroplated Cu. When polyethylene glycol (PEG) and Cl⁻ ions were mixed in an electroplating bath, microvoid formation was the highest compared to the other electroplating baths. On the other hand, the overall IMC thickness was similar in all samples irrespective of the electroplating conditions. The impurity content in the electroplated Cu influenced the microvoid growth, but the IMC growth was not affected by it. In conclusion, the electroplating conditions should be properly optimized to avoid the excessive microvoid formation that results in brittle fracture of solder joints under high strain rate loading.

Keywords: electroplating, additive, microvoid, intermetallic compound

Procedia PDF Downloads 253
7194 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting

Authors: Yiannis G. Smirlis

Abstract:

The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the under-assessment set. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models.
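
The pre-processing rests on a dominance relation between hyper-rectangles: if a cell's worst-case inputs and outputs are still no worse than another cell's best case, units falling in the first cell dominate those in the second. A schematic check (an illustration, not necessarily the paper's exact relation):

```python
def dominates(cell_a, cell_b):
    """cell = (input_intervals, output_intervals), each a list of (lo, hi).
    Cell A dominates cell B when A's largest possible inputs do not exceed
    B's smallest, and A's smallest possible outputs are at least B's largest."""
    in_a, out_a = cell_a
    in_b, out_b = cell_b
    inputs_ok = all(hi_a <= lo_b for (_, hi_a), (lo_b, _) in zip(in_a, in_b))
    outputs_ok = all(lo_a >= hi_b for (lo_a, _), (_, hi_b) in zip(out_a, out_b))
    return inputs_ok and outputs_ok

# a new unit landing in a cell dominated by an already-efficient cell can be
# classified without running any DEA model
cell_a = ([(1, 2)], [(9, 10)])   # low inputs, high outputs
cell_b = ([(3, 4)], [(5, 6)])
print(dominates(cell_a, cell_b))  # True
```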

Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction

Procedia PDF Downloads 163
7193 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
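
In a two-level fractional factorial design, each factor is coded -1/+1 and a linear model screens the main effects. A minimal sketch of fitting such a predictor for execution time follows; the design generators and the response values are invented for illustration.

```python
import itertools
import numpy as np

# 2^(5-2) fractional factorial in coded units (-1 low, +1 high):
# size, nodes, cores form a full 2^3 design; memory = size*nodes, disks = size*cores
base = np.array(list(itertools.product([-1, 1], repeat=3)), float)
X = np.column_stack([base, base[:, 0] * base[:, 1], base[:, 0] * base[:, 2]])
y = np.array([420.0, 260.0, 180.0, 140.0, 400.0, 250.0, 170.0, 130.0])  # times, invented

A = np.column_stack([np.ones(len(X)), X])       # intercept + main effects
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in zip(["mean", "size", "nodes", "cores", "memory", "disks"], coef):
    print(f"{name:7s}{c:9.2f}")                 # large |effect| = influential factor
```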

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 111
7192 A Study Employing a Computer Model and Satellite Remote Sensing to Evaluate the Temporal and Spatial Distribution of Snow in the Western Hindu Kush Region of Afghanistan

Authors: Noori Shafiqullah

Abstract:

Millions of people reside downstream of river basins that rely heavily on snowmelt originating from the Hindu Kush (HK) region, where snowmelt is a primary water source. This study aimed to evaluate snowfall and snowmelt characteristics in the HK region across altitudes ranging from 2019 m to 4533 m. To achieve this, the study employed a combination of remote sensing techniques and a Snow Model (SM) to analyze the spatial and temporal distribution of the Snow Water Equivalent (SWE). By integrating the simulated Snow-Cover Area (SCA) with data from the Moderate Resolution Imaging Spectroradiometer (MODIS), the study optimized the Precipitation Gradient (PG) for the snowfall assessment and the degree-day factor (DDF) for the snowmelt distribution. Ground-observed data from various elevations were used to calculate a temperature lapse rate of -7.0 °C km⁻¹. The DDF value was determined as 3 mm °C⁻¹ d⁻¹ for altitudes below 3000 m and 3 to 4 mm °C⁻¹ d⁻¹ for altitudes above 3000 m. Moreover, the distribution of precipitation varies with elevation, the PG being 0.001 m⁻¹ at elevations below 4000 m and 0 m⁻¹ at elevations above 4000 m. This study successfully utilized the SM to assess the SCA and SWE by incorporating the two optimized parameters. The analysis of the simulated SCA against MODIS data yielded coefficients of determination (R²) ranging from 0.95 to 0.97 for the years 2014-2015, 2015-2016, and 2016-2017. These results demonstrate that the SM is a valuable tool for managing water resources in mountainous watersheds such as the HK, where data scarcity poses a challenge.
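
The lapse-rate and degree-day relations used above are simple enough to state directly. A short sketch follows; the station temperature and elevations in the demo are invented, and the upper-band DDF is taken as 4 within the 3-to-4 range reported.

```python
def extrapolate_temp(t_station, dz_km, lapse=-7.0):
    """Air temperature dz_km kilometres above the station (deg C)."""
    return t_station + lapse * dz_km

def degree_day_melt(t_air, elevation_m, t0=0.0):
    """Daily snowmelt (mm): DDF of 3 mm/(deg C day) below 3000 m and
    4 mm/(deg C day) above, within the 3-to-4 band reported."""
    ddf = 3.0 if elevation_m < 3000 else 4.0
    return ddf * max(t_air - t0, 0.0)

t = extrapolate_temp(t_station=8.0, dz_km=1.2)   # station value invented
print(degree_day_melt(t, elevation_m=3400))
```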

Keywords: improved MODIS, experiment, snow water equivalent, snowmelt

Procedia PDF Downloads 64
7191 Hyper-Production of Lysine through Fermentation and Its Biological Evaluation on Broiler Chicks

Authors: Shagufta Gulraiz, Abu Saeed Hashmi, Muhammad Mohsin Javed

Abstract:

The lysine required for poultry feed in Pakistan is imported to fulfil the desired dietary needs. The present study was designed to produce maximum lysine from cheap substrates in order to save foreign exchange. To achieve this goal, large-scale production of lysine through fermentation was carried out in a 7.5 L stirred glass vessel fermenter with wild and mutant Brevibacterium flavum (B. flavum), using all pre-optimized conditions. The produced lysine was identified by TLC and an amino acid analyzer, and a toxicity evaluation was performed before feeding it to broiler chicks. During the biological trial, concentrated fermented broth containing 8% lysine was used in the poultry rations as the lysine source for the test birds. Fermenter-scale studies showed that the wild culture produced maximum lysine (20.8 g/L) at 250 rpm, 1.5 vvm aeration, and 6.0% inoculum under controlled pH conditions after 56 h of fermentation, while the mutant (BFENU2) gave a maximum yield of 36.3 g/L under optimized conditions after 48 h. Amino acid profiling showed 1.826% lysine in the fermented broth of wild B. flavum and 2.644% for the mutant strain (BFENU2). The toxicity evaluation showed that the produced lysine is safe for consumption by broilers, and the biological evaluation showed that it was as good as commercial lysine in terms of weight gain, feed intake, and feed conversion ratio. In conclusion, a cheap and practical bioprocess for lysine production was developed that can be exploited commercially in Pakistan to save foreign exchange.

Keywords: lysine, fermentation, broiler chicks, biological evaluation

Procedia PDF Downloads 545
7190 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of the SFS dynamics. Prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Second, the exploration extends to filter anisotropy and its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models; these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. As filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
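
Direct deconvolution models approximately invert the LES filter to reconstruct the unfiltered field before computing SFS stresses. A minimal 1D sketch using truncated van Cittert iterations against a Gaussian filter follows; the filter width, grid, and iteration count are illustrative assumptions.

```python
import numpy as np

def gaussian_filter_1d(u, width, dx):
    """Gaussian LES filter applied in Fourier space (periodic domain)."""
    k = 2 * np.pi * np.fft.fftfreq(len(u), dx)
    G = np.exp(-(width * k) ** 2 / 24.0)          # Gaussian transfer function
    return np.real(np.fft.ifft(G * np.fft.fft(u)))

def van_cittert_deconvolve(u_bar, width, dx, iters=5):
    """Approximate inverse filter via truncated van Cittert iterations:
    u* <- u* + (u_bar - G u*), i.e. u* = sum_{k<=N} (I - G)^k u_bar."""
    u_star = u_bar.copy()
    for _ in range(iters):
        u_star = u_star + (u_bar - gaussian_filter_1d(u_star, width, dx))
    return u_star

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8 * x)               # synthetic velocity field
u_bar = gaussian_filter_1d(u, width=4 * L / n, dx=L / n)
u_rec = van_cittert_deconvolve(u_bar, width=4 * L / n, dx=L / n)
print(np.max(np.abs(u_rec - u)))                  # reconstruction error
```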

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 72
7189 Optimization of the Administration of Intravenous Medication by Reduction of the Residual Volume, Taking User-Friendliness, Cost Efficiency, and Safety into Account

Authors: A. Poukens, I. Sluyts, A. Krings, J. Swartenbroekx, D. Geeroms, J. Poukens

Abstract:

Introduction and Objectives: It has been known for many years that, with the administration of intravenous medication, a rather significant part of the infusion solution planned to be administered, the residual volume (the volume that remains in the IV line and/or the infusion bag), does not reach the patient and is wasted. This can result in underdosage and a diminished therapeutic effect. Despite its important impact on the patient, the reduction of the residual volume lacks attention. An optimized and clearly stated protocol concerning the reduction of the residual volume in an IV line is necessary for each hospital. As described in my Master's thesis for the degree of Master in Hospital Pharmacy, the administration of intravenous medication can be optimized by reducing the residual volume, taking effectiveness, user-friendliness, cost efficiency, and safety into account. Material and Methods: Through a literature study and an online questionnaire sent to all Flemish hospitals and the hospitals of the Dutch province of Limburg, the current flush methods were mapped out. In laboratory research, possible flush methods aiming to reduce the residual volume were measured, and a self-developed experimental method to reduce the residual volume was added to the study. The current flush methods and the self-developed experimental method were compared based on cost efficiency, user-friendliness, and safety. Results: There is a major difference between the Flemish hospitals and the hospitals of the Dutch province of Limburg concerning the approach to and method of flushing IV lines after the administration of intravenous medication. The residual volumes were measured, and laboratory research showed that if flushing was done with at least one time the equivalent of the residual volume, 95 percent of the glucose was flushed through. Based on the comparison, flushing by means of a pre-filled syringe proved to be the most cost-efficient, user-friendly, and safest method. According to the laboratory research, the self-developed experimental method is feasible and has the advantage that the remaining fraction of the medication can be administered to the patient in unchanged concentration, without dilution; furthermore, this technique can be applied regardless of the level of the residual volume. Conclusion and Recommendations: It is recommended to revise the current infusion systems and flushing methods in most hospitals. Aside from the education of hospital staff and alignment on a uniform, substantiated protocol, an optimized and clear policy on the reduction of the residual volume is necessary for each hospital. It is recommended to flush all IV lines with at least the equivalent of the residual volume of rinsing fluid. Further laboratory and clinical research on the self-developed experimental method is needed before it can be implemented clinically in a broader setting.

Keywords: intravenous medication, infusion therapy, IV flushing, residual volume

Procedia PDF Downloads 130
7188 Bayesian Flexibility Modelling of the Conditional Autoregressive Prior in a Disease Mapping Model

Authors: Davies Obaromi, Qin Yongsong, James Ndege, Azeez Adeboye, Akinwumi Odeyemi

Abstract:

The basic model usually used in disease mapping is the Besag, York and Mollié (BYM) model, which combines spatially structured and spatially unstructured priors as random effects. The Bayesian Conditional Autoregressive (CAR) model is a disease mapping method commonly used for smoothing the relative risk of a disease, as in the BYM model. The CAR model, usually assigned as a prior to one of the spatial random effects in the BYM model, successfully uses information from adjacent sites to improve estimates for individual sites. To our knowledge, the CAR prior has some unrealistic or counter-intuitive consequences for the posterior covariance matrix of the spatial random effects. Moreover, in the conventional BYM model the spatially structured and unstructured random components cannot be identified independently, which complicates the prior definitions for the hyperparameters of the two random effects. Therefore, the main objective of this study is to construct and utilize an extended Bayesian spatial CAR model for studying tuberculosis patterns in the Eastern Cape Province of South Africa, and to compare its flexibility with that of some existing CAR models. The results revealed the flexibility and robustness of this alternative extended CAR model compared with the commonly used CAR models, using the deviance information criterion. The extended Bayesian spatial CAR model proves to be a useful and robust tool for disease modeling, and as a prior for the structured spatial random effects, because of the inclusion of an extra hyperparameter.
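
For orientation, an intrinsic CAR prior on spatial effects φ has density proportional to exp(-τ φᵀQφ/2) with precision Q = D - A built from the adjacency structure, which is how information from adjacent sites enters the estimates. A minimal construction sketch (toy adjacency invented):

```python
import numpy as np

def icar_precision(adjacency):
    """Intrinsic CAR precision Q = D - A: D is the diagonal matrix of
    neighbour counts and A the symmetric 0/1 adjacency matrix."""
    A = np.asarray(adjacency, float)
    return np.diag(A.sum(axis=1)) - A

# toy map: four areas in a chain
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
Q = icar_precision(A)
# under this prior, phi_i given its neighbours is normal with mean equal to the
# neighbour average and precision tau * n_i: information borrowed from adjacent sites
print(Q)
```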

Keywords: Besag2, CAR models, disease mapping, INLA, spatial models

Procedia PDF Downloads 273
7187 3D Simulation of Orthodontic Tooth Movement in the Presence of Horizontal Bone Loss

Authors: Azin Zargham, Gholamreza Rouhi, Allahyar Geramy

Abstract:

One of the most prevalent types of alveolar bone loss is horizontal bone loss (HBL), in which the bone height around teeth is reduced homogeneously. In the presence of HBL, the magnitudes of the forces applied during orthodontic treatment should be altered according to the degree of HBL, so that the desired tooth movement can be obtained without further bone loss. In order to investigate the appropriate orthodontic force system in the presence of HBL, a three-dimensional numerical model capable of simulating orthodontic tooth movement was developed. The main goal of this research was to evaluate the effect of different degrees of HBL on long-term orthodontic tooth movement; the effect of different force magnitudes in the presence of HBL was also studied. Five three-dimensional finite element models of a maxillary lateral incisor with 0 mm, 1.5 mm, 3 mm, 4.5 mm, and 6 mm of HBL were constructed. The long-term orthodontic tooth tipping movements were obtained over a 4-week period in an iterative process, through external remodeling of the alveolar bone based on the strains in the periodontal ligament as the mechanical stimulus for bone remodeling. In each iteration, the strains in the periodontal ligament under a 1-N tipping force were first calculated using finite element analysis; bone remodeling and the subsequent tooth movement were then computed in post-processing software using a custom-written program. Incisal edge, cervical, and apical displacements in the models with different alveolar bone heights (0, 1.5, 3, 4.5, and 6 mm of bone loss) in response to a 1-N tipping force were calculated. The maximum tooth displacement was 2.65 mm at the top of the crown of the model with 6 mm of bone loss; the minimum was 0.45 mm at the cervical level of the model with normal bone support. The tipping angles of the models in response to different force magnitudes were also calculated for different degrees of HBL. The degree of tipping movement increased as the force level increased, and this increase was more prominent in the models with smaller degrees of HBL. Using the finite element method and bone remodeling theories, this study indicated that in the presence of HBL, under the same load, long-term orthodontic tooth movement will increase. The simulation also revealed that even though tooth movement increases with increasing force, this increase is only prominent in the models with smaller degrees of HBL, and tooth models with greater degrees of HBL are less affected by the magnitude of the orthodontic force. Based on our results, the applied force magnitude must be reduced in proportion to the degree of HBL.
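
The long-term movement described above alternates a strain computation with an external remodeling update. A schematic of that loop follows, using a generic lazy-zone remodeling rule; the rule, the stand-in for the FEM strain, and all constants are illustrative assumptions, not the paper's law.

```python
def remodel_rate(stimulus, setpoint=1.0, lazy=0.2, rate=0.05):
    """Generic lazy-zone remodeling: no change inside the dead band,
    apposition above it, resorption below it."""
    hi, lo = setpoint * (1 + lazy), setpoint * (1 - lazy)
    if stimulus > hi:
        return rate * (stimulus - hi)
    if stimulus < lo:
        return rate * (stimulus - lo)
    return 0.0

# iterate: (FEM strain) -> remodeling -> updated support -> more movement
position, support = 0.0, 1.0
for week in range(4):
    pdl_strain = 1.3 / support       # stand-in for the FEM solution under 1 N
    support += remodel_rate(pdl_strain)
    position += 0.1 * pdl_strain     # displacement grows as support weakens
print(position, support)
```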

Keywords: bone remodeling, finite element method, horizontal bone loss, orthodontic tooth movement

Procedia PDF Downloads 339
7186 Testing for Endogeneity of Foreign Direct Investment: Implications for Economic Policy

Authors: Liwiusz Wojciechowski

Abstract:

Research background: The current knowledge does not give a clear answer to the question of the impact of FDI on productivity. The results of empirical studies are still inconclusive, no matter how extensive and diverse in terms of research approaches or groups of countries analyzed. One should also take into account the possibility that FDI and productivity are linked and that there is a bidirectional relationship between them. This issue is particularly important because, on the one hand, FDI can contribute to changes in productivity in the host country, but on the other hand, its level and dynamics may determine whether FDI should be undertaken in a given country. A two-way relationship between the presence of foreign capital and productivity in the host country should therefore be assumed, taking into consideration the endogenous nature of FDI. Purpose of the article: The overall objective of this study is to determine the causality between foreign direct investment and total factor productivity in the host country in terms of the different relative absorptive capacities across countries. In the classic sense, causality among variables is not always obvious and requires testing, which would facilitate the proper specification of FDI models. The aim of this article is to study the endogeneity of selected macroeconomic variables commonly used in FDI models in the case of the Visegrad countries, the main recipients of FDI in CEE. The findings may be helpful in determining the structure of the actual relationship between variables, in appropriate model estimation, in forecasting, and in economic policymaking. Methodology/methods: Panel and time-series data techniques, including the GMM estimator, VEC models, and causality tests, were utilized in this study. Findings & Value added: The obtained results confirm the hypothesis of bidirectional causality between FDI and total factor productivity. Although the results differ among countries and levels of data aggregation, the implications may be useful for policymakers when designing policies to attract foreign capital.
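
A common first pass at this endogeneity question is a pairwise Granger-causality test run in both directions. A sketch using statsmodels follows; the series are synthetic stand-ins and the lag order is an illustrative assumption.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# synthetic stand-ins; in practice: FDI inflows and TFP for one Visegrad country
rng = np.random.default_rng(0)
tfp = np.cumsum(rng.normal(size=120))
fdi = 0.5 * np.roll(tfp, 1) + rng.normal(size=120)   # FDI reacts to lagged TFP

df = pd.DataFrame({"tfp": tfp, "fdi": fdi})

# H0: the series in the SECOND column does not Granger-cause the first
print("fdi -> tfp"); grangercausalitytests(df[["tfp", "fdi"]], maxlag=4)
print("tfp -> fdi"); grangercausalitytests(df[["fdi", "tfp"]], maxlag=4)
# rejecting H0 in both directions supports bidirectional causality, i.e.
# FDI should be treated as endogenous when specifying the model
```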

Keywords: endogeneity, foreign direct investment, multi-equation models, total factor productivity

Procedia PDF Downloads 195
7185 Estimating the Probability of Winning the Best Actor/Actress Award Conditional on the Best Picture Nomination with Bayesian Hierarchical Models

Authors: Svetlana K. Eden

Abstract:

Movies and TV shows have long been part of modern culture. We all have our preferred genres, stories, actors, and actresses. However, can we objectively discern good acting from bad? As laymen, we are probably not objective, but what about the Oscar academy members? Are their votes based on objective measures? Academy members are probably also biased, due to many factors including their professional affiliations or advertisement exposure. Heavily advertised films bring more publicity to their cast and are likely to have bigger budgets. Because a bigger budget may also help earn a Best Picture (BP) nomination, we hypothesize that best actor/actress (BA) nominees from BP-nominated movies have higher chances of winning the award than BA nominees from non-BP-nominated films. To test this hypothesis, three Bayesian hierarchical models are proposed, and their performance is evaluated. The results from all three models largely support our hypothesis. Depending on the proportion of BP nominations among BA nominees, the odds ratios (estimated over expected) of winning the BA award conditional on a BP nomination vary from 2.8 [0.8, 7.0] to 4.3 [2.0, 15.8] for actors and from 1.5 [0.0, 12.2] to 5.4 [2.7, 14.2] for actresses.
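
The headline quantity is an estimated-over-expected odds ratio of winning conditional on a BP nomination. A simple empirical analogue (not the paper's hierarchical model) can be computed as below; the nominee data are invented, so the ratio hovers near 1 rather than near the reported estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 50

bp = rng.random((n_years, 5)) < 0.4     # which of 5 BA nominees come from BP films
winner = np.zeros_like(bp)
winner[np.arange(n_years), rng.integers(0, 5, n_years)] = True  # one winner a year

p_win_given_bp = winner[bp].mean()      # estimated P(win | BP nomination)
p_expected = 1 / 5                      # expected if BP nomination carried no edge

def odds(p):
    return p / (1 - p)

print("estimated-over-expected odds ratio:", odds(p_win_given_bp) / odds(p_expected))
```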

Keywords: Oscar, best picture, best actor/actress, bias

Procedia PDF Downloads 216
7184 The Confounding Role of Graft-versus-Host Disease in Animal Models of Cancer Immunotherapy: A Systematic Review

Authors: Hami Ashraf, Mohammad Heydarnejad

Abstract:

Introduction: The landscape of cancer treatment has been revolutionized by immunotherapy, offering novel therapeutic avenues for diverse cancer types. Animal models play a pivotal role in the development and elucidation of these therapeutic modalities. Nevertheless, the manifestation of Graft-versus-Host Disease (GVHD) in such models poses significant challenges, muddling the interpretation of experimental data within the ambit of cancer immunotherapy. This study is dedicated to scrutinizing the role of GVHD as a confounding factor in animal models used for cancer immunotherapy, alongside proposing viable strategies to mitigate this complication. Method: Employing a systematic review framework, this study undertakes a comprehensive literature survey including academic journals in PubMed, Embase, and Web of Science databases and conference proceedings to collate pertinent research that delves into the impact of GVHD on animal models in cancer immunotherapy. The acquired studies undergo rigorous analysis and synthesis, aiming to assess the influence of GVHD on experimental results while identifying strategies to alleviate its confounding effects. Results: Findings indicate that GVHD incidence significantly skews the reliability and applicability of experimental outcomes, occasionally leading to erroneous interpretations. The literature surveyed also sheds light on various methodologies under exploration to counteract the GVHD dilemma, thereby bolstering the experimental integrity in this domain. Conclusion: GVHD's presence critically affects both the interpretation and validity of experimental findings, underscoring the imperative for strategies to curtail its confounding impacts. Current research endeavors are oriented towards devising solutions to this issue, aiming to augment the dependability and pertinence of experimental results. It is incumbent upon researchers to diligently consider and adjust for GVHD's effects, thereby enhancing the translational potential of animal model findings to clinical applications and propelling progress in the arena of cancer immunotherapy.

Keywords: graft-versus-host disease, cancer immunotherapy, animal models, preclinical model

Procedia PDF Downloads 49
7183 Computational Fluid Dynamics Modelling of the Improved Airflow on a Ballistic Grille Using a Porous Medium Approach

Authors: Mapula Mothomogolo, Anria Clarke

Abstract:

Ballistic grilles are adopted on military vehicles to mitigate the vulnerability of the radiator. Their design must address conflicting requirements: shielding the surface area of the radiator from incoming projectile threats while providing sufficient airflow through the radiator to yield adequate heat rejection. These conflicting requirements result in a unique and challenging design problem. In this paper, the airflow through a ballistic grille is investigated using a computational modelling approach. A comparative study was conducted between a standard grille and a ballistic grille of a military vehicle, and the results were used as a benchmark for optimizing the ballistic grille, with the pressure drop selected as the optimization parameter. The grilles were modelled as a porous medium to account for the pressure drop in the porous region. The effects of the porous zone were accounted for in the source term of the momentum (Navier-Stokes) equations, which defines the pressure drop in the porous region as a function of velocity. A pressure-drop-curve approach was used to determine the Darcy and inertial resistance coefficients of the source terms, and these empirically derived coefficients were used as simulation inputs for a more accurate pressure drop prediction in the porous region. Additionally, the ballistic grille was optimized using an adjoint solver (the shape optimization module in Ansys Fluent), reducing the pressure drop through the ballistic grille by 30%. Based on the simulation results, the optimized ballistic grille geometry still needs to be tested experimentally to validate the numerical simulation data.
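
The porous-medium coefficients come from fitting the measured pressure drop against velocity, Δp/L = (μ/α)v + C₂(ρ/2)v², the Darcy-Forchheimer form used by Fluent's porous zone. A sketch of extracting both coefficients from a pressure-drop curve follows, with invented data points.

```python
import numpy as np

v = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # face velocity, m/s (invented)
dp = np.array([18.0, 52.0, 104.0, 170.0, 255.0])  # pressure drop, Pa (invented)
L = 0.05                                          # porous-zone thickness, m
mu, rho = 1.8e-5, 1.2                             # air viscosity (Pa s), density (kg/m3)

# fit dp/L = a*v + b*v^2 with zero intercept (Darcy-Forchheimer form)
A = np.column_stack([v, v ** 2])
(a, b), *_ = np.linalg.lstsq(A, dp / L, rcond=None)

inv_alpha = a / mu            # viscous (Darcy) resistance 1/alpha, 1/m^2
C2 = 2.0 * b / rho            # inertial resistance coefficient, 1/m
print(f"1/alpha = {inv_alpha:.3e} 1/m^2,  C2 = {C2:.2f} 1/m")
```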

Keywords: ballistic grille, darcy coefficient, optimization, porous medium

Procedia PDF Downloads 27
7182 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation (LFR) Models

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. Exhaustive analysis of a model is impossible because of the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model shows whether or not the specifications are satisfied. In order to perform fast, comprehensive, and effective analyses, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they can describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as the varying parameters; they were built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest among aeronautical companies, which see a promising future for it in modeling, and particularly in the design and certification of control laws. In this research paper, we focus on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest flight dynamics certification level; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. These data were used to develop a linear model of the airplane in its longitudinal and lateral motions and, further, to create the LFR models for 12 XCG/weight conditions, and thus for the whole flight envelope, using a friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based on a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented as graphs; they thus offer good readability and are easily exploitable. The weakness of this method lies in a relatively long calculation time, about four hours for the entire flight envelope.
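
The flight-envelope mesh technique mentioned above amounts to gridding the varying parameters and checking open-loop stability at each point. A toy sketch follows; the parametrized state matrix and grid values are invented placeholders, not Cessna Citation X data.

```python
import numpy as np

def stable(A):
    """Open-loop stability: every eigenvalue strictly in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def A_of(V, h):
    """Toy 2-state longitudinal matrix parametrized by speed V and altitude h."""
    return np.array([[-0.02 * V / 100.0, 0.1],
                     [-1.0, -0.5 - h / 40000.0]])

for V in np.linspace(120, 260, 5):         # airspeed grid (invented units)
    for h in np.linspace(0, 12000, 4):     # altitude grid
        print(f"V={V:5.0f}  h={h:7.0f}  stable={stable(A_of(V, h))}")
```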

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 348
7181 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, from the Queensland Government Data. Modeling was performed by a multilayer perceptron neural network optimized with a genetic algorithm (GA-MLP), considering ReLU(x) as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the optimization of the 5-steps-ahead prediction, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with an ARIMA forecasting model, presenting better results on all performance criteria and validating the potential of this algorithm.
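
A compact sketch of the GA-over-MLP idea follows, evolving just two hyperparameters (learning rate and neuron count) on a synthetic windowed series; the paper's GA also covers layers, activations, and window-width feature selection, and all settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
sig = np.sin(0.1 * np.arange(2000)) + 0.1 * rng.normal(size=2000)  # toy "wave" series
W = 24                                             # window width, illustrative
X = np.lib.stride_tricks.sliding_window_view(sig[:-1], W)
y = sig[W:]
split = int(0.8 * len(X))

def fitness(genome):
    lr, neurons = genome
    model = MLPRegressor(hidden_layer_sizes=(int(neurons),), activation="relu",
                         learning_rate_init=float(lr), max_iter=200, random_state=0)
    model.fit(X[:split], y[:split])
    return -mean_squared_error(y[split:], model.predict(X[split:]))

# tiny GA: truncation selection plus Gaussian mutation of the two genes
pop = [(10 ** rng.uniform(-4, -1), int(rng.integers(4, 64))) for _ in range(6)]
for gen in range(4):
    parents = sorted(pop, key=fitness, reverse=True)[:3]
    children = [(lr * 10 ** rng.normal(0, 0.2), max(2, int(n + rng.normal(0, 4))))
                for lr, n in parents]
    pop = parents + children
print("best (learning rate, neurons):", max(pop, key=fitness))
```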

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 103
7180 In- and Out-of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries in relation to the US dollar. Using a series of unrestricted Self-Exciting Threshold Autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US CPI. Out-of-sample forecast accuracy is evaluated through mean absolute error measures computed on 253 months of rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR), and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have rather poor relative forecasting performance, especially when compared to the alternative non-linear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner-unit-root behaviour in five of the sixteen countries analyzed.
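
A two-regime SETAR(1) can be fitted by grid-searching the threshold and running OLS within each regime. A minimal sketch on a simulated series follows; the data-generating process and trimming fraction are illustrative assumptions.

```python
import numpy as np

def fit_setar(y, trim=0.15):
    """Two-regime SETAR(1), threshold variable y_{t-1}: grid-search the
    threshold c, fitting a separate AR(1) by OLS in each regime, and keep
    the c with the smallest total sum of squared residuals."""
    x, z = y[:-1], y[1:]
    best = (np.inf, None, None, None)
    for c in np.quantile(x, np.linspace(trim, 1 - trim, 50)):
        ssr, params = 0.0, []
        for mask in (x <= c, x > c):
            A = np.column_stack([np.ones(mask.sum()), x[mask]])
            coef, *_ = np.linalg.lstsq(A, z[mask], rcond=None)
            ssr += np.sum((z[mask] - A @ coef) ** 2)
            params.append(coef)
        if ssr < best[0]:
            best = (ssr, c, params[0], params[1])
    return best

rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(1, 500):                       # persistence switches at zero
    b = 0.9 if y[t - 1] <= 0.0 else 0.4
    y[t] = b * y[t - 1] + rng.normal(0, 0.3)
ssr, c, (a_lo, b_lo), (a_hi, b_hi) = fit_setar(y)
print(c, b_lo, b_hi)                          # half-life inside a regime: ln(0.5)/ln(b)
```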

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 275
7179 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim

Abstract:

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumptions of a constant interest rate and constant volatility. In fact, existing alternative models were found to focus more on demonstrating pricing techniques than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants that comprises a stochastic interest rate and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and to reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of the analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants, which facilitates the practicality of the proposed formulas for comparison purposes and further empirical study.
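
For intuition about the hybrid dynamics, the sketch below simulates Euler-Maruyama paths of the Heston variance and CIR short rate alongside the asset, and prices a call-like warrant payoff by Monte Carlo. All parameter values are illustrative, the drivers are left uncorrelated for brevity, and reflection keeps v and r nonnegative; this is a numerical companion, not the paper's analytical formula.

```python
import numpy as np

def simulate_heston_cir(S0=100.0, v0=0.04, r0=0.03, T=1.0, n=252, n_paths=50000,
                        kappa_v=2.0, theta_v=0.04, xi_v=0.3,   # Heston variance
                        kappa_r=1.0, theta_r=0.03, xi_r=0.1,   # CIR short rate
                        seed=0):
    """Euler-Maruyama paths of
       dS = r S dt + sqrt(v) S dW1,
       dv = kappa_v (theta_v - v) dt + xi_v sqrt(v) dW2,
       dr = kappa_r (theta_r - r) dt + xi_r sqrt(r) dW3,
    with independent drivers (the Heston model correlates dW1, dW2; omitted here)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    S = np.full(n_paths, S0); v = np.full(n_paths, v0); r = np.full(n_paths, r0)
    int_r = np.zeros(n_paths)                  # integral of r for discounting
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), (3, n_paths))
        int_r += r * dt
        S *= np.exp((r - 0.5 * v) * dt + np.sqrt(v) * dW[0])
        v = np.abs(v + kappa_v * (theta_v - v) * dt + xi_v * np.sqrt(v) * dW[1])
        r = np.abs(r + kappa_r * (theta_r - r) * dt + xi_r * np.sqrt(r) * dW[2])
    return S, np.exp(-int_r)

S_T, discount = simulate_heston_cir()
print(np.mean(discount * np.maximum(S_T - 100.0, 0.0)))   # call-like warrant payoff
```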

Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic

Procedia PDF Downloads 126
7178 Impact of the Hayne Royal Commission on the Operating Model of Australian Financial Advice Firms

Authors: Mohammad Abu-Taleb

Abstract:

The final report of the Royal Commission into misconduct in the Australian financial services industry, released in February 2019, has had a significant impact on the financial advice industry. The recommendations in the Commissioner's final report include changes to ongoing fee arrangements, a new disciplinary system for financial advisers, and mandatory reporting of compliance concerns. This thesis aims to explore the impact of the Royal Commission's recommendations on the operating model of financial advice firms in terms of advice products, processes, delivery models, and customer segments. This research also seeks to investigate whether the Royal Commission's outcome has accelerated the use of enhanced technology solutions within the operating models of financial advice firms, and to identify the key challenges confronting these firms whilst implementing the Commissioner's recommendations across their operating models. In order to achieve these objectives, a qualitative research design was adopted, with semi-structured in-depth interviews with 24 financial advisers and managers who are engaged in the operation of financial advice services. The study used a thematic analysis approach to interpret the qualitative data collected from the interviews. The findings reveal that customer-centric operating models will become more prominent across the financial advice industry in response to the Commissioner's final report, and that the Royal Commission's outcome has accelerated the use of advice technology solutions within the operating models of financial advice firms. In addition, financial advice firms have started using simpler and more automated web-based advice services, which enable financial advisers to provide simple advice at a greater scale and to accelerate the use of robo-advice models and digital delivery to mass customers in the long term. Furthermore, the study identifies process and technology changes, along with technical and interpersonal skills development, as the key challenges encountered by financial advice firms whilst implementing the Commissioner's recommendations across their operating models.

Keywords: Hayne Royal Commission, financial planning advice, operating model, advice products, advice processes, delivery models, customer segments, digital advice solutions

Procedia PDF Downloads 84
7177 Spectroscopic Study of Tb³⁺ Doped Calcium Aluminozincate Phosphor for Display and Solid-State Lighting Applications

Authors: Sumandeep Kaur, Allam Srinivasa Rao, Mula Jayasimhadri

Abstract:

In recent years, rare earth (RE) ion-doped inorganic luminescent materials have attracted great attention due to their excellent physical and chemical properties. These materials offer high thermal and chemical stability and exhibit good luminescence properties due to the presence of RE ions, with their luminescence attributed to the intra-configurational f-f transitions of the RE ions. A series of Tb³⁺-doped calcium aluminozincate phosphors has been synthesized via the sol-gel method. Structural and morphological studies were carried out by recording X-ray diffraction (XRD) patterns and SEM images, and the luminescence spectra were recorded for a comprehensive study of the luminescence properties. The XRD profile reveals a single-phase orthorhombic crystal structure with an average crystallite size of 65 nm, as calculated using the Debye-Scherrer equation. The SEM image exhibits a completely random, irregular morphology of micron-size particles. The luminescence was optimized by varying the dopant Tb³⁺ concentration within the range from 0.5 to 2.0 mol%. The as-synthesized phosphors exhibit intense emission at 544 nm under excitation at 478 nm, and the optimized Tb³⁺ concentration was found to be 1.0 mol% in the present host lattice. The decay curves show bi-exponential fitting for the as-synthesized phosphor. The colorimetric studies show green emission, with CIE coordinates (0.334, 0.647) lying in the green region for the optimized Tb³⁺ concentration. These results reveal the potential utility of Tb³⁺-doped calcium aluminozincate phosphors for display and solid-state lighting devices.
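
The crystallite size quoted above follows from the Debye-Scherrer relation D = Kλ/(β cos θ). A quick sketch follows; the peak position and width are invented for illustration, and the shape factor K = 0.9 is assumed.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta the Bragg angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation; peak width and position invented
print(f"D = {scherrer_size(0.15406, 0.13, 31.8):.1f} nm")
```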

Keywords: concentration quenching, phosphor, photoluminescence, XRD

Procedia PDF Downloads 148
7176 Cognitive Models of Future in Political Texts

Authors: Solopova Olga

Abstract:

The present paper briefly recalls the theoretical preconditions for investigating cognitive-discursive models of the future in political discourse. The author reviews the theories and methods used for strengthening a future focus in this discourse, working out two main tools: a model of the future and a metaphorical scenario. The paper examines the implications of metaphorical analogies for modeling the future in the mass media. It argues that metaphor is not merely a rhetorical ornament in the political discourse of media regulation but a conceptual model that legislates and regulates our understanding of the future.

Keywords: cognitive approach, future research, political discourse, model, scenario, metaphor

Procedia PDF Downloads 388
7175 Applications of Nonlinear Models to Measure and Predict Thermophysical Properties of Binary Liquid Mixtures: 1,4-Dioxane with Bromobenzene at Various Temperatures

Authors: R. Ramesh, M. Y. M. Yunus, K. Ramesh

Abstract:

This research reports the viscosities, η, and densities, ρ, of 1,4-dioxane with bromobenzene at different mole fractions and various temperatures under atmospheric pressure. From the experiments, the excess molar volumes, VE, and deviations in viscosities, Δη, of the mixtures at infinite dilution have been obtained. The measured systems exhibited positive values of VE and negative values of Δη. The binary mixture 1,4-dioxane + bromobenzene shows positive VE and negative Δη with increasing temperature. These outcomes clearly indicate that weak interactions are present in the mixture, mainly because of the number and positions of the methyl groups in these aromatic hydrocarbons. The measured data were fitted to nonlinear models to derive the binary coefficients. Standard deviations between the fitted outcomes and the calculated data help characterize the mixing behavior of the binary mixtures. We can conclude that, in our case, the data correlate very well with the values given by the corresponding models. The molecular interactions existing between the components and comparisons of the liquid mixtures are also discussed.
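
Excess properties of binary mixtures are conventionally fitted with a Redlich-Kister expansion, VE = x₁x₂ Σₖ Aₖ(x₁ - x₂)ᵏ. The sketch below assumes that form, since the abstract does not name its nonlinear model, and uses invented data points.

```python
import numpy as np

def redlich_kister_fit(x1, VE, order=3):
    """Least-squares coefficients A_k of VE = x1*(1-x1) * sum_k A_k*(2*x1-1)**k,
    plus the standard deviation of the fit."""
    x2 = 1.0 - x1
    basis = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order)])
    A, *_ = np.linalg.lstsq(basis, VE, rcond=None)
    resid = VE - basis @ A
    sigma = np.sqrt(np.sum(resid ** 2) / (len(VE) - order))
    return A, sigma

x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])        # mole fractions of 1,4-dioxane
VE = np.array([0.05, 0.11, 0.13, 0.10, 0.04])   # cm^3/mol, invented data
A, sigma = redlich_kister_fit(x1, VE)
print("A_k =", A, " sigma =", sigma)
```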

Keywords: 1,4-dioxane, bromobenzene, density, excess molar volume

Procedia PDF Downloads 408
7174 Bianchi Type-I Viscous Fluid Cosmological Models with Stiff Matter and Time Dependent Λ-Term

Authors: Rajendra Kumar Dubey

Abstract:

Einstein’s field equations with a variable cosmological term Λ are considered in the presence of a viscous fluid for the Bianchi type-I space-time. Exact solutions of Einstein’s field equations are obtained by assuming the cosmological term Λ to be proportional to a function of the scale factor R with a constant parameter m. We observe that the shear viscosity is responsible for a faster removal of the initial anisotropy in the universe. The physical significance of the cosmological models is also discussed.
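
For reference, the setting described above combines the Bianchi type-I line element with Einstein's field equations carrying a time-dependent Λ-term, a viscous fluid, and a stiff-matter equation of state. A generic statement follows (sign conventions vary, and the paper's specific ansatz for Λ is not reproduced here):

```latex
% Bianchi type-I line element with directional scale factors A(t), B(t), C(t)
ds^2 = -dt^2 + A^2(t)\,dx^2 + B^2(t)\,dy^2 + C^2(t)\,dz^2

% Field equations with a time-dependent cosmological term
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda(t)\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}

% Viscous fluid: shear viscosity \eta and bulk viscosity \xi modify the
% perfect-fluid stress tensor, with \sigma_{\mu\nu} the shear tensor
T_{\mu\nu} = (\rho + \bar{p})\, u_\mu u_\nu + \bar{p}\, g_{\mu\nu}
  - 2\eta\, \sigma_{\mu\nu},
\qquad \bar{p} = p - \xi\, u^{\mu}{}_{;\mu}

% Stiff matter equation of state
p = \rho
```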

Keywords: Bianchi type-I cosmological model, viscous fluid, cosmological constant Λ

Procedia PDF Downloads 525
7173 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, the control information enters the adjustment system at a single observation scale and precision, which makes it impossible to screen the control information and to assign reasonable, effective weights, reducing the convergence and reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter the control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy to optimize the weight selection of control data that is less relevant to the adjustment system is realized. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical results. This research is expected to move beyond the cliché of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data is solved both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 282
7172 Attribute Selection for Preference Functions in Engineering Design

Authors: Ali E. Abbas

Abstract:

Industrial Engineering is a broad multidisciplinary field with intersections and applications in numerous areas. When designing a product, it is important to determine the appropriate attributes of value and the preference function for which the product is optimized. This paper provides some guidelines on appropriate selection of attributes for preference and value functions for engineering design.

Keywords: decision analysis, industrial engineering, direct vs. indirect values, engineering management

Procedia PDF Downloads 299
7171 A Comparative Analysis of Geometric and Exponential Laws in Modelling the Distribution of the Duration of Daily Precipitation

Authors: Mounia El Hafyani, Khalid El Himdi

Abstract:

Precipitation is one of the key variables in water resource planning, and modeling wet and dry spell durations is a crucial issue in engineering hydrology. The objective of this study is to model and analyze the distribution of wet and dry durations. For this purpose, daily rainfall data from 1967 to 2017 recorded at the station of the Moroccan city of Kenitra are used. Three models are implemented for the distribution of wet and dry durations, namely the first-order Markov chain, the second-order Markov chain, and the truncated negative binomial law. The adherence of the data to the proposed models is evaluated using Chi-square and Kolmogorov-Smirnov tests, and the Akaike information criterion is applied to select the most effective model distribution. We go further and study the law of the number of wet and dry days among k consecutive days, computed through an algorithm that we implemented based on conditional laws. We complete our work by comparing the observed moments of the numbers of wet/dry days among k consecutive days with the calculated moments under the three estimated models. The study shows the effectiveness of our approach in modeling the wet and dry durations of daily precipitation.
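
Under a first-order Markov chain, wet-spell lengths follow a geometric law with parameter 1 - p₁₁, where p₁₁ = P(wet | wet). A sketch estimating the transition matrix and the implied spell-length distribution from a daily sequence follows; the rainfall series and wet-day threshold are invented stand-ins for the Kenitra data.

```python
import numpy as np

rng = np.random.default_rng(7)
rain = rng.gamma(0.4, 5.0, size=18263)         # stand-in daily rainfall, mm
wet = (rain >= 0.1).astype(int)                # wet day if rain >= 0.1 mm (assumed)

# first-order transition counts and probabilities p_ij = P(state j | state i)
counts = np.zeros((2, 2))
for a, b in zip(wet[:-1], wet[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

p11 = P[1, 1]                                  # P(wet | wet)
k = np.arange(1, 11)
spell_pmf = (1 - p11) * p11 ** (k - 1)         # geometric law of wet-spell length
print("transition matrix:\n", P)
print("mean wet-spell length:", 1 / (1 - p11))
```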

Keywords: Markov chain, rainfall, truncated negative binomial law, wet and dry durations

Procedia PDF Downloads 120