Search results for: Process models
19277 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) adopting two interference models: the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR). The main issue of the problem is to compute schedules with the minimum number of timeslots, that is, to compute the minimum latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model, and introduce a constant factor approximation algorithm. To the best of our knowledge, this is the first result for the data collection problem with the bounded-sized message model in both interference models.
Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks
Procedia PDF Downloads 221
19276 Switched System Diagnosis Based on Intelligent State Filtering with Unknown Models
Authors: Nada Slimane, Foued Theljani, Faouzi Bouani
Abstract:
The paper addresses the problem of fault diagnosis for systems operating in several modes (normal or faulty) based on state assessment. We use, for this purpose, a methodology consisting of three main processes: 1) sequential data clustering, 2) linear model regression and 3) state filtering. Typically, the Kalman Filter (KF) is an algorithm that provides estimation of unknown states using a sequence of I/O measurements. Although it is an efficient technique for state estimation, it presents two main weaknesses. First, it merely predicts states without being able to isolate/classify them according to their different operating modes, whether normal or faulty. To deal with this dilemma, the KF is endowed with an extra clustering step based fully on a sequential version of the k-means algorithm. Second, to provide state estimation, the KF requires state space models, which can be unknown. A linear regularized regression is used to identify the required models. To prove its effectiveness, the proposed approach is assessed on a simulated benchmark.
Keywords: clustering, diagnosis, Kalman Filtering, k-means, regularized regression
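As a rough illustration of the filtering-plus-clustering idea described above, the sketch below pairs a standard Kalman predict/update step with a sequential (online) k-means assignment of the innovation to an operating mode. All matrices, noise levels, and the two mode centers are assumptions for the example, not the authors' benchmark settings.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter; returns the innovation too."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    innov = y - C @ x_pred
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, innov

def assign_mode(innov, centers, counts):
    """Sequential (online) k-means step: assign the innovation to the nearest
    mode center and move that center slightly toward the new sample."""
    d = [np.linalg.norm(innov - c) for c in centers]
    k = int(np.argmin(d))
    counts[k] += 1
    centers[k] += (innov - centers[k]) / counts[k]
    return k

# Illustrative scalar system (all values are assumptions, not the benchmark).
A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
x, P = np.zeros(1), np.eye(1)
centers = [np.zeros(1), np.ones(1)]   # "normal" vs. "faulty" residual centers
counts = [1, 1]
for y in np.random.normal(0.0, 0.3, size=50):
    x, P, innov = kalman_step(x, P, np.atleast_1d(y), A, C, Q, R)
    mode = assign_mode(innov, centers, counts)
```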
Procedia PDF Downloads 182
19275 Towards an Understanding of Breaking and Coalescence Process in Bitumen Emulsions
Authors: Abdullah Khan, Per Redelius, Nicole Kringos
Abstract:
The breaking and coalescence process in bitumen emulsions strongly influences the performance of cold mix asphalt (CMA), and this phase separation process is affected by the physico-chemical changes happening at the bitumen/water interface. In this paper, coalescence experiments of two bitumen droplets in an emulsion environment have been carried out by a newly developed test procedure. In this study, different types of emulsifiers were selected to understand the coalescence process with respect to changes in the water phase surface tension due to the addition of different surfactants and other additives such as salts. The research showed that the relaxation kinetics of bitumen droplets varied with the type of emulsifier and its concentration, as well as with the presence or absence of salt in the water phase. Moreover, the kinetics of the coalescence process was also investigated under temperature variation.
Keywords: bitumen emulsions, breaking and coalescence, cold mix asphalt, emulsifiers, relaxation, salts
Procedia PDF Downloads 338
19274 Compare Hot Forming and Cold Forming in Rolling Process
Authors: Ali Moarrefzadeh
Abstract:
In metalworking, rolling is a metal forming process in which metal stock is passed through a pair of rolls. Rolling is classified according to the temperature of the metal rolled. If the temperature of the metal is above its recrystallization temperature, then the process is termed as hot rolling. If the temperature of the metal is below its recrystallization temperature, the process is termed as cold rolling. In terms of usage, hot rolling processes more tonnage than any other manufacturing process, and cold rolling processes the most tonnage out of all cold working processes. This article describes the use of advanced tubing inspection NDT methods for boiler and heat exchanger equipment in the petrochemical industry to supplement major turnaround inspections. The methods presented include remote field eddy current, magnetic flux leakage, internal rotary inspection system and eddy current.
Keywords: hot forming, cold forming, metal, rolling, simulation
Procedia PDF Downloads 529
19273 Application Methodology for the Generation of 3D Thermal Models Using UAV Photogrammetry and Dual Sensors for Mining/Industrial Facilities Inspection
Authors: Javier Sedano-Cibrián, Julio Manuel de Luis-Ruiz, Rubén Pérez-Álvarez, Raúl Pereda-García, Beatriz Malagón-Picón
Abstract:
Structural inspection activities are necessary to ensure the correct functioning of infrastructures. Unmanned Aerial Vehicle (UAV) techniques have become more popular than traditional techniques. Specifically, UAV photogrammetry allows time and cost savings. The development of this technology has permitted the use of low-cost thermal sensors in UAVs. The representation of 3D thermal models with this type of equipment is in continuous evolution. The direct processing of thermal images usually leads to errors and inaccurate results. A methodology is proposed for the generation of 3D thermal models using dual sensors, which involves the application of visible Red-Green-Blue (RGB) and thermal images in parallel. Hence, the RGB images are used as the basis for the generation of the model geometry, and the thermal images are the source of the surface temperature information that is projected onto the model. The mining/industrial facility representations obtained can be used for inspection activities.
Keywords: aerial thermography, data processing, drone, low-cost, point cloud
Procedia PDF Downloads 143
19272 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting
Authors: Yiannis G. Smirlis
Abstract:
The classification and the prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large scale problems or when new units frequently enter the under-assessment set. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by Interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models.
Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction
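As an informal sketch of how such a grid pre-processor could work (an illustrative formulation, not the author's exact Interval DEA model), the snippet below maps units to hyper-rectangle cells and applies a simple dominance test over cells: a unit falling in a cell dominated by another cell cannot be efficient, so it can be classified without running any DEA model. The interval edges and unit data are assumptions.

```python
import numpy as np

def cell_index(values, edges):
    """Map a unit's inputs/outputs to grid cell indices given interval edges."""
    return tuple(int(np.searchsorted(e, v, side="right")) - 1
                 for v, e in zip(values, edges))

def dominates(cell_a, cell_b, n_inputs):
    """Cell A dominates cell B if A has lower-or-equal input intervals and
    higher-or-equal output intervals, and the two cells differ."""
    ins = all(a <= b for a, b in zip(cell_a[:n_inputs], cell_b[:n_inputs]))
    outs = all(a >= b for a, b in zip(cell_a[n_inputs:], cell_b[n_inputs:]))
    return ins and outs and cell_a != cell_b

# Two inputs, one output; the edges defining the interval segmentation are assumed.
edges = [np.array([0, 5, 10, 15]), np.array([0, 2, 4, 6]), np.array([0, 50, 100, 150])]
units = {"U1": [3, 1, 120], "U2": [12, 5, 60]}
cells = {name: cell_index(vals, edges) for name, vals in units.items()}
# U1 sits in a cell that dominates U2's cell, so U2 cannot be efficient.
print(dominates(cells["U1"], cells["U2"], n_inputs=2))
```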
Procedia PDF Downloads 164
19271 A Conceptual Design of Freeze Desalination Using Low Cost Refrigeration
Authors: Parul Sahu
Abstract:
In recent years, seawater desalination has emerged as a potential resource to circumvent water scarcity, especially in coastal regions. Among the various methods, thermal evaporation or distillation and membrane operations like Reverse Osmosis (RO) have been exploited at commercial scale. However, the energy cost and maintenance expenses associated with these processes remain high. In this context, Freeze Desalination (FD), subject to the availability of low cost refrigeration, offers an exciting alternative. Liquefied Natural Gas (LNG) regasification terminals provide an opportunity to utilize the refrigeration available with regasification of LNG. This work presents the conceptualization and development of a process scheme integrating ice and hydrate based FD with the LNG regasification process. This integration overcomes the high energy demand associated with FD processes by utilizing the refrigeration associated with LNG regasification. An optimal process scheme was obtained by performing process simulation using the ASPEN PLUS simulator. The results indicated that the proposed process requires only 1 kWh/m³ of energy with the utilization of maximum refrigeration. In addition, a sensitivity analysis was also performed to study the effect of various process parameters on water recovery and energy consumption for the proposed process. The results show that the energy consumption decreases by 30% with an increase in water recovery from 30% to 60%. However, due to operational limitations associated with ice and hydrate handling in seawater, the water recovery cannot be maximized but optimized. The proposed process can potentially be used to desalinate seawater in integration with an LNG regasification terminal.
Keywords: freeze desalination, liquefied natural gas regasification, process simulation, refrigeration
Procedia PDF Downloads 131
19270 Optimizing Machine Learning Through Python Based Image Processing Techniques
Authors: Srinidhi. A, Naveed Ahmed, Twinkle Hareendran, Vriksha Prakash
Abstract:
This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper examines these in detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust performance of models. Further, we discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially includes the preprocessing techniques interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models.
Keywords: image processing, machine learning applications, template matching, emotion detection
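Two of the preprocessing steps named above are easy to illustrate with OpenCV; the sketch below shows template matching by normalized cross-correlation and a variance-of-Laplacian sharpness score. The file names are placeholders, and this is an illustrative baseline rather than the paper's pipeline.

```python
import cv2
import numpy as np

def sharpness_score(gray):
    """Variance of the Laplacian: a common proxy for image sharpness."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def find_template(gray, template):
    """Locate a template in an image by normalized cross-correlation."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val  # top-left corner and match confidence

# Illustrative usage; "scene.png" and "object.png" are placeholder file names.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
tmpl = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
if img is not None and tmpl is not None:
    print("sharpness:", sharpness_score(img))
    loc, score = find_template(img, tmpl)
    print("template found at", loc, "with score", round(float(score), 3))
```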
Procedia PDF Downloads 13
19269 Centralizing the Teaching Process in Intelligent Tutoring System Architectures
Authors: Nikolaj Troels Graf Von Malotky, Robin Nicolay, Alke Martens
Abstract:
There exist a plethora of architectures for ITSs (Intelligent Tutoring Systems). A thorough analysis and comparison of the architectures revealed that in most cases the architecture extensions have grown evolutionarily, reflecting the state-of-the-art trends of each decade. However, from the perspective of software engineering, the main aspect of an ITS has not yet been reflected in any of these architectures. From the perspective of cognitive research, the construction of the teaching process is what makes an ITS 'intelligent' regarding the spectrum of interaction with the students. Thus, in our approach, we focus on a behavior-based architecture, which is based on the main teaching processes. To create a new general architecture for ITSs, we have to define the prerequisites. This paper analyzes the current state of the existing architectures and derives rules for the behavior of ITSs. It presents a teaching process for ITSs to be used together with the architecture.
Keywords: intelligent tutoring, ITS, tutoring process, system architecture, interaction process
Procedia PDF Downloads 384
19268 Welding Process Selection for Storage Tank by Integrated Data Envelopment Analysis and Fuzzy Credibility Constrained Programming Approach
Authors: Rahmad Wisnu Wardana, Eakachai Warinsiriruk, Sutep Joy-A-Ka
Abstract:
Selecting the most suitable welding process usually depends on experience or common application in similar companies. However, this approach generally ignores many criteria that can affect the selection of a suitable welding process. Therefore, knowledge automation through knowledge-based systems will significantly improve the decision-making process. This research proposes an integrated data envelopment analysis (DEA) and fuzzy credibility constrained programming approach for identifying the best welding process for a stainless steel storage tank in the food and beverage industry. The proposed approach uses the fuzzy concept and a credibility measure to deal with uncertain data from experts' judgment. Furthermore, 12 parameters are used to determine the most appropriate welding process among six competitive welding processes.
Keywords: welding process selection, data envelopment analysis, fuzzy credibility constrained programming, storage tank
Procedia PDF Downloads 167
19267 Value in Exchange: The Importance of Users Interaction as the Center of User Experiences
Authors: Ramlan Jantan, Norfadilah Kamaruddin, Shahriman Zainal Abidin
Abstract:
In this era of technology, the co-creation method has become a new development trend. In this light, most design businesses have transformed their development strategy from being goods-dominant into service-dominant, where more attention is given to the end-users and their roles in the development process. As a result, the conventional development process has been replaced with a more cooperative one. Consequently, numerous studies have been conducted to explore the extension of the co-creation method in the design development process, and most studies have focused on issues found during the production process. Meanwhile, this study aims to investigate potential values established during the pre-production process, which is also known as 'circumstances value creation'. User involvement is questioned and crucially debated at the entry level of the pre-production process, in the joint value-in-exchange sphere where user experiences take place. Thus, this paper proposes a potential framework of the co-creation method for Malaysian interactive product development. The framework is formulated from both parties involved: the users and the designers. The framework will clearly explain the value of the co-creation method, and it could assist relevant design industries/companies in developing a blueprint for the design process. This paper further contributes to the literature on the co-creation of value and digital ecosystems.
Keywords: co-creation method, co-creation framework, co-creation, co-production
Procedia PDF Downloads 178
19266 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts regarding the variation of five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of data input scale on costs, although it notably impacts the execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
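The screening step described above boils down to fitting a linear model on coded (-1/+1) factor levels from a two-level fractional factorial design. The sketch below does this for an invented 2^(5-2) design with placeholder run times; the design generators and all numbers are assumptions for illustration, not the paper's 48-cluster measurements.

```python
import itertools
import numpy as np

# 2^(5-2) fractional factorial in coded units (-1/+1) for the five factors:
# input data size, nodes, cores, memory, disks.
# Generators (assumed for illustration): D = A*B, E = A*C.
base = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
A, B, C = base[:, 0], base[:, 1], base[:, 2]
X = np.column_stack([A, B, C, A * B, A * C])

# Invented execution times (seconds) for the 8 runs -- placeholders only.
time_s = np.array([820., 630., 540., 410., 500., 380., 330., 240.])

# Linear screening model: time = b0 + sum(b_i * x_i), fit by least squares.
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, time_s, rcond=None)
print("intercept:", coef[0])
print("main effects:", dict(zip(["size", "nodes", "cores", "memory", "disks"], coef[1:])))

# Predicted time for a candidate configuration (coded levels are assumptions).
candidate = np.array([1.0, 1, 1, -1, -1, -1])   # large data, many nodes, few cores
print("predicted time (s):", float(candidate @ coef))
```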
Procedia PDF Downloads 120
19265 Tip60 Histone Acetyltransferase Activators as Neuroepigenetic Therapeutic Modulators for Alzheimer’s Disease
Authors: Akanksha Bhatnagar, Sandhya Kortegare, Felice Elefant
Abstract:
Context: Alzheimer's disease (AD) is a neurodegenerative disorder that is characterized by progressive cognitive decline and memory loss. The cause of AD is not fully understood, but it is thought to be caused by a combination of genetic, environmental, and lifestyle factors. One of the hallmarks of AD is the loss of neurons in the hippocampus, a brain region that is important for memory and learning. This loss of neurons is thought to be caused by a decrease in histone acetylation, which is a process that regulates gene expression. Research Aim: The research aim of the study was to develop small molecule compounds that can enhance the activity of Tip60, a histone acetyltransferase that is important for memory and learning. Methodology/Analysis: The researchers used in silico structural modeling and a pharmacophore-based virtual screening approach to design and synthesize small molecule compounds strongly predicted to target and enhance Tip60's HAT activity. The compounds were then tested in vitro and in vivo to assess their ability to enhance Tip60 activity and rescue cognitive deficits in AD models. Findings: The researchers found that several of the compounds were able to enhance Tip60 activity and rescue cognitive deficits in AD models. The compounds were also developed to cross the blood-brain barrier, which is an important factor for the development of potential AD therapeutics. Theoretical Importance: The findings of this study suggest that Tip60 HAT activators have the potential to be developed as therapeutic agents for AD. The compounds are specific to Tip60, which suggests that they may have fewer side effects than other HDAC inhibitors. Additionally, the compounds are able to cross the blood-brain barrier, which is a major hurdle for the development of AD therapeutics. Data Collection: The study collected data from a variety of sources, including in vitro assays and animal models. The in vitro assays assessed the ability of compounds to enhance Tip60 activity using histone acetyltransferase (HAT) enzyme assays and chromatin immunoprecipitation assays. Animal models were used to assess the ability of the compounds to rescue cognitive deficits in AD models using a variety of behavioral tests, including locomotor ability, sensory learning, and recognition tasks. Future human clinical trials will be used to assess the safety and efficacy of the compounds in humans. Questions: The question addressed by this study was whether Tip60 HAT activators could be developed as therapeutic agents for AD. Conclusions: The findings of this study suggest that Tip60 HAT activators have the potential to be developed as therapeutic agents for AD. The compounds are specific to Tip60, which suggests that they may have fewer side effects than other HDAC inhibitors. Additionally, the compounds are able to cross the blood-brain barrier, which is a major hurdle for the development of AD therapeutics. Further research is needed to confirm the safety and efficacy of these compounds in humans.
Keywords: Alzheimer's disease, cognition, neuroepigenetics, drug discovery
Procedia PDF Downloads 75
19264 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves small scales modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of sub-filter scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
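The core of deconvolution-type SFS modeling is to approximately invert the LES filter on the resolved field. As a toy one-dimensional illustration (not the authors' solver, and with an assumed Gaussian filter width), the sketch below filters a field spectrally and then recovers it with a truncated van Cittert series, u* = sum over k of (I - G)^k applied to the filtered field.

```python
import numpy as np

def gaussian_filter_hat(k, delta):
    """Fourier transfer function of a Gaussian filter of width delta."""
    return np.exp(-(k * delta) ** 2 / 24.0)

def apply_filter(u, G_hat):
    return np.real(np.fft.ifft(np.fft.fft(u) * G_hat))

def van_cittert_deconvolve(u_bar, G_hat, n_iter=5):
    """Approximate deconvolution: accumulate (I - G)^k applied to u_bar."""
    u = u_bar.copy()
    correction = u_bar.copy()
    for _ in range(n_iter):
        correction = correction - apply_filter(correction, G_hat)
        u = u + correction
    return u

# Toy periodic 1D field; the filter width (4 grid spacings) is an assumption.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8 * x) + 0.2 * np.sin(20 * x)
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi
G_hat = gaussian_filter_hat(k, delta=4.0 * (x[1] - x[0]))

u_bar = apply_filter(u, G_hat)                   # filtered (resolved) field
u_star = van_cittert_deconvolve(u_bar, G_hat)    # approximately recovered field
print("relative error, filtered    :", np.linalg.norm(u - u_bar) / np.linalg.norm(u))
print("relative error, deconvolved :", np.linalg.norm(u - u_star) / np.linalg.norm(u))
```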
Procedia PDF Downloads 75
19263 Reduce, Reuse and Recycle: Grand Challenges in Construction Recovery Process
Authors: Abioye A. Oyenuga, Rao Bhamidiarri
Abstract:
Running a successful Construction and Demolition Waste (C&DW) recycling operation around the globe is a challenge today, predominantly because secondary materials markets are yet to be integrated. Reducing, reusing and recycling of C&DW have been employed over the years, and various techniques have been investigated. However, the economic and environmental viability of their application seems limited. This paper discusses the costs and benefits of using secondary materials and focuses on investigating the reuse and recycling process for five major types of construction materials: concrete, metal, wood, cardboard/paper, and plasterboard. Data obtained from demolition specialists and contractors are considered and evaluated. With this data source, the research found that the construction material recovery process fully incorporates the 3R's and shows how energy recovery by means of the 3R's principles can be evaluated. This scrutiny leads to an understanding of the grand challenges in the construction material recovery process. Recommendations to deepen the material recovery process are also discussed.
Keywords: construction and demolition waste (C&DW), 3R concept, recycling, reuse, waste management, UK
Procedia PDF Downloads 428
19262 Bayesian Flexibility Modelling of the Conditional Autoregressive Prior in a Disease Mapping Model
Authors: Davies Obaromi, Qin Yongsong, James Ndege, Azeez Adeboye, Akinwumi Odeyemi
Abstract:
The basic model usually used in disease mapping is the Besag, York and Mollie (BYM) model, which combines the spatially structured and spatially unstructured priors as random effects. The Bayesian Conditional Autoregressive (CAR) model is a disease mapping method commonly used for smoothing the relative risk of any disease, as in the Besag, York and Mollie (BYM) model. This model (CAR), which is also usually assigned as a prior to one of the spatial random effects in the BYM model, successfully uses information from adjacent sites to improve estimates for individual sites. To our knowledge, there are some unrealistic or counter-intuitive consequences on the posterior covariance matrix of the CAR prior for the spatial random effects. In the conventional BYM (Besag, York and Mollie) model, the spatially structured and the unstructured random components cannot be seen independently, which challenges the prior definitions for the hyperparameters of the two random effects. Therefore, the main objective of this study is to construct and utilize an extended Bayesian spatial CAR model for studying tuberculosis patterns in the Eastern Cape Province of South Africa, and then compare its flexibility with some existing CAR models. The results of the study revealed the flexibility and robustness of this alternative extended CAR model compared to the commonly used CAR models, based on the deviance information criterion. The extended Bayesian spatial CAR model is proved to be a useful and robust tool for disease modeling and as a prior for the structured spatial random effects because of the inclusion of an extra hyperparameter.
Keywords: Besag2, CAR models, disease mapping, INLA, spatial models
Procedia PDF Downloads 279
19261 The Impact of ISO 9001 Certification on Brazilian Firms’ Performance: Insights from Multiple Case Studies
Authors: Matheus Borges Carneiro, Fabiane Leticia Lizarelli, José Carlos De Toledo
Abstract:
The evolution of quality management by companies was strongly enabled by, among others, ISO 9001 certification, which is considered a crucial requirement by several customers. Likewise, performance measurement provides useful insights for companies to identify the reflection of their decision-making process on their improvement. One of the most used performance measurement models is the balanced scorecard (BSC), which uses four perspectives to address a firm's performance: financial, internal process, customer satisfaction, and learning and growth. Studies related to ISO 9001 and business performance have mostly adopted a quantitative approach to identify the standard's causal effect on a firm's performance. However, to verify how this influence may occur, an in-depth analysis within a qualitative approach is required. Therefore, this paper aims to verify the impact of ISO 9001:2015 on Brazilian firms' performance based on the balanced scorecard perspective. Hence, nine certified companies located in the Southeast region of Brazil were studied through a multiple case study approach. Within this study, it was possible to identify the positive impact of ISO 9001 on firms' overall performance, and four Critical Success Factors (CSFs) were identified as relevant to the linkage between ISO 9001 and firms' performance: employee involvement, top management, process management, and customer focus. Due to the COVID-19 pandemic, the number of interviews was limited to the quality manager specialist, and the sample was limited since several companies were closed during the period of the study. This study presents an in-depth analysis of the relationship between ISO 9001 certification and firms' performance in a developing country.
Keywords: balanced scorecard, Brazilian firms' performance, critical success factors, ISO 9001 certification, performance measurement
Procedia PDF Downloads 198
19260 Testing for Endogeneity of Foreign Direct Investment: Implications for Economic Policy
Authors: Liwiusz Wojciechowski
Abstract:
Research background: The current knowledge does not give a clear answer to the question of the impact of FDI on productivity. Results of the empirical studies are still inconclusive, no matter how extensive and diverse they are in terms of research approaches or groups of countries analyzed. One should also take into account the possibility that FDI and productivity are linked and that there is a bidirectional relationship between them. This issue is particularly important because, on one hand, FDI can contribute to changes in productivity in the host country, but on the other hand, its level and dynamics may imply that FDI should be undertaken in a given country. As already mentioned, a two-way relationship between the presence of foreign capital and productivity in the host country should be assumed, taking into consideration the endogenous nature of FDI. Purpose of the article: The overall objective of this study is to determine the causality between foreign direct investment and total factor productivity in the host country in terms of different relative absorptive capacities across countries. In the classic sense, causality among variables is not always obvious and requires testing, which would facilitate proper specification of FDI models. The aim of this article is to study the endogeneity of selected macroeconomic variables commonly used in FDI models in the case of the Visegrad countries, the main recipients of FDI in CEE. The findings may be helpful in determining the structure of the actual relationship between variables, in appropriate model estimation, and in forecasting as well as economic policymaking. Methodology/methods: Panel and time-series data techniques, including the GMM estimator, VEC models and causality tests, were utilized in this study. Findings & Value added: The obtained results confirm the hypothesis of bidirectional causality between FDI and total factor productivity. Although results differ among countries and levels of data aggregation, the implications may be useful for policymakers in designing policies for attracting foreign capital.
Keywords: endogeneity, foreign direct investment, multi-equation models, total factor productivity
Procedia PDF Downloads 197
19259 Estimating the Probability of Winning the Best Actor/Actress Award Conditional on the Best Picture Nomination with Bayesian Hierarchical Models
Authors: Svetlana K. Eden
Abstract:
Movies and TV shows have long become part of modern culture. We all have our preferred genre, story, actors, and actresses. However, can we objectively discern good acting from bad? As laymen, we are probably not objective, but what about the Oscar academy members? Are their votes based on objective measures? Oscar academy members are probably also biased due to many factors, including their professional affiliations or advertisement exposure. Heavily advertised films bring more publicity to their cast and are likely to have bigger budgets. Because a bigger budget may also help earn a Best Picture (BP) nomination, we hypothesize that best actor/actress (BA) nominees from BP-nominated movies would have higher chances of winning the award than BA nominees from non-BP-nominated films. To test this hypothesis, three Bayesian hierarchical models are proposed, and their performance is evaluated. The results from all three models largely support our hypothesis. Depending on the proportion of BP nominations among BA nominees, the odds ratios (estimated over expected) of winning the BA award conditional on BP nomination vary from 2.8 [0.8, 7.0] to 4.3 [2.0, 15.8] for actors and from 1.5 [0.0, 12.2] to 5.4 [2.7, 14.2] for actresses.
Keywords: Oscar, best picture, best actor/actress, bias
Procedia PDF Downloads 223
19258 The Confounding Role of Graft-versus-Host Disease in Animal Models of Cancer Immunotherapy: A Systematic Review
Authors: Hami Ashraf, Mohammad Heydarnejad
Abstract:
Introduction: The landscape of cancer treatment has been revolutionized by immunotherapy, offering novel therapeutic avenues for diverse cancer types. Animal models play a pivotal role in the development and elucidation of these therapeutic modalities. Nevertheless, the manifestation of Graft-versus-Host Disease (GVHD) in such models poses significant challenges, muddling the interpretation of experimental data within the ambit of cancer immunotherapy. This study is dedicated to scrutinizing the role of GVHD as a confounding factor in animal models used for cancer immunotherapy, alongside proposing viable strategies to mitigate this complication. Method: Employing a systematic review framework, this study undertakes a comprehensive literature survey including academic journals in PubMed, Embase, and Web of Science databases and conference proceedings to collate pertinent research that delves into the impact of GVHD on animal models in cancer immunotherapy. The acquired studies undergo rigorous analysis and synthesis, aiming to assess the influence of GVHD on experimental results while identifying strategies to alleviate its confounding effects. Results: Findings indicate that GVHD incidence significantly skews the reliability and applicability of experimental outcomes, occasionally leading to erroneous interpretations. The literature surveyed also sheds light on various methodologies under exploration to counteract the GVHD dilemma, thereby bolstering the experimental integrity in this domain. Conclusion: GVHD's presence critically affects both the interpretation and validity of experimental findings, underscoring the imperative for strategies to curtail its confounding impacts. Current research endeavors are oriented towards devising solutions to this issue, aiming to augment the dependability and pertinence of experimental results. It is incumbent upon researchers to diligently consider and adjust for GVHD's effects, thereby enhancing the translational potential of animal model findings to clinical applications and propelling progress in the arena of cancer immunotherapy.
Keywords: graft-versus-host disease, cancer immunotherapy, animal models, preclinical model
Procedia PDF Downloads 51
19257 CTHTC: A Convolution-Backed Transformer Architecture for Temporal Knowledge Graph Embedding with Periodicity Recognition
Authors: Xinyuan Chen, Mohd Nizam Husen, Zhongmei Zhou, Gongde Guo, Wei Gao
Abstract:
Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention for its enormous value; however, existing models lack the capability to capture both local interactions and global dependencies simultaneously with evolutionary dynamics, and the latest achievements in convolutions and Transformers haven't been employed in this area. What's more, periodic patterns in TKGs haven't been fully explored either. To this end, a multi-stage hybrid architecture with convolution-backed Transformers is introduced to TKGC tasks for the first time, combining the Hawkes process to model evolving event sequences in a continuous-time domain. In addition, seasonal-trend decomposition is adopted to identify periodic patterns. Experiments on six public datasets are conducted to verify model effectiveness against state-of-the-art (SOTA) methods. An extensive ablation study is carried out to evaluate architecture variants as well as the contributions of individual components, paving the way for further potential exploitation. Besides complexity analysis, input sensitivity and safety challenges are also thoroughly discussed with novel methods for comprehensiveness.
Keywords: temporal knowledge graph completion, convolution, transformer, Hawkes process, periodicity
Procedia PDF Downloads 78
19256 Input-Output Analysis in Laptop Computer Manufacturing
Authors: H. Z. Ulukan, E. Demircioğlu, M. Erol Genevois
Abstract:
The scope of this paper and the aim of the proposed model were to apply monetary Input-Output (I-O) analysis to point out the importance of reusing know-how and other requirements in order to reduce the production costs in a manufacturing process for a laptop computer. The I-O approach using the monetary input-output model is employed to demonstrate the impacts of different factors in a manufacturing process. A sensitivity analysis showing the correlation between these different factors is also presented. It is expected that the recommended model would have an advantageous effect in the cost minimization process.
Keywords: input-output analysis, monetary input-output model, manufacturing process, laptop computer
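For readers unfamiliar with monetary I-O analysis, the sketch below shows the standard Leontief calculation behind it: total output x solves x = Ax + d, i.e. x = (I - A)^(-1) d, so changes in final demand or in technical coefficients propagate to all sectors. The three-sector breakdown and all coefficients below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Technical-coefficients matrix A (monetary inputs per unit of output) for three
# illustrative sectors of a laptop supply chain: components, assembly, logistics.
# All numbers are placeholders.
A = np.array([
    [0.10, 0.35, 0.05],   # components consumed per unit of each sector's output
    [0.02, 0.05, 0.10],   # assembly services
    [0.05, 0.10, 0.02],   # logistics
])
final_demand = np.array([50.0, 200.0, 20.0])   # exogenous demand (monetary units)

# Leontief solution: x = A x + d  =>  x = (I - A)^{-1} d
L = np.linalg.inv(np.eye(3) - A)
x = L @ final_demand
print("total output by sector:", np.round(x, 2))

# Sensitivity: how a small change in assembly demand propagates to all sectors.
dx = L @ np.array([0.0, 2.0, 0.0])
print("output change from +2 units of assembly demand:", np.round(dx, 2))
```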
Procedia PDF Downloads 391
19255 A Study on Sentiment Analysis Using Various ML/NLP Models on Historical Data of Indian Leaders
Authors: Sarthak Deshpande, Akshay Patil, Pradip Pandhare, Nikhil Wankhede, Rushali Deshmukh
Abstract:
Sentiment analysis is among the most significant tasks for any language and a key area of NLP that has recently made impressive strides. There are several models and datasets available for this task in popular and commonly used languages like English, Russian, and Spanish. While sentiment analysis research is performed extensively, it lags behind for regional languages with few resources, such as Hindi and Marathi. Marathi is one of the languages included in the Indian Constitution's 8th schedule; it is the third most widely spoken language in the country and is primarily spoken in the Deccan region, which encompasses Maharashtra and Goa. There isn't sufficient study on sentiment analysis methods based on Marathi text due to a lack of available resources and information. Therefore, this project proposes the use of different ML/NLP models for the analysis of Marathi data from the comments below YouTube content, tweets, or Instagram posts. We aim to achieve a short and precise analysis and summary of the related data using our dataset (dates, names, root words) and lexicons to locate exact information.
Keywords: multilingual sentiment analysis, Marathi, natural language processing, text summarization, lexicon-based approaches
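A minimal baseline for the kind of classifier described above could look like the sketch below: character n-gram TF-IDF features with a Naïve Bayes classifier, which sidesteps the lack of Marathi-specific tokenizers. The example comments, their glosses, and the labels are placeholders, not the project's dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; these comments and labels are placeholders only.
comments = [
    "खूप छान भाषण",          # "very nice speech"
    "हे काम अप्रतिम आहे",     # "this work is excellent"
    "अत्यंत वाईट निर्णय",      # "a very bad decision"
    "निराशाजनक कामगिरी",      # "disappointing performance"
]
labels = ["positive", "positive", "negative", "negative"]

# Character n-grams are a common choice for morphologically rich, low-resource
# languages because they avoid the need for a dedicated Marathi tokenizer.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
model.fit(comments, labels)
print(model.predict(["खूप छान काम"]))   # likely "positive" on this toy data
```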
Procedia PDF Downloads 74
19254 Cr (VI) Adsorption on Ce0.25Zr0.75O2.nH2O-Kinetics and Thermodynamics
Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García
Abstract:
Hexavalent chromium, Cr (VI), is present in the effluents from different industries such as electroplating, mining, leather tanning, etc. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior. Its discharge into water sources causes serious environmental and public health problems for animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mgL-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration for Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluent to reduce the Cr (VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, electrodialysis, etc. Adsorption has become a promising method for the purification of metal ions in water, since it is an economic and efficient technology. The adsorbent selection and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2.nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2.nH2O. Experiments were performed under the following conditions: initial Cr (VI) concentration = 25, 50 and 100 mgL-1, pH = 2, adsorbent charge = 4 gL-1, stirring time = 60 min, temperature = 20, 28 and 40 °C. The Cr (VI) concentration was spectrophotometrically estimated by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr (VI) adsorption over hydrated Ce0.25Zr0.75O2.nH2O was analyzed using pseudo-first and pseudo-second order kinetic models. The Langmuir and Freundlich models were used to model the experimental data. The convergence between the experimental values and those predicted by the model, expressed as a linear regression correlation coefficient (R2), was employed as the model selection criterion. The adsorption process followed the pseudo-second order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJmol-1, ΔS° = 0.03 kJmol-1 K-1, ΔG° = -0.35 kJmol-1 and indicated the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics
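The two model fits named in the abstract (pseudo-second-order kinetics and the Langmuir isotherm) are straightforward nonlinear regressions; the sketch below shows how they might be fitted with SciPy. The time-series and equilibrium data points are synthetic placeholders, not the measured values reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t): adsorbed amount versus time."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def langmuir(Ce, qmax, KL):
    """qe = qmax*KL*Ce / (1 + KL*Ce): equilibrium isotherm."""
    return (qmax * KL * Ce) / (1.0 + KL * Ce)

# Synthetic (placeholder) data, roughly in mg/g, min and mg/L units.
t = np.array([2, 5, 10, 20, 30, 45, 60], dtype=float)
q_t = np.array([4.1, 7.8, 10.5, 12.0, 12.4, 12.7, 12.8])
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
q_e = np.array([5.5, 9.8, 14.0, 17.5, 20.0, 20.9])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[12.0, 0.01])
(qmax_fit, KL_fit), _ = curve_fit(langmuir, Ce, q_e, p0=[20.0, 0.1])
print(f"pseudo-second-order: qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
print(f"Langmuir: qmax = {qmax_fit:.2f} mg/g, KL = {KL_fit:.3f} L/mg")
```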
Procedia PDF Downloads 299
19253 In and Out-Of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework
Authors: Nicola Rubino
Abstract:
This paper presents an analysis of a group of commodity-exporting countries' nominal exchange rate movements in relation to the US dollar. Using a series of Unrestricted Self-Exciting Threshold Autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated through calculation of mean absolute error measures on the basis of two hundred and fifty-three monthly rolling-window forecasts, and extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five countries out of the sixteen considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative non-linear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with an inner unit root behaviour in five of the sixteen countries analyzed.
Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points
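To make the SETAR setup concrete, the sketch below fits a two-regime SETAR with one lag per regime by choosing the threshold that minimizes the in-sample sum of squared residuals, and then evaluates rolling one-step forecasts by MAE, in the spirit of the exercise above. The series is a random placeholder rather than a CPI price differential, and the window length and threshold grid are assumptions.

```python
import numpy as np

def setar_ssr(y, threshold):
    """In-sample SSR and regime parameters for a two-regime SETAR with one lag."""
    y_lag, y_now = y[:-1], y[1:]
    ssr, params = 0.0, {}
    for regime, mask in (("low", y_lag <= threshold), ("high", y_lag > threshold)):
        if mask.sum() < 10:                      # require enough points per regime
            return np.inf, None
        X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
        beta = np.linalg.lstsq(X, y_now[mask], rcond=None)[0]
        ssr += float(np.sum((y_now[mask] - X @ beta) ** 2))
        params[regime] = beta
    return ssr, params

def setar_forecast(y_last, params, threshold):
    c, phi = params["low"] if y_last <= threshold else params["high"]
    return c + phi * y_last

# Placeholder series standing in for a monthly price differential (not real data).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.0, 0.01, size=400))

# Rolling one-step-ahead forecasts with the threshold re-estimated each window.
errors = []
for t in range(300, len(y) - 1):
    window = y[t - 300:t]
    grid = np.quantile(window[:-1], np.linspace(0.15, 0.85, 15))
    fits = [(setar_ssr(window, r), r) for r in grid]
    (_, params), thr = min(fits, key=lambda f: f[0][0])
    errors.append(abs(setar_forecast(window[-1], params, thr) - y[t]))
print("rolling one-step MAE:", float(np.mean(errors)))
```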
Procedia PDF Downloads 278
19252 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics
Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim
Abstract:
A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumptions of constant interest rate and constant volatility. In fact, existing alternative models were found to focus more on demonstrating techniques for pricing rather than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants which comprises stochastic interest rate and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and illustrate the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model, along with stochastic volatility from the Heston model. The development of the model involves the derivation of stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants. This facilitates the practicality of the proposed formula for comparison purposes and further empirical study.
Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic
Procedia PDF Downloads 129
19251 Cognitive Models of Future in Political Texts
Authors: Solopova Olga
Abstract:
The present paper briefly recalls the theoretical preconditions for investigating cognitive-discursive models of the future in political discourse. The author reviews theories and methods used for strengthening a future focus in this discourse, working out two main tools – a model of the future and a metaphorical scenario. The paper examines the implications of metaphorical analogies for modeling the future in mass media. It argues that metaphor is not merely a rhetorical ornament in the political discourse of media regulation but a conceptual model that legislates and regulates our understanding of the future.
Keywords: cognitive approach, future research, political discourse, model, scenario, metaphor
Procedia PDF Downloads 394
19250 Applications of Nonlinear Models to Measure and Predict Thermo Physical Properties of Binary Liquid Mixtures: 1, 4 Dioxane with Bromo Benzene at Various Temperatures
Authors: R. Ramesh, M. Y. M. Yunus, K. Ramesh
Abstract:
In this study, the viscosities, η, and densities, ρ, of binary mixtures of 1,4-dioxane with bromobenzene were measured at different mole fractions and various temperatures under atmospheric pressure. From the experiments, excess molar volumes, VE, and deviations in viscosities, Δη, of the mixtures at infinite dilution have been obtained. The measured systems exhibited positive values of VE and negative values of Δη. The binary mixture 1,4-dioxane + bromobenzene shows positive VE and negative Δη with increasing temperature. The outcomes clearly indicate that weak interactions are present in the mixture, mainly because of the number and position of the methyl groups in these aromatic hydrocarbons. The measured data were fitted to nonlinear models to derive the binary coefficients, and the standard deviations between the fitted outcomes and the experimental data help in assessing the mixing behavior of the binary mixtures. In our cases, the data correlate very well with the corresponding models. The molecular interactions existing between the components and a comparison of the liquid mixtures are also discussed.
Keywords: 1, 4 dioxane, bromobenzene, density, excess molar volume
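The derived quantities named above follow directly from the measured densities and viscosities; a short sketch of the standard definitions is given below. The molar masses are real, but the density and viscosity values are placeholders rather than the measured data, and the temperature dependence is omitted.

```python
import numpy as np

# Molar masses (g/mol) and illustrative pure-component properties at a single
# temperature; density and viscosity values are assumptions, not the paper's data.
M1, M2 = 88.11, 157.01          # 1,4-dioxane, bromobenzene
rho1, rho2 = 1.028, 1.488       # pure-component densities (g/cm^3), assumed
eta1, eta2 = 1.18, 1.04         # pure-component viscosities (mPa*s), assumed

x1 = np.array([0.2, 0.4, 0.6, 0.8])               # mole fraction of 1,4-dioxane
rho_mix = np.array([1.398, 1.305, 1.212, 1.120])  # mixture densities, assumed
eta_mix = np.array([1.02, 1.03, 1.06, 1.10])      # mixture viscosities, assumed
x2 = 1.0 - x1

# Excess molar volume: VE = Vm(mix) - x1*Vm1 - x2*Vm2  (cm^3/mol)
Vm_mix = (x1 * M1 + x2 * M2) / rho_mix
VE = Vm_mix - x1 * M1 / rho1 - x2 * M2 / rho2

# Viscosity deviation: d_eta = eta_mix - (x1*eta1 + x2*eta2)
d_eta = eta_mix - (x1 * eta1 + x2 * eta2)

print("VE  (cm^3/mol):", np.round(VE, 3))     # positive, as reported above
print("d_eta (mPa*s) :", np.round(d_eta, 3))  # negative, as reported above
```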
Procedia PDF Downloads 412
19249 Analysis of Capillarity Phenomenon Models in Primary and Secondary Education in Spain: A Case Study on the Design, Implementation, and Analysis of an Inquiry-Based Teaching Sequence
Authors: E. Cascarosa-Salillas, J. Pozuelo-Muñoz, C. Rodríguez-Casals, A. de Echave
Abstract:
This study focuses on improving the understanding of the capillarity phenomenon among Primary and Secondary Education students. Despite being a common concept in daily life and covered in various subjects, students’ comprehension remains limited. This work explores inquiry-based teaching methods to build a conceptual foundation of capillarity by examining the forces involved. The study adopts an inquiry-based teaching approach supported by research emphasizing the importance of modeling in science education. Scientific modeling aids students in applying knowledge across varied contexts and developing systemic thinking, allowing them to construct scientific models applicable to everyday situations. This methodology fosters the development of scientific competencies such as observation, hypothesis formulation, and communication. The research was structured as a case study with activities designed for Spanish Primary and Secondary Education students aged 9 to 13. The process included curriculum analysis, the design of an activity sequence, and its implementation in classrooms. Implementation began with questions that students needed to resolve using available materials, encouraging observation, experimentation, and the re-contextualization of activities to everyday phenomena where capillarity is observed. Data collection tools included audio and video recordings of the sessions, which were transcribed and analyzed alongside the students' written work. Students' drawings on capillarity were also collected and categorized. Qualitative analyses of the activities showed that, through inquiry, students managed to construct various models of capillarity, reflecting an improved understanding of the phenomenon. Initial activities allowed students to express prior ideas and formulate hypotheses, which were then refined and expanded in subsequent sessions. The generalization and use of graphical representations of their ideas on capillarity, analyzed alongside their written work, enabled the categorization of capillarity models: Intuitive Model: A visual and straightforward representation without explanations of how or why it occurs. Simple symbolic elements, such as arrows to indicate water rising, are used without detailed or causal understanding. It reflects an initial, immediate perception of the phenomenon, interpreted as something that happens "on its own" without delving into the microscopic level. Explanatory Intuitive Model: Students begin to incorporate causal explanations, though still limited and without complete scientific accuracy. They represent the role of materials and use basic terms such as ‘absorption’ or ‘attraction’ to describe the rise of water. This model shows a more complex understanding where the phenomenon is not only observed but also partially explained in terms of interaction, though without microscopic detail. School Scientific Model: This model reflects a more advanced and detailed understanding. Students represent the phenomenon using specific scientific concepts like ‘surface tension,’ cohesion,’ and ‘adhesion,’ including structured explanations connecting microscopic and macroscopic levels. At this level, students model the phenomenon as a coherent system, demonstrating how various forces or properties interact in the capillarity process, with representations on a microscopic level. 
The study demonstrated that the capillarity phenomenon can be effectively approached in class through the experimental observation of everyday phenomena, explained through guided inquiry learning. The methodology facilitated students' construction of capillarity models and served to analyze the interaction of different forces occurring at the microscopic level.
Keywords: capillarity, inquiry-based learning, scientific modeling, primary and secondary education, conceptual understanding, drawing analysis
Procedia PDF Downloads 14
19248 Bianchi Type-I Viscous Fluid Cosmological Models with Stiff Matter and Time Dependent Λ-Term
Authors: Rajendra Kumar Dubey
Abstract:
Einstein's field equations with a variable cosmological term Λ are considered in the presence of a viscous fluid for the Bianchi type-I space-time. Exact solutions of Einstein's field equations are obtained by assuming the cosmological term Λ to be proportional to a power of the scale factor (R is the scale factor and m is a constant). We observe that the shear viscosity is responsible for faster removal of the initial anisotropy in the universe. The physical significance of the cosmological models has also been discussed.
Keywords: Bianchi type-I cosmological model, viscous fluid, cosmological constant Λ
Procedia PDF Downloads 528