Search results for: decision tree model
18326 Drivers of Digital Product Innovation in Firms: An Empirical Study of Technological, Organizational, and Environmental Factors
Authors: Anne Theresa Eidhoff, Sarah E. Stief, Markus Voeth, Sarah Gundlach
Abstract:
With digitalization increasingly changing the rules of competition, firms face the need to adapt and assimilate digital technologies in order to remain competitive. Firms can choose from various ways to integrate digital technologies, including embedding digital technologies in existing products or developing digital products. However, the question of which specific factors influence a firm’s decision to pursue digital product innovation remains unanswered in research. By adopting the Technology-Organization-Environment (TOE) framework, we designed a qualitative exploratory study including eleven German practitioners to investigate relevant contingency factors. Our results indicate that the most critical factors for a company’s decision to pursue digital product innovation can be found in the technological and environmental dimensions, namely customers, competitive pressure, technological change, as well as digitalization fit.
Keywords: digital innovation, digitalization, product innovation, TOE-framework
Procedia PDF Downloads 483
18325 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making processes in the development of mining production and maintaining safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques to improve decision-making. Leveraging some of the most complex techniques in data science, advanced analytics is used to do everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may have limitations. This paper studies the capability to use Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. This dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations like borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rock fall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including operation, development, closure, and rehabilitation phases. Additionally, this helps in minimizing the necessity of utilizing multiple software programs in projects. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding in informed decision-making and efficient project management throughout various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry
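As an illustration of the Python-in-Power-BI approach described above, the following sketch mimics a Python visual on a report page. It assumes Power BI injects the fields added to the visual as a pandas DataFrame named dataset; the column names (HoleID, DipDirection, Dip) and the simplified polar pole plot are assumptions made here for illustration, not the paper's actual data model or stereonet implementation.

```python
# Sketch of a Python visual inside a Power BI report page. Power BI passes
# the fields added to the visual as a pandas DataFrame called `dataset`;
# the columns below are hypothetical geotechnical fields.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Stand-in for the DataFrame Power BI would inject.
dataset = pd.DataFrame({
    "HoleID": ["BH-01", "BH-01", "BH-02", "BH-02"],
    "DipDirection": [135.0, 220.0, 310.0, 45.0],   # degrees
    "Dip": [60.0, 35.0, 75.0, 20.0],               # degrees
})

# Simplified polar "pole plot": plot the pole to each plane with
# azimuth = dip direction + 180 deg and radius = dip (= 90 deg - plunge).
theta = np.deg2rad((dataset["DipDirection"] + 180.0) % 360.0)
r = dataset["Dip"]

ax = plt.subplot(projection="polar")
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)          # compass-style azimuth
ax.set_rlim(0, 90)
ax.scatter(theta, r, c="tab:blue")
ax.set_title("Discontinuity poles (simplified stereonet-style view)")
plt.show()
```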
Procedia PDF Downloads 92
18324 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery
Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas
Abstract:
The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics of the cell, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying: they change with current, temperature, state of charge (SOC), and battery aging, and this has a great impact on the performance of the model. Therefore, to improve the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. Increasing the model order generally improves the model accuracy, but a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it also becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. Selective frequency-domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition
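A minimal sketch of the frequency-domain idea, assuming the second-order ECM impedance Z(jω) = R0 + R1/(1 + jωR1C1) + R2/(1 + jωR2C2) and fitting it by least squares over a selected frequency band, is shown below. The parameter values, noise level and band selection are illustrative assumptions; the paper's subspace decomposition and recursive/Kalman estimators are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def ecm_impedance(params, w):
    """Second-order ECM impedance: series R0 plus two parallel RC branches."""
    r0, r1, c1, r2, c2 = params
    return r0 + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

# Synthetic "measured" impedance on a selected frequency band (illustrative).
true = np.array([0.010, 0.015, 50.0, 0.020, 800.0])   # ohm, ohm, F, ohm, F
w = 2 * np.pi * np.logspace(-2, 2, 60)                 # 0.01 Hz .. 100 Hz
z_meas = (ecm_impedance(true, w)
          + rng.normal(0, 2e-4, w.size)
          + 1j * rng.normal(0, 2e-4, w.size))

def residuals(params):
    diff = ecm_impedance(params, w) - z_meas
    return np.concatenate([diff.real, diff.imag])       # stack Re and Im parts

fit = least_squares(residuals, x0=[0.005, 0.01, 10.0, 0.01, 100.0],
                    bounds=(1e-6, np.inf))
print("estimated [R0, R1, C1, R2, C2]:", np.round(fit.x, 4))
```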
Procedia PDF Downloads 151
18323 Designing Equivalent Model of Floating Gate Transistor
Authors: Birinderjit Singh Kalyan, Inderpreet Kaur, Balwinder Singh Sohi
Abstract:
In this paper, an equivalent model for the floating gate transistor is proposed. Using the floating gate voltage, the capacitive coupling coefficients are found at different bias conditions. The amount of charge present on the gate is then calculated using the transient models of hot-electron programming and Fowler-Nordheim tunnelling. The proposed model can be extended to transient conditions as well. The SPICE equivalent model is designed, and the current-voltage and transfer characteristics are comparatively analysed. The dc current-voltage characteristics, as well as the dc transfer characteristics, have been plotted for an FGMOS with W/L = 0.25 μm/0.375 μm and an inter-poly capacitance of 0.8 fF, for both programmed and erased states. A comparative analysis has been made between the present model and the capacitive coupling coefficient methods that were already available.
Keywords: FGMOS, floating gate transistor, capacitive coupling coefficient, SPICE model
Procedia PDF Downloads 545
18322 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be amplified. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black box models. However, for this intended purpose, human analysts ‘on-the-loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumers’ CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. From the analysis, one can assert that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as the share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
Procedia PDF Downloads 36
18321 Effects of Merging Personal and Social Responsibility with Sports Education Model on Students' Game Performance and Responsibility
Authors: Yi-Hsiang Pan, Chen-Hui Huang, Wei-Ting Hsu
Abstract:
The purposes of the study were as follows: 1. to explore the effect of merging teaching personal and social responsibility (TPSR) with the sports education model on students' game performance and responsibility; 2. to explore the effect of the sports education model alone on students' game performance and responsibility; 3. to compare the difference between "merging TPSR with the sports education model" and the "sports education model" on students' game performance and responsibility. The participants included three high school physical education teachers and six physical education classes. Every teacher taught one experimental group and one control group. The participants comprised 121 students, with 65 students in the experimental group and 56 students in the control group. The research methods included game performance assessment, questionnaire investigation, interviews, and focus group meetings. The research instruments included a personal and social responsibility questionnaire and a game performance assessment instrument. Paired t-tests and MANCOVA were used to test the difference between "merging TPSR with the sports education model" and the "sports education model" on students' learning performance. 1) "Merging TPSR with the sports education model" showed significant improvements in students' game performance and in the responsibilities of self-direction, helping others, and cooperation. 2) The "sports education model" also produced significant improvements in students' game performance and in the responsibilities of effort, self-direction, and helping others. 3) There was no significant difference in game performance and responsibilities between "merging TPSR with the sports education model" and the "sports education model". 4) "Merging TPSR with the sports education model" significantly improved the learning atmosphere and peer relationships, and it may be developed further in the physical education curriculum. The conclusions were as follows: both "merging TPSR with the sports education model" and the "sports education model" can help improve students' responsibility and game performance. However, "merging TPSR with the sports education model" can reduce the competitive atmosphere in highly intensive games between students. The hybrid TPSR-Sport Education curricular model is a good approach for moral character education.
Keywords: curriculum and teaching model, sports self-efficacy, sport enthusiastic, character education
Procedia PDF Downloads 313
18320 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm
Authors: Suparman Suparman
Abstract:
The white noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise frequently does not follow a normal distribution. This paper aims to estimate the parameters of an AR model that has exponential white noise. A Bayesian method is adopted: a prior distribution for the parameters of the AR model is selected and then combined with the likelihood function of the data to obtain a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the parameters of the AR model is derived. Because the order of the AR model is itself treated as a parameter, this Bayesian estimator cannot be calculated explicitly. To resolve this problem, a reversible jump Markov Chain Monte Carlo (MCMC) method is adopted. As a result, the order and the coefficients of the AR model can be estimated simultaneously.
Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov Chain Monte Carlo (MCMC)
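For intuition, a stripped-down sketch of the Bayesian estimation with the AR order fixed at one is shown below; the paper additionally samples the order via reversible jump moves, which this sketch omits. The simulated data, vague priors and proposal scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series with exponential innovations (illustrative values).
n, phi_true, lam_true = 500, 0.6, 2.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.exponential(1.0 / lam_true)

def log_post(phi, log_lam, x):
    """Log-posterior under vague normal priors on phi and log(lambda)."""
    lam = np.exp(log_lam)
    e = x[1:] - phi * x[:-1]               # innovations implied by phi
    if np.any(e < 0):                       # exponential support: e_t >= 0
        return -np.inf
    loglik = e.size * np.log(lam) - lam * e.sum()
    logprior = -0.5 * (phi / 10) ** 2 - 0.5 * (log_lam / 10) ** 2
    return loglik + logprior

# Random-walk Metropolis-Hastings over (phi, log lambda).
draws, cur = [], np.array([0.0, 0.0])
cur_lp = log_post(cur[0], cur[1], x)
for _ in range(20000):
    prop = cur + rng.normal(0, [0.02, 0.05])
    prop_lp = log_post(prop[0], prop[1], x)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    draws.append(cur.copy())

draws = np.array(draws[5000:])              # discard burn-in
print("posterior mean phi   :", draws[:, 0].mean())
print("posterior mean lambda:", np.exp(draws[:, 1]).mean())
```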
Procedia PDF Downloads 356
18319 Numerical Pricing of Financial Options under Irrational Exercise Times and Regime-Switching Models
Authors: Mohammad Saber Rohi, Saghar Heidari
Abstract:
In this paper, we studied the pricing problem of American options under a regime-switching model with the possibility of a non-optimal exercise policy (an early or late exercise time), which is called an irrational strategy. For this, we consider a Markov-modulated model for the dynamics of the underlying asset as an alternative to the classical Black-Scholes-Merton (BSM) model, and an intensity-based model for the irrational strategy, to provide more realistic results for American option prices under irrational behavior in real financial markets. Applying a partial differential equation (PDE) approach, the pricing problem of American options under regime-switching models can be formulated as coupled PDEs. To solve the resulting system of PDEs in this model, we apply a finite element method to the resulting variational inequality. Under some appropriate assumptions, we establish the stability of the method and compare its accuracy to some recent works to illustrate the suitability of the proposed model and the accuracy of the applied numerical method for the pricing problem of American options under the regime-switching model with irrational behaviors.
Keywords: irrational exercise strategy, rationality parameter, regime-switching model, American option, finite element method, variational inequality
Procedia PDF Downloads 73
18318 Radical Web Text Classification Using a Composite-Based Approach
Authors: Kolade Olawande Owoeye, George R. S. Weir
Abstract:
The spread of terrorist and extremist activities on the internet has become a major threat to governments and national security, and it has necessitated intelligence gathering via the web and real-time monitoring of potential websites for extremist activities. However, manual classification of such content is difficult and time-consuming. In response to this challenge, an automated classification system called the composite technique was developed. This is a computational framework that explores the combination of both semantic and syntactic features of the textual content of a web page. We implemented the framework on a dataset of extremist webpages that had been subjected to a manual classification process. We then developed a classification model on the data using the J48 decision tree algorithm to generate a measure of how well each page can be classified into its appropriate class. When compared with other state-of-the-art approaches, the classification result obtained from our method indicated a 96% success rate in classifying webpages against the manual classification.
Keywords: extremist, web pages, classification, semantics, posit
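A rough sketch of the supervised step might look as follows. Note the assumptions: J48 is Weka's C4.5 implementation, so scikit-learn's CART-style DecisionTreeClassifier is used here only as a stand-in; the toy pages and labels are invented; and the semantic (e.g. Posit-derived) features that the composite technique combines with the lexical ones are not reproduced.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Placeholder corpus: each entry is the extracted text of one webpage,
# labelled 1 (extremist) or 0 (benign) by the manual classification.
pages = [
    "recruitment propaganda material ...",
    "community news sports report ...",
    "radical manifesto call to action ...",
    "cooking recipes and travel blog ...",
]
labels = [1, 0, 1, 0]

# TF-IDF captures the lexical/syntactic features; semantic scores would be
# appended as extra columns in the full composite approach.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("tree", DecisionTreeClassifier(criterion="entropy", random_state=0)),
])

scores = cross_val_score(model, pages, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```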
Procedia PDF Downloads 146
18317 Computational Fluid Dynamics Modeling of Liquefaction of Wood and Its Model Components Using a Modified Multistage Shrinking-Core Model
Authors: K. G. R. M. Jayathilake, S. Rudra
Abstract:
Wood degradation in hot compressed water is modeled with a Computational Fluid Dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. The model compounds are reacted under catalyst-free conditions in a temperature range from 250 to 370 °C, using a simplified reaction scheme in which water-soluble products, methanol-soluble products, char-like compounds and gas are generated through intermediates for each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation. In the modified shrinking-core model, each model compound is hydrolyzed in separate stages. Cellulose is decomposed to glucose/oligomers before producing degradation products. Xylan is decomposed through xylose and then to degradation products, while lignin is decomposed into soluble products before producing guaiacol, total organic carbon (TOC) and then char and gas. Hydrolysis of each model compound is used as the main reaction of the process. The diffusion of water monomers to the particle surface to initiate hydrolysis and the dissolution of the products in water are given particular attention during the modeling process. In the developed model, the temperature dependence of the reaction rates follows the Arrhenius relationship. Kinetic parameters from the literature are used for the mathematical model; however, the limited kinetic data available for the initial fast reactions constrain the development of more accurate CFD models. The liquefaction results of the CFD model are analyzed and validated using the experimental data available in the literature, with which they show reasonable agreement.
Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood
Procedia PDF Downloads 126
18316 Modeling the Downstream Impacts of River Regulation on the Grand Lake Meadows Complex Using Delft3D FM Suite
Authors: Jaime Leavitt, Katy Haralampides
Abstract:
Numerical modelling has been used to investigate the long-term impact of a large dam on downstream wetland areas, specifically in terms of changing sediment dynamics in the system. The Mactaquac Generating Station (MQGS) is a 672MW run-of-the-river hydroelectric facility, commissioned in 1968 on the mainstem of the Wolastoq|Saint John River in New Brunswick, Canada. New Brunswick Power owns and operates the dam and has been working closely with the Canadian Rivers Institute at UNB Fredericton on a multi-year, multi-disciplinary project investigating the impact the dam has on its surrounding environment. With focus on the downstream river, this research discusses the initialization, set-up, calibration, and preliminary results of a 2-D hydrodynamic model using the Delft3d Flexible Mesh Suite (successor of the Delft3d 4 Suite). The flexible mesh allows the model grid to be structured in the main channel and unstructured in the floodplains and other downstream regions with complex geometry. The combination of grid types improves computational time and output. As the movement of water governs the movement of sediment, the calibrated and validated hydrodynamic model was applied to sediment transport simulations, particularly of the fine suspended sediments. Several provincially significant Protected Natural Areas and federally significant National Wildlife Areas are located 60km downstream of the MQGS. These broad, low-lying floodplains and wetlands are known as the Grand Lake Meadows Complex (GLM Complex). There is added pressure to investigate the impacts of river regulation on these protected regions that rely heavily on natural river processes like sediment transport and flooding. It is hypothesized that the fine suspended sediment would naturally travel to the floodplains for nutrient deposition and replenishment, particularly during the freshet and large storms. The purpose of this research is to investigate the impacts of river regulation on downstream environments and use the model as a tool for informed decision making to protect and maintain biologically productive wetlands and floodplains.Keywords: hydrodynamic modelling, national wildlife area, protected natural area, sediment transport.
Procedia PDF Downloads 12
18315 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent in length of expressways in Japan is now 30 years and older and has become superannuated. Time-related deterioration has therefore reached to a degree that administrators, from a standpoint of operation and maintenance, are forced to take prompt measures on a large scale aiming at repairing inner damage deep in pavements. These measures have already been performed for bridge management in Japan and are also expected to be embodied for pavement management. Thus, planning methods for the measures are increasingly demanded. Deterioration of layers around road surface such as surface course and binder course is brought about at the early stages of whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage such as Falling Weight Deflectometer (FWD) survey and open-cut survey are costly and time-consuming process, which has made it difficult for administrators to focus on inner damage as much as they have been supposed to. As expressways today have serious time-related deterioration within them deriving from the long time span since they started to be used, it is obvious the idea of repairing layers deep in pavements such as base course and subgrade must be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present situations of pavements. Methods for predicting deterioration are determined to be either mechanical or statistical. While few mechanical models have been presented, as far as the authors know of, previous studies have presented statistical methods for predicting deterioration in pavements. One describes deterioration process by estimating Markov deterioration hazard model, while another study illustrates it by estimating Proportional deterioration hazard model. Both of the studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting deterioration process of layers around road surface. However, layers of base course and subgrade remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict deterioration process of layers deep in pavements in addition to surface layers by a means of estimating a deterioration hazard model using continuous indexes. This model can prevent the loss of information of data when setting rating categories in Markov deterioration hazard model when evaluating degrees of deterioration in roadbeds and subgrades. As a result of portraying continuous indexes, the model can predict deterioration in each layer of pavements and evaluate it quantitatively. Additionally, as the model can also depict probability distribution of the indexes at an arbitrary point and establish a risk control level arbitrarily, it is expected that this study will provide knowledge like life cycle cost and informative content during decision making process referring to where to do maintenance on as well as when.Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
Procedia PDF Downloads 390
18314 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes
Abstract:
In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control
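The posterior recursion behind such a chart can be sketched as below, assuming Poisson defect counts, a constant shift probability between samples and a fixed control limit; in the paper the limit is optimized within a semi-Markov decision process, whereas all numbers here are illustrative.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

# Illustrative assumptions (not the paper's values): in-control and
# out-of-control defect rates per inspection unit, probability of a shift
# between samples, and the control limit on the posterior probability.
lam_in, lam_out = 2.0, 4.0
p_shift = 0.02
control_limit = 0.90

def posterior_update(pi_prev, count):
    """One Bayesian update of P(process out of control | data so far)."""
    pi_prior = pi_prev + (1.0 - pi_prev) * p_shift      # chance of a new shift
    f_out = poisson.pmf(count, lam_out)
    f_in = poisson.pmf(count, lam_in)
    return pi_prior * f_out / (pi_prior * f_out + (1.0 - pi_prior) * f_in)

# Simulate a run that shifts out of control at sample 30.
pi = 0.0
for k in range(1, 61):
    lam = lam_in if k < 30 else lam_out
    defects = rng.poisson(lam)
    pi = posterior_update(pi, defects)
    if pi > control_limit:
        print(f"stop at sample {k}: posterior P(out of control) = {pi:.3f}")
        break
```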
Procedia PDF Downloads 573
18313 Ethical and Personality Factors and Accounting Professional Judgement
Authors: Shannon Hashemi, Alireza Daneshfar
Abstract:
Accounting ethical awareness has been widely promoted in recent years, both in academia and in practice. However, the effectiveness of ethical awareness in shaping accountants' judgment and choice of action is still debatable. This study investigates whether Machiavellianism and gender, as significant personality factors, influence the effect of ethical awareness on accountants' decision-making. Using an experiment, the results of ANOVA tests show that although introducing ethical awareness positively influences accountants' judgment and choice of action, this effect is significantly moderated by the accountants' Machiavellianism score and gender. Specifically, the test results show that the effect of introducing ethical awareness was higher for males with low Machiavellianism scores. The results also show that when Machiavellianism scores were high, the effect of ethical awareness was lower for both males and females. Applications of the results are discussed for accounting professionals as well as accounting ethics educators and researchers.
Keywords: ethical awareness, accounting decision making, Machiavellianism, ANOVA, ethics, accounting education
Procedia PDF Downloads 115
18312 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution
Authors: A. Amar
Abstract:
A new model, namely the crystal model, has been modified to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model is adapted from solid-state physics and uses the analogy between the distribution of nucleons in a nucleus and the distribution of atoms in a crystal. The model provides an analytical expression for the radius, while the density distribution of light nuclei is obtained from the crystal-lattice analogy. The distribution of nucleons over the lattice is discussed in a general form. The equation used to calculate the binding energy was taken from the solid-state model of repulsive and attractive forces. The number of protons was taken to control the repulsive force, while the atomic number was responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a summation of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. A test was performed on the obtained radii and density distributions using double folding for d+⁶,⁷Li with the M3Y nucleon-nucleon interaction. Good agreement was obtained for both the radius and the density distribution of light nuclei. The model failed to calculate the radius of ⁹Be, so modifications should be made to overcome this discrepancy.
Keywords: nuclear physics, nuclear lattice, the nucleus as a crystal, light nuclei up to ⁸Be
Procedia PDF Downloads 178
18311 Cost-Effectiveness Analysis of the Use of COBLATION™ Knee Chondroplasty versus Mechanical Debridement in German Patients
Authors: Ayoade Adeyemi, Leo Nherera, Paul Trueman, Antje Emmermann
Abstract:
Background and objectives: Radiofrequency (RF)-generated plasma chondroplasty is considered a promising treatment alternative to mechanical debridement (MD) with a shaver. The aim of the study was to perform a cost-effectiveness analysis comparing costs and outcomes following COBLATION chondroplasty versus mechanical debridement in patients with knee pain associated with a medial meniscus tear and an idiopathic ICRS grade III focal lesion of the medial femoral condyle, from a payer perspective. Methods: A decision-analytic model was developed comparing economic and clinical outcomes between the two treatment options in German patients following knee chondroplasty. Revision rates based on the frequency of repeat arthroscopy, osteotomy and conversion to total knee replacement, reimbursement costs and outcomes data over a 4-year time horizon were extracted from the published literature. One-way sensitivity analyses were conducted to assess uncertainties around model parameters. Threshold analysis determined the revision rate at which the model results change. All costs were reported in 2016 euros; future costs were discounted at a 3% annual rate. Results: Over a 4-year period, COBLATION chondroplasty resulted in an overall net cost saving of €461 due to a lower revision rate of 14% compared to 48% with MD. Threshold analysis showed that both options were associated with comparable costs if the COBLATION revision rate was assumed to increase to 23%. The initial procedure costs for COBLATION were higher than for MD, while outcome scores were significantly improved at 1 and 4 years post-operation versus MD. Conclusion: The analysis shows that COBLATION chondroplasty is a cost-effective option compared to mechanical debridement in the treatment of patients with a medial meniscus tear and an idiopathic ICRS grade III defect of the medial femoral condyle.
Keywords: COBLATION, cost-effectiveness, knee chondroplasty, mechanical debridement
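The arithmetic of such a decision-analytic comparison can be sketched as below. Only the 14% versus 48% revision rates, the 4-year horizon and the 3% discount rate come from the abstract; the index procedure cost, revision cost and revision timing are hypothetical placeholders, so the printed numbers will not reproduce the €461 figure.

```python
# Illustrative payer-perspective model; procedure and revision costs are
# hypothetical placeholders, only the revision rates (14% vs 48%), the
# 4-year horizon and the 3% discount rate come from the abstract.
DISCOUNT = 0.03

def expected_cost(index_cost, revision_rate, revision_cost, revision_year=2):
    """Index procedure cost plus discounted expected revision cost."""
    discounted_revision = revision_cost / (1 + DISCOUNT) ** revision_year
    return index_cost + revision_rate * discounted_revision

coblation = expected_cost(index_cost=3200.0, revision_rate=0.14, revision_cost=5500.0)
debridement = expected_cost(index_cost=2800.0, revision_rate=0.48, revision_cost=5500.0)
print(f"COBLATION expected cost  : {coblation:8.0f}")
print(f"Debridement expected cost: {debridement:8.0f}")
print(f"Net difference           : {debridement - coblation:8.0f}")

# Threshold analysis: find the COBLATION revision rate at which the two
# strategies cost the same.
rate = 0.14
while expected_cost(3200.0, rate, 5500.0) < debridement and rate < 1.0:
    rate += 0.001
print(f"break-even COBLATION revision rate ~ {rate:.2f}")
```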
Procedia PDF Downloads 394
18310 Price Effect Estimation of Tobacco on Low-wage Male Smokers: A Causal Mediation Analysis
Authors: Kawsar Ahmed, Hong Wang
Abstract:
The study's goal was to estimate the causal mediation effect of a tobacco tax before and after price hikes among low-income male smokers, with particular emphasis on the pathway framework for estimating effects with continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n=739) were collected from Bangladeshi low-wage smokers. A quasi-Bayesian technique, a binomial probit model, and a simulation-based sensitivity analysis implemented with the R mediation package were used to estimate the effect. After a price rise for tobacco products, the average number of cigarette or bidi sticks consumed decreased from 6.7 to 4.56. Rising tobacco prices have a direct effect on low-income people's decisions to quit or reduce their daily smoking: Average Causal Mediation Effect (ACME) [effect=2.31, 95% confidence interval (C.I.) = (4.71-0.00), p<0.01], Average Direct Effect (ADE) [effect=8.6, 95% C.I. = (6.8-0.11), p<0.001], and a significant overall effect (p<0.001). The smoking choice is partly explained by the income-mediated effect, with a mediated proportion of 26.1% following the price rise. In the sensitivity analysis, the ACME and ADE curves are examined against the coefficients of determination to assess the robustness of the estimated post-price-rise effects. Price increases through taxation show a positive causal mediation through income that affects the decision to limit tobacco use, and they can inform healthcare policy for low-income men.
Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation
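The quasi-Bayesian mediation logic can be illustrated in simplified form. The paper works in R with a probit outcome model; the sketch below uses linear models and synthetic data purely to show how ACME and ADE draws are formed, and every number in it is invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic stand-in data: price-hike exposure T, an income-related
# mediator M, and daily sticks smoked Y (all values illustrative).
n = 739
T = rng.integers(0, 2, n)                            # before/after price hike
M = 5.0 - 1.0 * T + rng.normal(0, 1, n)              # mediator model: M ~ T
Y = 8.0 - 1.5 * T + 0.8 * M + rng.normal(0, 1, n)    # outcome model: Y ~ T + M

med = sm.OLS(M, sm.add_constant(T)).fit()
out = sm.OLS(Y, sm.add_constant(np.column_stack([T, M]))).fit()

# Quasi-Bayesian approximation: draw coefficients from their asymptotic
# normal distributions and form the product-of-coefficients ACME = a * b.
sims = 2000
a_draws = rng.multivariate_normal(med.params, med.cov_params(), sims)[:, 1]
out_draws = rng.multivariate_normal(out.params, out.cov_params(), sims)
acme = a_draws * out_draws[:, 2]                     # mediated (indirect) effect
ade = out_draws[:, 1]                                # direct effect of the hike

print("ACME mean {:.3f}, 95% CI ({:.3f}, {:.3f})".format(
    acme.mean(), *np.percentile(acme, [2.5, 97.5])))
print("ADE  mean {:.3f}, 95% CI ({:.3f}, {:.3f})".format(
    ade.mean(), *np.percentile(ade, [2.5, 97.5])))
```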
Procedia PDF Downloads 114
18309 Understanding Tacit Knowledge and DIKW
Authors: Bahadir Aydin
Abstract:
Today it is difficult to reach accurate knowledge because of the sheer mass of data. This huge amount of data makes the environment more and more chaotic. Data is a main pillar of intelligence, and there is a close tie between knowledge and intelligence. Information gathered from different sources can be modified, interpreted and classified by using the knowledge development process, which is applied in order to attain intelligence. Within this process the effect of knowledge is crucial. Knowledge is classified as explicit and tacit knowledge. Tacit knowledge can be seen as "only the tip of the iceberg", yet it accounts for much more than we guess throughout the intelligence cycle. If the concept of intelligence is scrutinized, it can be seen that it contains risks and threats as well as success. The main purpose of every organization is to be successful by eliminating risks and threats. Therefore, there is a need to connect or fuse existing information and the processes which can be used to develop it. With the help of this process, the decision-maker can be presented with a clear, holistic understanding as early as possible in the decision-making process. Planning, execution and assessment are the key functions that connect information to knowledge. Shifting from the current traditional reactive approach to a proactive knowledge development approach would reduce extensive duplication of work in the organization. With this new approach to the process, knowledge can be used more effectively.
Keywords: knowledge, intelligence cycle, tacit knowledge, DIKW
Procedia PDF Downloads 520
18308 Efficient Bargaining versus Right to Manage in the Era of Liberalization
Authors: Panagiota Koliousi, Natasha Miaouli
Abstract:
We compare product and labour market liberalization under the two trade union bargaining models: the Right-to-Manage (RTM) model and the Efficient Bargaining (EB) model. The vehicle is a dynamic general equilibrium (DGE) model that incorporates two types of agents (capitalists and workers) and imperfectly competitive product and labour markets. The model is solved numerically employing common parameter values and data from the euro area. A key message is that product market deregulation is favourable under any labour market structure, while when opting for labour market deregulation one should pay special attention to the structure of the labour market, such as the bargaining system of the unions. If the prevailing way of bargaining is the RTM model, then restructuring both markets is beneficial for all agents.
Keywords: market structure, structural reforms, trade unions, unemployment
Procedia PDF Downloads 197
18307 Nudging the Criminal Justice System into Listening to Crime Victims in Plea Agreements
Authors: Dana Pugach, Michal Tamir
Abstract:
Most criminal cases end with a plea agreement, an issue whose many aspects have been discussed extensively in legal literature. One important feature, however, has gained little notice, and that is crime victims’ place in plea agreements following the federal Crime Victims Rights Act of 2004. This law has provided victims some meaningful and potentially revolutionary rights, including the right to be heard in the proceeding and a right to appeal against a decision made while ignoring the victim’s rights. While victims’ rights literature has always emphasized the importance of such right, references to this provision in the general literature about plea agreements are sparse, if existing at all. Furthermore, there are a few cases only mentioning this right. This article purports to bridge between these two bodies of legal thinking – the vast literature concerning plea agreements and victims’ rights research– by using behavioral economics. The article will, firstly, trace the possible structural reasons for the failure of this right to be materialized. Relevant incentives of all actors involved will be identified as well as their inherent consequential processes that lead to the victims’ rights malfunction. Secondly, the article will use nudge theory in order to suggest solutions that will enhance incentives for the repeat players in the system (prosecution, judges, defense attorneys) and lead to the strengthening of weaker group’s interests – the crime victims. Behavioral psychology literature recognizes that the framework in which an individual confronts a decision can significantly influence his decision. Richard Thaler and Cass Sunstein developed the idea of ‘choice architecture’ - ‘the context in which people make decisions’ - which can be manipulated to make particular decisions more likely. Choice architectures can be changed by adjusting ‘nudges,’ influential factors that help shape human behavior, without negating their free choice. The nudges require decision makers to make choices instead of providing a familiar default option. In accordance with this theory, we suggest a rule, whereby a judge should inquire the victim’s view prior to accepting the plea. This suggestion leaves the judge’s discretion intact; while at the same time nudges her not to go directly to the default decision, i.e. automatically accepting the plea. Creating nudges that force actors to make choices is particularly significant when an actor intends to deviate from routine behaviors but experiences significant time constraints, as in the case of judges and plea bargains. The article finally recognizes some far reaching possible results of the suggestion. These include meaningful changes to the earlier stages of criminal process even before reaching court, in line with the current criticism of the plea agreements machinery.Keywords: plea agreements, victims' rights, nudge theory, criminal justice
Procedia PDF Downloads 323
18306 Non-Autonomous Seasonal Variation Model for Vector-Borne Disease Transferral in Kampala of Uganda
Authors: Benjamin Aina Peter, Amos Wale Ogunsola
Abstract:
In this paper, a mathematical model of malaria transmission is presented that includes the effect of seasonal shift, due to global fluctuations in temperature, on the growth of the vector of the infectious disease, which probably alters the regional transmission potential of malaria. A deterministic compartmental model was proposed and analyzed qualitatively, and both qualitative and quantitative approaches to the model were considered. The next-generation matrix is employed to determine the basic reproduction number of the model. The equilibrium points of the model were determined and analyzed. The numerical simulation is carried out using Microsoft Excel to validate and support the qualitative results. From the analysis of the results, the optimal temperature for the transmission of malaria is between and . The results also show that an increase in temperature due to seasonal shift gives rise to the development of parasites, which consequently leads to more widespread malaria transmission in Kampala. It is also seen from the results that an increase in temperature leads to an increase in the number of infectious human hosts and mosquitoes.
Keywords: seasonal variation, indoor residual spray, efficacy of spray, temperature-dependent model
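For a feel of how temperature enters such a compartmental model, the sketch below integrates a generic Ross-Macdonald-type system with a temperature-dependent biting rate. The compartments, the Briere-type temperature response and every parameter value are illustrative assumptions, not the calibrated Kampala model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic Ross-Macdonald-style sketch; all parameters are illustrative.
def briere(T, c=2.0e-4, T_min=16.0, T_max=38.0):
    """Briere-type temperature response for the mosquito biting rate."""
    if T <= T_min or T >= T_max:
        return 0.0
    return c * T * (T - T_min) * np.sqrt(T_max - T)

def rhs(t, y, T):
    Ih, Iv = y                      # fractions of infectious humans, mosquitoes
    a = briere(T)                   # temperature-dependent biting rate
    b, c = 0.3, 0.5                 # transmission probabilities per bite
    m = 5.0                         # mosquitoes per human
    r, mu = 1 / 60.0, 1 / 14.0      # human recovery, mosquito death rates
    dIh = m * a * b * Iv * (1 - Ih) - r * Ih
    dIv = a * c * Ih * (1 - Iv) - mu * Iv
    return [dIh, dIv]

for temp in (22.0, 27.0, 32.0):
    sol = solve_ivp(rhs, (0, 365), [0.01, 0.01], args=(temp,), max_step=1.0)
    # Basic reproduction number of this sketch: R0 = m * a^2 * b * c / (r * mu)
    a = briere(temp)
    R0 = 5.0 * a**2 * 0.3 * 0.5 / ((1 / 60.0) * (1 / 14.0))
    print(f"T = {temp:4.1f} C  R0 = {R0:5.2f}  "
          f"endemic infectious humans = {sol.y[0, -1]:.3f}")
```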
Procedia PDF Downloads 169
18305 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: linked open data, information integration, digital libraries, data mining
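A minimal sketch of the naive Bayes recommendation step is given below; the risk-factor names, the 0/1/2 encoding and the endangerment groups are hypothetical stand-ins for the survey-derived knowledge base described in the paper.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Hypothetical institutional risk-factor vectors (each factor encoded as
# 0 = low, 1 = medium, 2 = high risk); factor names and endangerment groups
# are invented for illustration only.
# factors: [format obsolescence, software support, community adoption, migration cost]
X = np.array([
    [2, 2, 1, 2],
    [0, 0, 0, 1],
    [1, 2, 2, 2],
    [0, 1, 0, 0],
    [2, 1, 2, 1],
    [0, 0, 1, 0],
])
y = ["high", "low", "high", "low", "high", "low"]   # endangerment group

clf = CategoricalNB(alpha=1.0)       # Laplace smoothing
clf.fit(X, y)

new_profile = np.array([[1, 2, 1, 2]])
print("recommended group:", clf.predict(new_profile)[0])
print("class probabilities:",
      dict(zip(clf.classes_, clf.predict_proba(new_profile)[0].round(3))))
```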
Procedia PDF Downloads 428
18304 A Two Stage Stochastic Mathematical Model for the Tramp Ship Routing with Time Windows Problem
Authors: Amin Jamili
Abstract:
Nowadays, the majority of international trade in goods is carried by sea, especially by ships deployed in the industrial and tramp segments. This paper addresses routing tramp ships and determining their schedules, including arrival times at the ports, berthing times at the ports, and departure times, at the operational planning level. At the operational planning level, the weather can be forecast almost exactly; however, on some routes some uncertainties may remain. In this paper, the voyaging times between some of the ports are considered to be uncertain. To that end, a two-stage stochastic mathematical model is proposed. Moreover, a case study is solved with the presented model. The computational results show that this mathematical model is promising and can produce acceptable solutions.
Keywords: routing, scheduling, tramp ships, two stage stochastic model, uncertainty
Procedia PDF Downloads 438
18303 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk for suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e. sensors, CPUs, modular / auxiliary access, etc…) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both concpetualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) as well as popular multi-sensor data fusion models and techniques. Furthermore, statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating a significant utility within the realm of formal systems decision-making.Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
Procedia PDF Downloads 189
18302 Conduction Model Compatible for Multi-Physical Domain Dynamic Investigations: Bond Graph Approach
Abstract:
In the current paper, a domain-independent conduction model compatible with multi-physical system dynamic investigations is proposed. By means of a port-based approach, a classical nonlinear conduction model containing physical states is first represented. A discrete configuration of the thermal domain, compatible with the elastic domain, is then generated by enhancing the configuration of the conventional thermal element. The simulation results presented for a sample structure indicate that the proposed conduction model can cover a wide range of dynamic behavior of the thermal domain.
Keywords: multi-physical domain, conduction model, port based modeling, dynamic interaction, physical modeling
Procedia PDF Downloads 277
18301 Enhancement of Indexing Model for Heterogeneous Multimedia Documents: User Profile Based Approach
Authors: Aicha Aggoune, Abdelkrim Bouramoul, Mohamed Khiereddine Kholladi
Abstract:
Recent research shows that the user profile, as an important element, can improve heterogeneous information retrieval through its content. In this context, we present our indexing model for heterogeneous multimedia documents. This model is based on combining the user profile with the indexing process. The general idea of our proposal is to exploit the concepts common to the representation of a document and the definition of a user through their profile. These elements are added as additional indexing entities to enrich the indexes of the heterogeneous document corpus. We have developed the IRONTO domain ontology, which allows the annotation of documents, and we also present the tool developed to validate the proposed model.
Keywords: indexing model, user profile, multimedia document, heterogeneity of sources, ontology
Procedia PDF Downloads 349
18300 An Improved C-Means Model for MRI Segmentation
Authors: Ying Shen, Weihua Zhu
Abstract:
Medical images are important for identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step, identifying regions with similar color, intensity or texture so that diagnosis can then be carried out based on these features. This paper introduces an improved C-means model for segmenting MRI images. The model uses information entropy to evaluate the segmentation results while achieving global optimization. The contributions are twofold. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization, which the fuzzy C-means clustering algorithm (FCMA) alone is not capable of. Secondly, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
Keywords: magnetic resonance image (MRI), c-means model, image segmentation, information entropy
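The fuzzy C-means core and the entropy measure can be sketched as follows; the GA-based global search the paper adds is only indicated by a comment, and the one-dimensional toy intensities stand in for real MRI data.

```python
import numpy as np

rng = np.random.default_rng(4)

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, eps=1e-12):
    """Plain fuzzy C-means; the paper replaces the random start below with a
    genetic-algorithm search so the final partition is globally optimised."""
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)            # random memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

def partition_entropy(U):
    """Shannon entropy of the membership matrix; lower values indicate a
    crisper, better-separated segmentation."""
    return float(-(U * np.log(U + 1e-12)).sum() / U.shape[0])

# Toy stand-in for MRI intensities: three tissue classes in one feature.
X = np.concatenate([rng.normal(0.2, 0.03, 200),
                    rng.normal(0.5, 0.03, 200),
                    rng.normal(0.8, 0.03, 200)]).reshape(-1, 1)

U, centers = fuzzy_c_means(X, c=3)
print("cluster centers:", np.sort(centers.ravel()).round(3))
print("partition entropy:", round(partition_entropy(U), 3))
```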
Procedia PDF Downloads 228
18299 Evaluation Framework for Investments in Rail Infrastructure Projects
Authors: Dimitrios J. Dimitriou, Maria F. Sartzetaki
Abstract:
Transport infrastructures are high-cost, long-term investments that serve as vital foundations for the operation of a region or nation and are essential to a country's or business's economic development and prosperity by improving well-being and generating jobs and income. The development of appropriate financing options is of key importance in the decision-making process in order to develop viable transport infrastructures. The development of transport infrastructure has increasingly been shifting toward alternative methods of project financing such as Public-Private Partnerships (PPPs) and hybrid forms. In this paper, a methodological decision-making framework is presented, based on evaluating the financial viability of transport infrastructure under different financing schemes. The framework assesses financial viability by performing analyses of various financing scenarios. To illustrate the application of the proposed methodology, a case study of rail transport infrastructure financing scenario analysis in Greece is developed.
Keywords: rail transport infrastructure, financial viability, scenario analysis, rail project feasibility
Procedia PDF Downloads 279
18298 Exchange Rate, Market Size and Human Capital Nexus Foreign Direct Investment: A Bound Testing Approach for Pakistan
Authors: Naveed Iqbal Chaudhry, Mian Saqib Mehmood, Asif Mehmood
Abstract:
This study investigates the motivators of foreign direct investment (FDI) in the case of Pakistan, providing a useful policy tool and noteworthy results related to them. The study considers the exchange rate, market size and human capital as the motivators for attracting FDI. In this regard, annual time series data were collected for the period 1985–2010, and Augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) unit root tests were used to determine the stationarity of the variables. A bounds testing approach to cointegration was applied because the variables included in the model are I(1), i.e. stationary in first differences. The empirical findings of this study confirm a long-run relationship among the variables. Market size and human capital have a strong, positive and significant impact on attracting FDI in both the short and the long run, while the exchange rate shows a negative impact. The significant negative coefficient of the error-correction term indicates convergence towards equilibrium. The CUSUM and CUSUMSQ test plots lie within the critical bounds, which indicates the stability of the estimated parameters. This model can be used by Pakistan in policy and decision making. To achieve higher economic growth and economies of scale, the country should concentrate on the factors identified in this study so that it can attract more FDI than other countries.
Keywords: ARDL, CUSUM and CUSUMSQ tests, ECM, exchange rate, FDI, human capital, market size, Pakistan
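The unit-root step can be illustrated with statsmodels as below; the series is synthetic (a random walk standing in for the 1985–2010 annual data), and the ARDL bounds F-test and Phillips–Perron test that the study also applies are only noted in comments.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)

# Synthetic stand-in for an annual macro series (1985-2010, n = 26):
# a random walk, so it should look non-stationary in levels and
# stationary in first differences, i.e. I(1).
fdi = np.cumsum(rng.normal(0.5, 1.0, size=26))

for name, series in [("levels", fdi), ("first differences", np.diff(fdi))]:
    stat, pvalue, *_ = adfuller(series, regression="c", autolag="AIC")
    print(f"ADF on {name:17s}: statistic = {stat:6.2f}, p-value = {pvalue:.3f}")

# Once the variables are confirmed I(1) (or a mix of I(0)/I(1)), the ARDL
# bounds F-test on the unrestricted error-correction form decides whether a
# long-run (cointegrating) relationship exists; the Phillips-Perron test
# follows the same stationary/non-stationary logic with a different
# correction for serial correlation.
```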
Procedia PDF Downloads 395
18297 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business intelligence is a methodology that systematically exploits data to produce information and knowledge, and it can support the decision-making process. Two core methods in business intelligence are the data warehouse and data mining. A data warehouse stores historical data derived from transactional data. For data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns and gain insight from the data. Data mining offers many techniques, one of which is segmentation. For profiling telecommunication customers, we segment customers according to their usage of services, their invoices and their payments. Customers can be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, using the RFM (Recency, Frequency and Monetary) model as the input variables. The entire data mining process is carried out with IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
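A compact sketch of the segmentation step follows. The paper performs it in IBM SPSS Modeler; scikit-learn is used here only to show the RFM-plus-K-Means idea, and the customer table is invented.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical RFM table built from invoice and payment records:
# recency   = days since last service usage,
# frequency = number of chargeable events in the period,
# monetary  = total billed amount.
rfm = pd.DataFrame({
    "customer_id": [101, 102, 103, 104, 105, 106],
    "recency":     [5, 40, 3, 60, 12, 90],
    "frequency":   [42, 6, 55, 2, 30, 1],
    "monetary":    [310.0, 45.0, 520.0, 20.0, 260.0, 12.0],
})

features = StandardScaler().fit_transform(rfm[["recency", "frequency", "monetary"]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
rfm["segment"] = kmeans.fit_predict(features)

# Profitable segments show low recency, high frequency and high monetary value.
print(rfm.groupby("segment")[["recency", "frequency", "monetary"]].mean())
```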
Procedia PDF Downloads 486