Search results for: cluster model approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26806


24766 Supervisory Board in the Governance of Cooperatives: Disclosing Power Elements in the Selection of Directors

Authors: Kari Huhtala, Iiro Jussila

Abstract:

The supervisory board is assumed to use power in the governance of a firm, but its actual use of power has scarcely been investigated. The research question of the paper is: how does the supervisory board use power in the selection of the board of directors? The data stem from 11 large Finnish agricultural cooperatives. The research approach was qualitative, including semi-structured interviews with board of directors and supervisory board chairpersons. The results were analyzed and interpreted against theories of social power. As a result, the use of power is approached from two perspectives: (1) formal position-based authority and (2) informal power. Central elements of power were the mandate of the supervisory board, the role of the supervisory board, the supervisory board chair, the nomination committee, collaboration between the supervisory board and the board of directors, the role of regions, and the role of the board of directors. The study contributes to the academic discussion on corporate governance in cooperatives and on the supervisory board in the context of the two-tier model. Additional research on the model in other countries and on other types of cooperatives would further academic understanding of supervisory boards.

Keywords: board, co-operative, supervisory board, selection, director

Procedia PDF Downloads 157
24765 Integration of an Innovative Complementary Approach Inspired by Clinical Hypnosis into Oncology Care: Nurses’ Perception of Comfort Talk

Authors: Danny Hjeij, Karine Bilodeau, Caroline Arbour

Abstract:

Background: Chemotherapy infusions often lead to a cluster of co-occurring and difficult-to-treat symptoms (nausea, tingling, etc.), which may negatively impact the treatment experience at the outpatient clinic. Although several complementary approaches have shown beneficial effects for chemotherapy-induced symptom management, they are not easily implementable during chemotherapy infusion. In response to this limitation, comfort talk (CT), a simple, fast conversational method inspired by the language principles of clinical hypnosis, is known to optimize the management of symptoms related to antineoplastic treatments. However, the perception of nurses who have had to integrate this practice into their care has never been documented. Study design: A qualitative descriptive study with iterative content analysis was conducted among oncology nurses working in a chemotherapy outpatient clinic who had previous experience with CT. Semi-structured interviews were conducted by phone, using a pre-tested interview guide and a sociodemographic survey, to document their perception of CT. Results: A total of six nurses (4 women, 2 men) took part in the interviews. The average age of participants was 49 years (36-61 years). Participants had an average of 24 years of experience (10-38 years) as a nurse, including 14.5 years in oncology (5-32 years). Data saturation (i.e., redundancy of words) was observed around the fifth interview; a sixth interview was conducted as confirmation. Six themes emerged, addressing contextual and organizational obstacles at the chemotherapy outpatient clinic as well as the added value of CT for oncology nursing care.
Specific themes included: 1) the outpatient oncology clinic, a saturated care setting, 2) the keystones that support the integration of CT into care, 3) added value for patients, 4) a positive and rewarding experience for nurses, 5) collateral benefits, and 6) CT as an approach to consider during the COVID-19 pandemic. Conclusion: For the first time, this study describes nurses' perception of the integration of CT into the care surrounding the administration of chemotherapy at the outpatient oncology clinic. In summary, contextual and organizational difficulties, as well as the lack of training, are among the main obstacles that could hinder the integration of CT in oncology. Still, the experience was reported as mostly positive. Indeed, nurses saw CT as an added value to patient care that meets patients' need for holistic care. CT also appears to be beneficial for patients on several levels (for pain management in particular). Results will be used to inform future knowledge transfer activities related to CT in oncology nursing.

Keywords: cancer, chemotherapy, comfort talk, oncology nursing role

Procedia PDF Downloads 69
24764 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features are known not only to slow algorithms down but also to reduce model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios; their impact on model accuracy is captured using the coefficient of determination (R-squared) and root mean square error (RMSE).
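The shadow-feature idea behind Boruta can be sketched briefly. The snippet below is a minimal single-round illustration using scikit-learn's RandomForestRegressor on synthetic data, not the paper's housing dataset or the full Boruta procedure (which repeats this comparison over many runs with statistical testing); all names, sizes, and thresholds are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 300, 8
X = rng.normal(size=(n, p))
# Only the first three features actually drive the response.
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=n)

# One Boruta-style round: append a shuffled "shadow" copy of every feature,
# then confirm features whose importance beats the strongest shadow.
shadows = rng.permuted(X, axis=0)            # each column shuffled independently
Xa = np.hstack([X, shadows])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xa, y)
imp = rf.feature_importances_
threshold = imp[p:].max()                    # importance of the best shadow
confirmed = np.where(imp[:p] > threshold)[0]
print("confirmed features:", confirmed)
```

Because the shadows are pure noise by construction, any real feature whose importance exceeds the best shadow's is unlikely to be noise, which is the core of the wrapper approach described above.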

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 305
24763 Positioning Organisational Culture in Knowledge Management Research

Authors: Said Al Saifi

Abstract:

This paper proposes a conceptual model for understanding the impact of organisational culture on knowledge management processes and their link with organisational performance. It is suggested that organisational culture should be assessed as a multi-level construct comprising artifacts, espoused beliefs and values, and underlying assumptions. A holistic view of organisational culture and knowledge management processes, and their link with organisational performance, is presented, based on a comprehensive review of previous literature. Taken together, the literature and the proposed model reveal possible relationships between organisational culture, knowledge management processes, and organisational performance, and the potential implications of each cultural level for the creation, sharing, and application of knowledge are elaborated. In addition, the paper offers possible new insight into the impact of organisational culture on various knowledge management processes and their link with organisational performance. Through a conceptualisation of these relationships, the study provides practical guidance for practitioners during the implementation of knowledge management processes. The focus of previous research on knowledge management has been on understanding organisational culture from the limited perspective of promoting knowledge creation and sharing.
This paper proposes a more comprehensive approach in that it draws on artifacts, espoused beliefs and values, and underlying assumptions, and reveals their impact on the creation, sharing, and application of knowledge, which can affect overall organisational performance.

Keywords: knowledge application, knowledge creation, knowledge management, knowledge sharing, organisational culture, organisational performance

Procedia PDF Downloads 558
24762 Design for Error-Proofing Assembly: A Systematic Approach to Prevent Assembly Issues since Early Design Stages, an Industrial Case Study

Authors: Gabriela Estrada, Joaquim Lloveras

Abstract:

Design for error-proofing assembly is a new DFX approach to prevent, from the early design stages, assembly issues that can occur during the life phases of a system: the production, installation, operation, and replacement phases. This prevention is possible by designing the product with poka-yoke, or error-proofing, characteristics. The approach guides designers to make decisions based on poka-yoke assembly design requirements. By applying these requirements, designers are able to create solutions that prevent assembly issues for the product at the development stage. This paper integrates the need to design products in an error-proofing way into the systematic design process of Pahl and Beitz. A case study applying this approach is presented.

Keywords: poka-yoke, error-proofing, assembly issues, design process, life phases of a system

Procedia PDF Downloads 360
24761 Design for Error-Proofing Assembly: A Systematic Approach to Prevent Assembly Issues since Early Design Stages. An Industry Case Study

Authors: Gabriela Estrada, Joaquim Lloveras

Abstract:

Design for error-proofing assembly is a new DFX approach to prevent, from the early design stages, assembly issues that can occur during the life phases of a system: the production, installation, operation, and replacement phases. This prevention is possible by designing the product with poka-yoke, or error-proofing, characteristics. The approach guides designers to make decisions based on poka-yoke assembly design requirements. By applying these requirements, designers are able to create solutions that prevent assembly issues for the product at the development stage. This paper integrates the need to design products in an error-proofing way into the systematic design process of Pahl and Beitz. A case study applying this approach is presented.

Keywords: poka-yoke, error-proofing, assembly issues, design process, life phases of a system

Procedia PDF Downloads 302
24760 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach

Authors: Godwin Chigozie Okpara

Abstract:

Using standard GARCH, EGARCH, and TARCH models on a day-of-the-week return series (246 days) from the Nigerian stock market, this paper estimated the VaR of the model variants. The asymmetric return distribution and fat-tail phenomenon in financial time series were considered by estimating the models with normal, Student's t, and generalized error distributions. The analysis, based on the Akaike Information Criterion, suggests that the EGARCH model with Student's t innovation distribution furnishes the more accurate estimate of VaR. In light of this, we apply the Kupiec likelihood ratio test of proportional failure rates to the VaR derived from the EGARCH model in order to assess its performance for short and long positions. The result shows that, as alpha ranges from 0.05 to 0.005, the failure rate for short positions significantly exceeds the prescribed quantiles, while there is no significant difference between the failure rate and the prescribed quantiles for long positions. This suggests that investors and portfolio managers in the Nigerian stock market hold long trading positions, i.e., buy assets, with their main concern being a fall in asset prices. Precisely, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
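The Kupiec proportional-failure-rate test referred to above compares the observed number of VaR exceedances with the number implied by the VaR level. A minimal sketch follows; the sample size matches the paper's 246 days, but the exceedance counts are invented for illustration.

```python
from math import log

def kupiec_lr(n_obs, n_fail, alpha):
    """Kupiec proportion-of-failures LR statistic. Under H0 (the observed
    failure rate equals the VaR level alpha) it is asymptotically
    chi-squared with 1 df; values above 3.84 reject at the 5% level."""
    def loglik(p):
        # Binomial log-likelihood of n_fail exceedances, with 0*log(0) = 0.
        a = n_fail * log(p) if n_fail else 0.0
        b = (n_obs - n_fail) * log(1 - p) if n_fail < n_obs else 0.0
        return a + b
    p_hat = n_fail / n_obs
    return -2 * (loglik(alpha) - loglik(p_hat))

# 246 trading days at the 5% VaR level: about 12 exceedances expected under H0.
print(kupiec_lr(246, 12, 0.05))   # small: failure rate consistent with 5%
print(kupiec_lr(246, 30, 0.05))   # large: failure rate significantly exceeds 5%
```

The second call mimics the short-position situation described in the abstract, where the failure rate exceeds the prescribed level and the test rejects.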

Keywords: downside risk, value-at-risk, failure rate, Kupiec LR tests, GARCH models

Procedia PDF Downloads 427
24759 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance their quality by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage uses deep learning models: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the selected optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
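The class-imbalance step can be illustrated with a plain SMOTE-style interpolation. The sketch below omits the K-means clustering stage of the actual K-means SMOTE model and uses made-up minority-class data; every name and parameter is illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen sample and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()                   # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

rng = np.random.default_rng(1)
fraud = rng.normal(loc=3.0, size=(20, 4))    # made-up minority (fraud) samples
extra = smote_oversample(fraud, n_new=80)
print(extra.shape)
```

The synthetic points lie on segments between real minority samples, so the balanced training set stays inside the minority-class region rather than duplicating points.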

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 114
24758 Numerical Approach of RC Structural Members Exposed to Fire and After-Cooling Analysis

Authors: Ju-young Hwang, Hyo-Gyoung Kwak, Hong Jae Yim

Abstract:

This paper introduces a numerical analysis method for reinforced-concrete (RC) structures exposed to fire and compares the results with experimental results. The proposed analysis method for RC structures under high temperature consists of two procedures. The first step is to determine the temperature distribution across the section through heat transfer analysis using the time-temperature curve. After the temperature distribution is determined, nonlinear analysis follows. By considering material and geometrical nonlinearity together with the temperature distribution, the nonlinear analysis predicts the behavior of the RC structure under fire as a function of exposure time. The proposed method is validated by comparison with experimental results. Finally, a prediction model describing the state of after-cooling concrete is also introduced, based on the results of an additional experiment. The product of this study is expected to be embedded in smart structure monitoring systems against fire in the u-City.
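The first procedure, obtaining the temperature distribution across the section, can be illustrated with a one-dimensional explicit finite-difference conduction sketch. The material constants below are generic values for concrete, not the study's calibrated parameters, and the fixed-temperature boundary is a simplification of a real time-temperature curve.

```python
import numpy as np

# Explicit 1-D transient conduction through a 0.2 m concrete section.
L, nx = 0.2, 41                  # section depth (m) and number of grid points
alpha = 8e-7                     # thermal diffusivity of concrete (m^2/s), generic value
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha         # stable step (explicit scheme needs dt <= dx^2 / (2*alpha))

T = np.full(nx, 20.0)            # initial temperature (deg C)
T[0] = 800.0                     # fire-exposed face held at 800 deg C
                                 # far face stays at 20 deg C (Dirichlet condition)
t, t_end = 0.0, 3600.0           # simulate one hour of exposure
while t < t_end:
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt

print(round(T[nx // 2], 1))      # temperature at mid-depth after one hour
```

The resulting profile, sampled at reinforcement depths, is what feeds the subsequent nonlinear analysis with temperature-degraded material properties.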

Keywords: RC structures, heat transfer analysis, nonlinear analysis, after-cooling concrete model

Procedia PDF Downloads 351
24757 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research asks which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transfers are essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the last two representing the alternative transit network designs. The model optimizes the network configuration with regard to total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if it is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in that case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology indicates the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
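The concentration dimension, measured by the Gini coefficient, can be computed directly from zonal trip counts. A short sketch with invented data:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array: 0 for a perfectly even
    distribution, approaching 1 for extreme concentration."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Standard formula based on the ordered values.
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

even = np.full(10, 100.0)                        # trips spread evenly over 10 zones
concentrated = np.array([910.0] + [10.0] * 9)    # almost all trips in one zone
print(gini(even), gini(concentrated))
```

A low value would place a city near the concentrated end of the applicability map (favoring a radial structure), while a high value pushes it toward the transfer-based design.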

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 219
24756 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same locations that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable is modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function; in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross-blocks capture the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
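The nearest-neighbor conditioning underlying the Block-NNGP can be illustrated, in simplified unblocked form, by a Vecchia-type approximation of a Gaussian process log-likelihood, in which each observation is conditioned only on a few nearby earlier points. Covariance parameters, locations, and data below are all illustrative.

```python
import numpy as np

def expcov(s1, s2, sigma2=1.0, phi=5.0):
    """Exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=2)
    return sigma2 * np.exp(-phi * d)

def vecchia_loglik(y, locs, m=5):
    """Vecchia / NNGP-style log-likelihood: each observation is conditioned
    only on (at most) its m nearest previously-ordered neighbors."""
    n = len(y)
    ll = 0.0
    for i in range(n):
        if i == 0:
            mu, var = 0.0, expcov(locs[:1], locs[:1])[0, 0]
        else:
            d = np.linalg.norm(locs[:i] - locs[i], axis=1)
            nb = np.argsort(d)[:m]                    # nearest earlier points
            C_nn = expcov(locs[nb], locs[nb])
            c_in = expcov(locs[i:i + 1], locs[nb])[0]
            w = np.linalg.solve(C_nn, c_in)
            mu = w @ y[nb]
            var = expcov(locs[i:i + 1], locs[i:i + 1])[0, 0] - w @ c_in
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(0)
locs = rng.random((60, 2))                            # 60 random sites
C = expcov(locs, locs) + 1e-8 * np.eye(60)
y = np.linalg.cholesky(C) @ rng.normal(size=60)       # one GP realization
ll = vecchia_loglik(y, locs, m=5)
print(ll)   # close to the exact GP log-likelihood at a fraction of the cost
```

The Block-NNGP replaces the per-point neighbor sets with dependent blocks, and the paper pairs the resulting GMRF with INLA rather than evaluating the likelihood directly, but the sparsity mechanism is the same.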

Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models

Procedia PDF Downloads 80
24755 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability

Authors: Rui Calejo Rodrigues

Abstract:

Building refurbishing and maintenance is a major area of knowledge that is ultimately dispensed to user/occupant criteria. Optimizing the service life of a building requires a special background, as it is one of those concepts that needs proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, states a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences because users/occupants cannot readily perceive the end of a component's life span and simply act on deterministic periods, so costly and resource-consuming solutions fail to meet global targets of planet sustainability. For the estimated 2 thousand million conventional buildings in the world, a probabilistic method for service life planning rather than a deterministic one would provide an immense amount of resource savings. Since 1989, the research team, now the CEES (Center for Building in Service Studies), has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life span of building components, costs, and service life care time spans. The research question of this study concerns the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic lifespan approach is presented, and experimental data are obtained and compared with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed replacement schedules such as those resulting from the use of deterministic models. Major conclusions based on the conventional building estimate are presented and evaluated from a sustainability perspective.
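A Monte Carlo treatment of component service life, in the spirit of the methodology described, can be sketched as follows. The distribution choice and parameters are invented for illustration, not the CEES model's calibrated inputs.

```python
import numpy as np

def simulate_replacements(mean_life, sd_life, horizon, n_sims=10000, seed=0):
    """Monte Carlo estimate of the number of component replacements over a
    planning horizon, with service life drawn from a (truncated) normal."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        t, n = 0.0, 0
        while True:
            life = max(rng.normal(mean_life, sd_life), 1.0)  # avoid non-positive lives
            t += life
            if t > horizon:
                break
            n += 1
        counts[s] = n
    return counts

# Hypothetical component: 20 +/- 4 year service life, 60-year building horizon.
counts = simulate_replacements(mean_life=20.0, sd_life=4.0, horizon=60.0)
print(counts.mean())   # expected number of replacements over the horizon
```

The spread of `counts` across simulations, not just its mean, is what distinguishes the probabilistic plan from a fixed deterministic replacement schedule.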

Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation

Procedia PDF Downloads 193
24754 A Basic Metric Model: Foundation for an Evidence-Based HRM System

Authors: K. M. Anusha, R. Krishnaveni

Abstract:

As we cross the first decade of the 21st century, the paradigm of human resources can be seen evolving with a strategic gene induced into it. A radical shift is descending as the corporate sector calls on its HR teams to become strategic rather than administrative. This transferal eventually requires the metrics employed by these HR teams to be not just operationally reactive but aligned to evidence-based strategic thinking. Realizing the growing need for a prescriptive metric model for effective HR analytics, this study has designed a conceptual framework for a basic metric model that can assist IT-HRM professionals in transitioning to a practice of evidence-based decision-making to enhance organizational performance.

Keywords: metric model, evidence based HR, HR analytics, strategic HR practices, IT sector

Procedia PDF Downloads 389
24753 New Approach to Construct Phylogenetic Tree

Authors: Ouafae Baida, Najma Hamzaoui, Maha Akbib, Abdelfettah Sedqui, Abdelouahid Lyhyaoui

Abstract:

Numerous scientific works present various methods to analyze data across several domains, especially the comparison of classifications. In our recent work, we presented a new approach to help the user choose the best classification method from the results obtained by each method, based on the distances between the classification trees. The result of our approach took the form of a dendrogram containing the methods as a succession of connections. This approach is much needed in phylogenetic analysis. This discipline is intended to analyze the sequences of biological macromolecules for information on the evolutionary history of living beings, including their relationships. The product of a phylogenetic analysis is a phylogenetic tree. In this paper, we recommend a new method of constructing the phylogenetic tree based on the comparison of the different classifications obtained from different molecular genes.
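Constructing a tree from pairwise distances between classifications can be sketched with standard agglomerative clustering. The distance matrix below is invented purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Invented pairwise distances between five taxa, e.g. as might be derived
# from comparing the classifications produced by different molecular genes.
labels = ["A", "B", "C", "D", "E"]
D = np.array([
    [0, 2, 8, 8, 9],
    [2, 0, 8, 8, 9],
    [8, 8, 0, 3, 9],
    [8, 8, 3, 0, 9],
    [9, 9, 9, 9, 0],
], dtype=float)

# Average-linkage (UPGMA-style) agglomeration of the condensed distances;
# each row of Z records the two merged clusters, the merge height, and size.
Z = linkage(squareform(D), method="average")
print(Z)
```

Here A and B merge first (distance 2), then C and D (distance 3), giving the nested structure a dendrogram or phylogenetic tree would display.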

Keywords: hierarchical classification, classification methods, structure of tree, genes, phylogenetic analysis

Procedia PDF Downloads 491
24752 A Fully Coupled Thermo-Hydraulic Mechanical Elastoplastic Damage Constitutive Model for Porous Fractured Medium during CO₂ Injection

Authors: Nikolaos Reppas, Yilin Gui

Abstract:

A dual-porosity finite element code will be presented for the stability analysis of the wellbore during CO₂ injection, with an elastoplastic damage response incorporated into the model. The Finite Element Method (FEM) will be validated using experimental results from the literature or from experiments planned to be undertaken at Newcastle University. The main aim of the paper is to present a constitutive model that can help industry safely store CO₂ in geological rock formations and forecast any changes in the rock surrounding the wellbore. The fully coupled elastoplastic damage Thermo-Hydraulic-Mechanical (THM) model will determine the pressure and temperature of the injected CO₂ as well as the wellbore radius that can make the Carbon Capture and Storage (CCS) procedure more efficient.

Keywords: carbon capture and storage, wellbore stability, elastoplastic damage response for rock, constitutive THM model, fully coupled thermo-hydraulic-mechanical model

Procedia PDF Downloads 160
24751 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique

Authors: N. Guo, C. Xu, Z. C. Yang

Abstract:

In order to ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of aircraft structure design. Therefore, a structural finite element model that can accurately calculate the structural dynamics and their transfer relations is the prerequisite for structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level at each coordinate between the experimental and the analytical mode shapes. The weighted summation of the natural frequency residual and the coordinate modal assurance criterion residual is then used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which synthesizes the advantages of the pattern search (PS) optimization technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment for the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the present dynamic model updating method. The updated results show that the proposed method can successfully correct the erroneous parameters with good robustness.
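A bare pattern-search loop (without the genetic component of HPS) minimizing a toy residual objective conveys the optimization step. The two-parameter "model" below is illustrative, not the GARTEUR updating problem.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5, max_iter=10000):
    """Coordinate pattern search: poll +/- step along each axis, move to any
    improving point; if no poll improves, shrink the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
        it += 1
    return x, fx

# Toy "model updating" objective: the analytical frequencies
# [theta0, theta0 * theta1] should match the measured frequencies [2, 6].
measured = np.array([2.0, 6.0])
def objective(theta):
    analytical = np.array([theta[0], theta[0] * theta[1]])
    return float(np.sum(((analytical - measured) / measured) ** 2))

theta, res = pattern_search(objective, [1.0, 1.0])
print(theta, res)
```

In the HPS technique described above, a GA would supply diverse starting points so the pattern search is less likely to settle into a local minimum of the residual.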

Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search

Procedia PDF Downloads 143
24750 The Link between Money Market and Economic Growth in Nigeria: Vector Error Correction Model Approach

Authors: Uyi Kizito Ehigiamusoe

Abstract:

The paper examines the impact of the money market on economic growth in Nigeria using data for the period 1980-2012. Econometric techniques such as the Ordinary Least Squares method, Johansen's cointegration test, and the Vector Error Correction Model were used to examine both the long-run and short-run relationships. Evidence from the study suggests that although a long-run relationship exists between the money market and economic growth, the present state of the Nigerian money market is significantly and negatively related to economic growth. The link between the money market and the real sector of the economy remains very weak. This implies that the market is not yet developed enough to produce the growth needed to propel the Nigerian economy, because of several challenges. It was therefore recommended that the government create appropriate macroeconomic policies and a legal framework and sustain the present reforms with a view to developing the market so as to promote productive activities, investments, and ultimately economic growth.
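The VECM's long-run/short-run split can be conveyed with a simplified two-step error-correction sketch on synthetic data (Engle-Granger style rather than Johansen, for brevity). All series and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
m = np.cumsum(rng.normal(size=n))             # "money market" series: a random walk
g = 0.8 * m + rng.normal(scale=0.5, size=n)   # "growth" tied to m in the long run

# Step 1: estimate the long-run relation g = beta * m and form the
# error-correction term (the deviation from long-run equilibrium).
beta = np.polyfit(m, g, 1)[0]
ect = g - beta * m

# Step 2: short-run dynamics. A negative ECT coefficient means deviations
# are pulled back toward the long-run equilibrium, as a VECM would report.
dg, dm = np.diff(g), np.diff(m)
X = np.column_stack([np.ones(n - 1), dm, ect[:-1]])
coefs, *_ = np.linalg.lstsq(X, dg, rcond=None)
print("beta:", round(beta, 3), "ECT coefficient:", round(coefs[2], 3))
```

A significantly negative error-correction coefficient is the signature of the long-run relationship the paper reports, even when the short-run link is weak.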

Keywords: economic growth, investments, money market, money market challenges, money market instruments

Procedia PDF Downloads 329
24749 Use of Technology Based Intervention for Continuous Professional Development of Teachers in Pakistan

Authors: Rabia Aslam

Abstract:

Overwhelming evidence from around the world suggests that high-quality teacher professional development facilitates the improvement of teaching practices, which in turn can improve student learning outcomes. The new Continuous Professional Development (CPD) model for primary school teachers in Punjab uses a blended approach in which pedagogical content knowledge is delivered through technology (high-quality instructional videos and lesson plans delivered to school tablets or mobile phones) with face-to-face support by Assistant Education Officers (AEOs). The model also develops communities of practice, operationalized through formal meetings led by the AEOs and informal interactions in social media groups, to provide opportunities for teachers to engage with each other, share their ideas, reflect on learning, and come up with solutions to issues they experience. Using Kirkpatrick's four-level learning evaluation model, this paper investigates how school tablets and teacher mobile phones may act as transformational cultural tools to expand perceptions of and access to teaching and learning resources, and explores some of the affordances of social media (Facebook and WhatsApp groups) for learning in an informal context. The results will be used to inform policy-level decisions on what shape the CPD of teachers could take in the context of a developing country like Pakistan.

Keywords: CPD, teaching & learning, blended learning, learning technologies

Procedia PDF Downloads 64
24748 Neural Graph Matching for Modification Similarity Applied to Electronic Document Comparison

Authors: Po-Fang Hsu, Chiching Wei

Abstract:

In this paper, we present a novel neural graph matching approach applied to document comparison, a common task in the legal and financial industries. In some cases, the most important differences are the addition or omission of words, sentences, clauses, or paragraphs; however, detecting them is challenging without a record or trace of the whole editing process. Under many temporal uncertainties, we explore the potential of our approach to approximate an accurate comparison, determining which element blocks are related to others by edits. First, we apply a document layout analysis that combines traditional and modern techniques to segment layouts appropriately into blocks of various types. We then transform the problem into one of layout graph matching with textual awareness. Graph matching is a long-studied problem with a broad range of applications; however, unlike previous works focusing on visual images or structural layout, we also bring textual features into our model to adapt it to this domain. Specifically, based on the electronic document, we introduce an encoder to handle the visual presentation decoded from PDF. Additionally, because modifications can cause inconsistencies in document layout analysis between modified documents, and blocks can be merged and split, Sinkhorn divergence is adopted in our neural graph approach, which addresses both issues through many-to-many block matching. We demonstrate the approach on two categories of layouts, legal agreements and scientific articles, collected from our real-case datasets.
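Sinkhorn normalization turns a block-similarity score matrix into a (soft) many-to-many assignment by alternately normalizing rows and columns. A minimal sketch with invented scores:

```python
import numpy as np

def sinkhorn(scores, n_iters=50, tau=0.1):
    """Turn a block-similarity score matrix into a (near) doubly stochastic
    soft assignment by iterated row/column normalization."""
    P = np.exp(scores / tau)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)   # columns sum to 1
    return P

# Hypothetical similarity scores between three blocks of document A (rows)
# and three blocks of document B (columns).
scores = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.0, 0.1, 0.7],
])
P = sinkhorn(scores)
print(P.round(2))   # close to a permutation matrix pairing block i with block i
```

Because the assignment stays soft rather than a hard one-to-one permutation, mass can spread across several blocks, which is what accommodates the merged and split blocks mentioned above.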

Keywords: document comparison, graph matching, graph neural network, modification similarity, multi-modal

Procedia PDF Downloads 161
24747 New Dynamic Constitutive Model for OFHC Copper Film

Authors: Jin Sung Kim, Hoon Huh

Abstract:

The material properties of OFHC copper film were investigated with the High-Speed Material Micro Testing Machine (HSMMTM) at high strain rates. The rate-dependent stress-strain curves from the experiment and the Johnson-Cook curve fitting showed large discrepancies as the plastic strain increased, because the constitutive model includes no rate-dependent strain hardening effect. A new constitutive model was proposed that accounts for the rate-dependent strain hardening effect. The strain rate hardening term in the new constitutive model consists of separate strain rate sensitivity coefficients for the yield strength and for strain hardening.
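For concreteness, the gap the paper targets can be seen by comparing the standard Johnson-Cook flow stress with a hypothetical variant that gives the yield and strain-hardening terms separate rate sensitivities. The coefficients below are the classic illustrative Johnson-Cook values for OFHC copper, and the modified functional form is only a sketch of the idea, not the paper's actual model:

```python
import math

def johnson_cook(eps_p, rate, A=90.0, B=292.0, n=0.31, C=0.025, rate0=1.0):
    """Standard Johnson-Cook flow stress (MPa), thermal softening omitted.
    Note the hardening term (A + B*eps^n) is rate-independent."""
    return (A + B * eps_p ** n) * (1.0 + C * math.log(max(rate / rate0, 1e-12)))

def rate_dependent_jc(eps_p, rate, A=90.0, B=292.0, n=0.31,
                      qA=0.02, qB=0.04, rate0=1.0):
    """Hypothetical modified form: separate rate-sensitivity coefficients
    qA (yield strength) and qB (strain hardening), so the hardening slope
    itself grows with strain rate. qA and qB are illustrative placeholders."""
    lr = math.log(max(rate / rate0, 1e-12))
    return A * (1.0 + qA * lr) + B * (1.0 + qB * lr) * eps_p ** n
```

At the reference strain rate both forms coincide; at elevated rates the modified form lets the curves diverge with increasing plastic strain, which is the behavior the fitted experimental data exhibited.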

Keywords: rate dependent material properties, dynamic constitutive model, OFHC copper film, strain rate

Procedia PDF Downloads 472
24746 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components through the formation of a "Faraday cage." The reliability requirements for the conductive adhesive vary widely depending on the application and the expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, which depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. To enable projection of test data and observed failures onto field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time (√t) dependence.
It is also shown that for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate the nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
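A minimal sketch of how a √t degradation law changes the acceleration factor follows. The Peck-style rate law, its parameters, and the failure criterion are illustrative assumptions, not the paper's calibrated framework; the point is that when degradation D(t) = k·√t, time to failure scales as 1/k², so the acceleration factor is the square of the rate ratio rather than the ratio itself.

```python
import math

def peck_rate(temp_c, rh, Ea=0.7, n=2.7, k0=1.0):
    """Hypothetical moisture-driven degradation-rate constant with a
    Peck-style form k = k0 * RH^n * exp(-Ea / (kB*T)).
    Ea (eV), n, and k0 are illustrative placeholders."""
    kB = 8.617e-5  # Boltzmann constant, eV/K
    return k0 * (rh ** n) * math.exp(-Ea / (kB * (temp_c + 273.15)))

def time_to_failure(d_crit, k):
    """With sqrt-t degradation D(t) = k*sqrt(t), failure at D = d_crit."""
    return (d_crit / k) ** 2

def acceleration_factor(test_cond, field_cond, d_crit=1.0):
    """AF = t_field / t_test. Because t_f scales as 1/k^2, the AF is the
    *square* of the rate ratio; a linear-degradation assumption would use
    the ratio itself and misestimate field life accordingly."""
    k_test = peck_rate(*test_cond)
    k_field = peck_rate(*field_cond)
    return time_to_failure(d_crit, k_field) / time_to_failure(d_crit, k_test)
```

For example, comparing an 85 °C / 85% RH test to a 40 °C / 50% RH field condition yields an AF equal to the squared rate ratio, which can differ from the linear-model AF by orders of magnitude over long product lifetimes.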

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 118
24745 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate risk factors impacting the implementation of the audit risk model and to propose a complementary risk-based instrument (key risk indicators, KRIs) to form substantive risk judgments and mitigate a heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate a heightened RMM. Data were collected through two independent samples involving 767 participants. The collected data were analyzed using exploratory factor analysis and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit effort needed to reduce RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and for decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments.
It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the scale was developed in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalizability, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk. However, additional risk factors could influence the assessment of the RMM. Therefore, future research could investigate other risk segments, such as operational and financial risks, to bring broader generalizability to our results. Third, although the sample size in this study fits accepted scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and to suggest an extended risk model.
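The idea of aggregating validated risk items into a composite indicator can be illustrated with a hypothetical sketch; the items, weights, and aggregation rule below are placeholders, not the paper's validated two-component, seventeen-item scale:

```python
def kri_score(item_scores, weights):
    """Hypothetical composite KRI: a weighted average of risk-item scores.
    Higher values would indicate a heightened risk of material misstatement
    (RMM) and hence more audit effort. Weights are illustrative placeholders
    (in practice they would come from the factor-analysis loadings)."""
    if len(item_scores) != len(weights):
        raise ValueError("one weight per risk item")
    return sum(s * w for s, w in zip(item_scores, weights)) / sum(weights)
```

A composite like this gives assessed risk a quantitative expression, which is the property the study relies on when predicting the audit effort needed to reduce RMM.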

Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability

Procedia PDF Downloads 62
24744 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at different stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, different models have been developed recently for predicting bus travel times, but most of them focus on smaller road networks because of their relatively subpar performance on vast networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network using heterogeneous bus transit data collected from the GTFS database. Data was gathered over one week from multiple bus routes in Saint Louis, Missouri. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The historical time steps and prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and the historical average travel times were extracted from the dataset as model input parameters. Station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was considered for the temporal inputs.
Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model predicts travel times more accurately during peak traffic hours, with a MAPE of around 14%, and less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model demonstrated its applicability for real-time travel time prediction in public transportation and ensured high-quality predictions.
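The evaluation metric and the single-step window construction described above can be sketched as follows (a simplified illustration of the setup; the GRU itself is omitted):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent, over paired observations
    (entries with a zero actual value are skipped to avoid division by zero)."""
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in pairs) / len(pairs)

def make_windows(series, history=5, horizon=1):
    """Single-step setup: `history` past hourly average travel times are the
    input, and the value `horizon` steps ahead is the prediction target."""
    X, y = [], []
    for i in range(len(series) - history - horizon + 1):
        X.append(series[i:i + history])
        y.append(series[i + history + horizon - 1])
    return X, y
```

With history = 5 and horizon = 1, each training sample pairs five consecutive hourly averages with the following hour's average, matching the configuration reported in the abstract.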

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 57
24743 Improving the Quantification Model of Internal Control Impact on Banking Risks

Authors: M. Ndaw, G. Mendy, S. Ouya

Abstract:

Risk management in the banking sector is a key issue linked to financial system stability, and its importance has been elevated by technological developments and the emergence of new financial instruments. In this paper, we improve the model previously defined for quantifying the impact of internal control on banking risks by automating the residual criticality estimation step of FMECA. For this, we define three equations and a maturity coefficient to obtain a mathematical model, which is tested on all banking processes and types of risks. The new model allows an optimal assessment of residual criticality and improves the correlation rate, which reaches 98%.
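The automated residual-criticality step can be illustrated with a hypothetical sketch. The paper's three equations are not reproduced in the abstract, so the classic FMECA severity × occurrence × detection form and the way the maturity coefficient discounts criticality below are illustrative assumptions only:

```python
def residual_criticality(severity, occurrence, detection, maturity):
    """Hypothetical sketch: raw criticality as the classic FMECA risk
    priority number S*O*D, scaled down by a control-maturity coefficient
    in [0, 1], where 1 means a fully effective internal control."""
    if not 0.0 <= maturity <= 1.0:
        raise ValueError("maturity coefficient must be in [0, 1]")
    return severity * occurrence * detection * (1.0 - maturity)
```

Automating a rule of this shape removes the subjective re-scoring step: once a control's maturity is assessed, the residual criticality follows deterministically from the raw FMECA scores.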

Keywords: risk, control, banking, FMECA, criticality

Procedia PDF Downloads 312
24742 Formulation of Extended-Release Gliclazide Tablet Using a Mathematical Model for Estimation of Hypromellose

Authors: Farzad Khajavi, Farzaneh Jalilfar, Faranak Jafari, Leila Shokrani

Abstract:

Gliclazide was formulated as an extended-release tablet in 30 and 60 mg dosage forms using hypromellose (HPMC K4M) as a retarding agent. Drug-release profiles were investigated in comparison with the reference Diamicron MR 30 and 60 mg tablets. The effects of powder particle size, the amount of hypromellose in the formulation, tablet hardness, and halving the tablets on the drug release profile were investigated. A mathematical model describing the behavior of hypromellose during the initial phase of drug release was proposed for estimating the hypromellose content of the modified-release gliclazide 60 mg tablet. The model is based on the erosion of hypromellose in the dissolution medium and is applicable to the release profiles of insoluble drugs. Therefore, using the amount of drug dissolved in the initial phase of dissolution and the model, the amount of hypromellose in a formulation can be predicted. The model was used to predict the HPMC K4M content in modified-release gliclazide 30 mg and extended-release quetiapine 200 mg tablets.
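The inversion idea, from early-time release data back to polymer content, can be sketched under a deliberately simple assumption: erosion-controlled release is approximately linear at initial times, F(t) ≈ k·t, with the rate constant inversely related to the hypromellose mass, k = c / m_HPMC. Both the linear early-time form and the constant c are illustrative stand-ins for the paper's actual erosion model:

```python
def estimate_hpmc(times, fractions_released, c=1.0):
    """Hypothetical inversion: fit the early-time release slope k by
    least squares through the origin, then invert k = c / m_hpmc
    (more polymer -> slower erosion -> smaller k). The constant c would
    have to be calibrated on a reference formulation of known content."""
    num = sum(t * f for t, f in zip(times, fractions_released))
    den = sum(t * t for t in times)
    k = num / den
    return c / k
```

The practical appeal is that only the first few dissolution time points are needed, which is exactly the regime the proposed model is stated to describe.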

Keywords: gliclazide, hypromellose, drug release, modified-release tablet, mathematical model

Procedia PDF Downloads 205
24741 A Positive Neuroscience Perspective for Child Development and Special Education

Authors: Amedeo D'Angiulli, Kylie Schibli

Abstract:

Traditionally, children’s brain development research has emphasized the limitative aspects of disability and impairment, electing as an explanatory model the classical clinical notions of brain lesion or functional deficit. In contrast, Positive Educational Neuroscience (PEN) is a new approach that emphasizes strengths and human flourishing related to the brain by exploring how learning practices have the potential to enhance neurocognitive flexibility through neuroplastic overcompensation. This mini-review provides an overview of PEN and shows how it links to the concept of neurocognitive flexibility. We provide examples of how the present concept of neurocognitive flexibility can be applied to special education by exploring examples of neuroplasticity in the learning domain, including: (1) learning to draw in congenitally totally blind children, and (2) music training in children from disadvantaged neighborhoods. PEN encourages educators to focus on children’s strengths by recognizing the brain’s capacity for positive change and to incorporate activities that support children’s individual development.

Keywords: neurocognitive development, positive educational neuroscience, sociocultural approach, special education

Procedia PDF Downloads 229
24740 Scientific Recommender Systems Based on Neural Topic Model

Authors: Smail Boussaadi, Hassina Aliane

Abstract:

With the rapid growth of scientific literature, it is becoming increasingly challenging for researchers to keep up with the latest findings in their fields. Academic and professional networks play an essential role in connecting researchers and disseminating knowledge. To improve the user experience within these networks, we need effective article recommendation systems that provide personalized content. Current recommendation systems often rely on collaborative filtering or content-based techniques. However, these methods have limitations, such as the cold start problem and difficulty in capturing semantic relationships between articles. To overcome these challenges, we propose a new approach that combines BERTopic, a state-of-the-art topic modeling technique built on Bidirectional Encoder Representations from Transformers (BERT), with community detection algorithms in an academic social network. Experiments confirm our performance expectations, showing good relevance and objectivity in the results.
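The combination of semantic embeddings with community detection can be sketched as a toy pipeline. The vectors below stand in for topic embeddings such as those BERTopic would produce, and plain connected components stand in for a real community detection algorithm (a production system would use modularity-based methods); none of this reproduces the paper's implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def communities(embeddings, threshold=0.8):
    """Link articles whose embeddings exceed a similarity threshold, then
    take connected components (via union-find) as communities."""
    n = len(embeddings)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Recommendations then flow within a community: an article is suggested to a researcher whose reading history falls in the same semantic cluster, which sidesteps the cold-start problem of pure collaborative filtering.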

Keywords: scientific articles, community detection, academic social network, recommender systems, neural topic model

Procedia PDF Downloads 80
24739 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis

Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman

Abstract:

Introduction: In the realm of public health, the threat posed by monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the outbreak observed in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of monkeypox, with a particular focus on a variable-population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data from cases recorded in Nigeria in 2022 for epidemiological weeks 1-52. Transitioning from qualitative to quantitative analysis, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions' economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation.
The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
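The qualitative behavior described above, a stable disease-free equilibrium when R0 < 1 and an outbreak otherwise, with controls flattening the epidemic curve, can be illustrated with a deliberately minimal SIR sketch. The paper's model has more compartments, a variable population, and time-dependent optimal controls; here a single constant control u that scales down transmission stands in for all of that:

```python
def simulate_sir(beta, gamma, u=0.0, s0=0.99, i0=0.01, days=200, dt=0.1):
    """Forward-Euler SIR in population fractions. R0 = beta/gamma; the
    control u in [0, 1] is a constant stand-in for the paper's
    time-dependent interventions and reduces effective transmission.
    Returns the peak infected fraction."""
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = (1.0 - u) * beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak
```

With beta/gamma = 4 the infection peaks well above its initial level, with R0 = 0.5 it decays monotonically, and a strong control pushes the effective R0 below one, which is exactly the mechanism the cost-effectiveness analysis weighs against intervention cost.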

Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness

Procedia PDF Downloads 54
24738 Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements

Authors: Yasmeen A. S. Essawy, Khaled Nassar

Abstract:

With the rapid increase of complexity in the building industry, professionals in the A/E/C industry have been forced to adopt Building Information Modeling (BIM) to enhance communication between the different project stakeholders throughout the project life cycle and to create a semantic object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing fully designed BIM and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth First Search (DFS) and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to generate an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.
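The core sequencing step, deriving a valid build order from the directed acyclic element graph, is a topological sort. A minimal sketch using Kahn's algorithm follows (Python for illustration, though the paper's implementation is in C#; element names are hypothetical):

```python
from collections import deque

def construction_sequence(elements, depends_on):
    """Kahn's topological sort over a DAG of building elements. An edge
    (a, b) means element a must be built before element b
    (e.g. column before beam). Raises on a dependency cycle."""
    indeg = {e: 0 for e in elements}
    succ = {e: [] for e in elements}
    for a, b in depends_on:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(sorted(e for e in elements if indeg[e] == 0))
    order = []
    while queue:
        e = queue.popleft()
        order.append(e)
        for nxt in succ[e]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(elements):
        raise ValueError("dependency cycle: no valid construction sequence")
    return order
```

Whenever several elements are simultaneously buildable (the queue holds more than one node), different dequeue choices yield different valid sequences, which is where production and construction rules come in to select an optimized one.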

Keywords: building information modeling (BIM), elemental graph data model (EGDM), geometric and topological data models, graph theory

Procedia PDF Downloads 361
24737 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method, and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses of the various approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach that parametrizes the loss development curve and uses logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning over many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
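One way to parametrize a sigmoidal development curve is the logistic form L(t) = U / (1 + exp(-(a + b·t))), where U is the cohort's ultimate loss: for each candidate U the logit transform log(L / (U - L)) is linear in development age, so U can be chosen by grid search over linearized least-squares fits. This is a simplified sketch of the idea, not the paper's fitted model:

```python
import math

def fit_sigmoid(ages, cum_losses):
    """Fit L(t) = U / (1 + exp(-(a + b*t))) by linearizing the logit for a
    grid of candidate ultimates U and keeping the best least-squares fit.
    Returns (U, a, b); U is the projected ultimate loss for the cohort."""
    best = None
    lmax = max(cum_losses)
    for mult in [x / 100.0 for x in range(101, 300)]:  # U from ~1.01x to ~3x of max observed
        U = lmax * mult
        ys = [math.log(l / (U - l)) for l in cum_losses]
        n = len(ages)
        mx, my = sum(ages) / n, sum(ys) / n
        sxx = sum((t - mx) ** 2 for t in ages)
        sxy = sum((t - mx) * (y - my) for t, y in zip(ages, ys))
        b = sxy / sxx
        a = my - b * mx
        sse = sum((y - (a + b * t)) ** 2 for t, y in zip(ages, ys))
        if best is None or sse < best[0]:
            best = (sse, U, a, b)
    return best[1], best[2], best[3]
```

Because the whole curve is summarized by three parameters, exogenous drivers such as economic conditions can be incorporated indirectly by letting the parameters vary across cohorts, which is harder to do with link-ratio methods like CL.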

Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility

Procedia PDF Downloads 114