Search results for: transferability of models
5695 Maintaining the Tension between the Classic Seduction Theory and the Role of Unconscious Fantasies
Authors: Galit Harel
Abstract:
This article describes the long-term psychoanalytic psychotherapy of a young woman who had experienced trauma during her childhood. The details of the trauma were unknown, as all memory of the trauma had been repressed. Past trauma is analyzable through a prism of transference, dreaming and dreams, mental states, and thinking processes that offer an opportunity to explore and analyze the influence of both reality and fantasy on the patient. The presented case describes a therapeutic process that strives to discover hidden meanings through the unconscious system and illustrates the movement from unconscious to conscious during exploration of the patient’s personal trauma in treatment. The author discusses the importance of classical and contemporary psychoanalytic models of childhood sexual trauma through the discovery of manifest and latent content, unconscious fantasies, and actual events of trauma. It is suggested that the complexity of trauma is clarified by the tension between these models and by the inclusion of aspects of both of them for a complete understanding.
Keywords: dreams, psychoanalytic psychotherapy, thinking processes, transference, trauma
Procedia PDF Downloads 91
5694 Quality of the Ruin Probabilities Approximation Using the Regenerative Processes Approach Regarding Large Claims
Authors: Safia Hocine, Djamil Aïssani
Abstract:
Risk models, recently studied in the literature, are becoming increasingly complex. It is rare to find explicit analytical relations for calculating the ruin probability. Indeed, the stability issue arises naturally in ruin theory, since risk parameters cannot be estimated without uncertainty. Moreover, in most cases there are no explicit formulas for the ruin probability; hence the interest in obtaining explicit stability bounds for these probabilities in different risk models. In this paper, we are interested in the stability bounds of the univariate classical risk model established using the regenerative processes approach. By adopting an algorithmic approach, we implement this approximation and determine numerically the bounds of the ruin probability in the case of large claims (heavy-tailed distributions).
Keywords: heavy-tailed distribution, large claims, regenerative process, risk model, ruin probability, stability
Procedia PDF Downloads 364
5693 Message Passing Neural Network (MPNN) Approach to Multiphase Diffusion in Reservoirs for Well Interconnection Assessments
Authors: Margarita Mayoral-Villa, J. Klapp, L. Di G. Sigalotti, J. E. V. Guzmán
Abstract:
Automated learning techniques are widely applied in the energy sector to address challenging problems from a practical point of view. To this end, we discuss the implementation of a message passing algorithm (MPNN) within a graph neural network (GNN) to leverage the neighborhood of a set of nodes during the aggregation process. This approach enables the characterization of multiphase diffusion processes in the reservoir, such that the flow paths underlying the interconnections between multiple wells may be inferred from previously available data on flow rates and bottomhole pressures. The results thus obtained compare favorably with the predictions produced by the reduced-order capacitance-resistance models (CRM) and suggest the potential of MPNNs to enhance the robustness of the forecasts while improving the computational efficiency.
Keywords: multiphase diffusion, message passing neural network, well interconnection, interwell connectivity, graph neural network, capacitance-resistance models
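The abstract does not give implementation details, so the following is a minimal sketch of the kind of message-passing aggregation step an MPNN performs, with well nodes carrying flow-rate/pressure features and edges standing for candidate interconnections. The tiny graph, feature sizes, and mean aggregator are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Hypothetical example: 4 wells, 3 features per node (e.g., rate, BHP, cumulative).
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))                      # node features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]         # assumed candidate interconnections

W_msg = rng.normal(size=(3, 3))                  # message transform (toy weights)
W_upd = rng.normal(size=(6, 3))                  # update transform

def mpnn_step(h, edges):
    """One message-passing step: aggregate neighbor messages, then update each node."""
    msgs = [[] for _ in range(len(h))]
    for i, j in edges:                           # undirected: messages flow both ways
        msgs[j].append(h[i] @ W_msg)
        msgs[i].append(h[j] @ W_msg)
    agg = np.stack([np.mean(m, axis=0) for m in msgs])   # mean aggregation
    return np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)

h_next = mpnn_step(h, edges)
print(h_next.shape)   # (4, 3): updated well embeddings
```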
Procedia PDF Downloads 149
5692 Daily Stress, Family Functioning, and Mental Health among Palestinian Couples in Israel During COVID-19: A Moderated Mediation Model
Authors: Niveen M. Hassan-Abbas
Abstract:
The COVID-19 pandemic created a range of stressors, among them difficulties related to work conditions, financial changes, lack of childcare, and confinement or isolation due to social distancing. Among families and married individuals, these stressors were often expressed in additional daily hassles, with an influence on mental health. This study examined two moderated mediation models based on Bodenmann’s systemic-transactional stress model. Specifically, the models tested the hypothesis that intra-dyadic stress mediates the association between extra-dyadic stress and mental health, while two measures of family functioning, cohesion and flexibility, moderate the relationship between extra- and intra-dyadic stress. Participants were 480 heterosexual married Palestinians from Israel who completed self-report questionnaires. The results showed partial mediation patterns supporting both models, indicating that family cohesion and flexibility weakened the mediating effect of intra-dyadic stress on the relationship between extra-dyadic stress and mental health. These findings increase our understanding of the variables that affected mental health during the pandemic and suggest that, when faced with extra-dyadic stress, married individuals with good family environments are less likely to experience high levels of intra-dyadic stress, which is in turn associated with preserved mental health. Limitations and implications for planning interventions for couples and families during the pandemic are discussed.
Keywords: Palestinian families in Israel, COVID-19 pandemic, family cohesion and flexibility, extra-dyadic stress, intra-dyadic stress, mental health
Procedia PDF Downloads 94
5691 Modeling the Cyclic Behavior of High Damping Rubber Bearings
Authors: Donatello Cardone
Abstract:
Bilinear hysteresis models are usually used to describe the cyclic behavior of high damping rubber bearings. However, they neglect a number of phenomena (such as the interaction between axial load and shear force, buckling and post-buckling behavior, cavitation, scragging effects, etc.) that can significantly influence the dynamic behavior of such isolation devices. In this work, an advanced hysteresis model is examined and properly calibrated using consolidated procedures. Results of preliminary numerical analyses, performed in OpenSees, are shown and compared with the results of experimental tests on high damping rubber bearings and simulation analyses using alternative nonlinear models. The findings of this study can provide a useful tool for the accurate evaluation of the seismic response of structures with rubber-based isolation systems.
Keywords: seismic isolation, high damping rubber bearings, numerical modeling, axial-shear force interaction
Procedia PDF Downloads 124
5690 A Decision Support Framework for Introducing Business Intelligence to Midlands Based SMEs
Authors: Amritpal Slaich, Mark Elshaw
Abstract:
This paper explores the development of a decision support framework for the introduction of business intelligence (BI) through operational research techniques for application by SMEs. Aligned with the goals of the new Midlands Enterprise Initiative of improving the skill levels of the Midlands workforce and addressing high levels of regional unemployment, we have developed a framework to increase the level of business intelligence used by SMEs to improve business decision-making. Many SMEs in the Midlands fail due to the lack of high-quality decision-making. Our framework outlines how universities can: engage with SMEs in the use of BI through operational research techniques; develop appropriate and easy-to-use Excel spreadsheet models; and make use of a process that allows SMEs to feed back their findings on the models. Future work will determine how well the framework performs in getting SMEs to apply BI to improve their decision-making performance.
Keywords: SMEs, decision support framework, business intelligence, operational research techniques
Procedia PDF Downloads 472
5689 Planning a Supply Chain with Risk and Environmental Objectives
Authors: Ghanima Al-Sharrah, Haitham M. Lababidi, Yusuf I. Ali
Abstract:
The main objective of the current work is to introduce sustainability factors into the optimization of supply chain models for process industries. Supply chain models are normally based on purely economic considerations related to costs and profits. To account for sustainability, two additional factors have been introduced: environment and risk. A supply chain for an entire petroleum organization has been considered for implementing and testing the proposed optimization models. The environmental and risk factors were introduced as indicators reflecting the anticipated impact of the optimal production scenarios on sustainability. The aggregation method used in extending the single-objective function to a multi-objective function proves quite effective in balancing the contribution of each objective term. The results indicate that introducing the sustainability factors would slightly reduce the economic benefit while improving the environmental and risk-reduction performance of the process industries.
Keywords: environmental indicators, optimization, risk, supply chain
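As a rough illustration of the aggregation idea described above, the sketch below combines normalized economic, environmental, and risk terms into a single weighted objective. The weights, normalization, and scenario values are illustrative assumptions; the paper's actual formulation is not reproduced here.

```python
# Minimal sketch of weighted-sum aggregation of supply chain objectives.
# The scaling and weights below are illustrative assumptions, not the
# paper's actual multi-objective formulation.

def aggregate(profit, emissions, risk, weights=(0.5, 0.3, 0.2),
              profit_max=1.0, emissions_max=1.0, risk_max=1.0):
    """Combine three objective terms into one maximization target.

    Each term is scaled to [0, 1] so no single objective dominates;
    emissions and risk enter negatively because they are minimized.
    """
    w_p, w_e, w_r = weights
    return (w_p * profit / profit_max
            - w_e * emissions / emissions_max
            - w_r * risk / risk_max)

# Example: two candidate production scenarios (values in arbitrary units).
print(aggregate(profit=0.95, emissions=0.80, risk=0.60))
print(aggregate(profit=0.90, emissions=0.50, risk=0.40))
```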
Procedia PDF Downloads 351
5688 Multidimensional Sports Spectators Segmentation and Social Media Marketing
Authors: B. Schmid, C. Kexel, E. Djafarova
Abstract:
Understanding consumers is elementary for practitioners in marketing. Consumers of sports events, the sports spectators, are a particularly complex consumer crowd. In order to identify and define their profiles, different segmentation approaches can be found in the literature, one of them being multidimensional segmentation. Unlike earlier models, multidimensional segmentation models correspond to the broad range of attitudes, behaviours, motivations and beliefs of sports spectators. Moreover, in sports there are some well-researched disciplines (e.g. football or North American sports) where consumer profiles and marketing strategies are elaborate, and others where no research at all can be found. For example, there is almost no research on athletics spectators. This paper explores the current state of research on sports spectator segmentation. An in-depth literature review provides the framework for a spectator segmentation in athletics. On this basis, additional potential consumer groups and implications for social media marketing will be explored. The findings are the basis for further research.
Keywords: multidimensional segmentation, social media, sports marketing, sports spectators segmentation
Procedia PDF Downloads 307
5687 Gene Names Identity Recognition Using Siamese Network for Biomedical Publications
Authors: Micheal Olaolu Arowolo, Muhammad Azam, Fei He, Mihail Popescu, Dong Xu
Abstract:
As the quantity of biological articles rises, so does the number of biological pathway figures. Each pathway figure shows gene names and relationships. Annotating pathway diagrams manually is time-consuming. Advanced image understanding models could speed up curation, but they need to be more precise. There is rich information in biological pathway figures, and the first step in performing image understanding of these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene name recognition, but they are not optimized for literature mining data. This study devised a method to recognize an image bounding box of a gene name as a photo using deep Siamese neural network models, with ResNet, DenseNet, and Inception architectures, to outperform the existing methods; the results obtained about 84% accuracy.
Keywords: biological pathway, gene identification, object detection, Siamese network
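To make the Siamese idea concrete, here is a minimal sketch of a twin-branch matcher for gene-name image crops: both crops pass through the same encoder, and a small embedding distance indicates the same gene name. The tiny CNN encoder, input sizes, and contrastive training note are illustrative assumptions; the paper uses ResNet/DenseNet/Inception backbones.

```python
import torch
import torch.nn as nn

# Minimal sketch of a Siamese matcher for gene-name image crops.
class SiameseNet(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, emb_dim),
        )

    def forward(self, a, b):
        # Both branches share the same weights; distance encodes identity.
        za, zb = self.encoder(a), self.encoder(b)
        return torch.pairwise_distance(za, zb)

net = SiameseNet()
a = torch.randn(4, 1, 32, 96)   # batch of cropped gene-name images
b = torch.randn(4, 1, 32, 96)
d = net(a, b)                   # small distance => likely the same gene name
# Training would use a contrastive loss: y*d^2 + (1-y)*clamp(margin-d, 0)^2
```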
Procedia PDF Downloads 292
5686 A 7-Dimensional Quantitative Structure-Activity Relationship Approach Combining Quantum Mechanics Based Grid and Solvation Models to Predict Hotspots and Kinetic Properties of Mutated Enzymes: An Enzyme Engineering Perspective
Authors: R. Pravin Kumar, L. Roopa
Abstract:
Enzymes are molecular machines used in various industries such as pharmaceuticals, cosmetics, food and animal feed, paper and leather processing, and biofuel. Nevertheless, this has been possible only through the breath-taking efforts of chemists and biologists to evolve and engineer these mysterious biomolecules to perform the required tasks. The main agenda of this enzyme engineering project is to derive screening and selection tools to obtain focused libraries of enzyme variants with desired qualities. The methodologies for this research include the well-established directed evolution and rational redesign, and the relatively less established yet much faster and more accurate in silico methods. This concept was initiated as a Receptor-Dependent 4-Dimensional Quantitative Structure-Activity Relationship (RD-4D-QSAR) to predict kinetic properties of enzymes and is extended here to study transaminase by a 7D-QSAR approach. Induced-fit scenarios were explored using Quantum Mechanics/Molecular Mechanics (QM/MM) simulations, which were then placed in a grid that stores interaction energies derived from QM parameters (QMgrid). In this study, the mutated enzymes were immersed completely inside the QMgrid, and this was combined with solvation models to predict descriptors. After statistical screening of descriptors, QSAR models showed > 90% specificity and > 85% sensitivity towards the experimental activity. Mapping descriptors on the enzyme structure revealed hotspots important for enhancing the enantioselectivity of the enzyme.
Keywords: QMgrid, QM/MM simulations, RD-4D-QSAR, transaminase
Procedia PDF Downloads 137
5685 The Use of Thermal Infrared Wavelengths to Determine the Volcanic Soils
Authors: Levent Basayigit, Mert Dedeoglu, Fadime Ozogul
Abstract:
In this study, an application was carried out to determine volcanic soils using remote sensing. The study area is located on the Golcuk formation in Isparta, Turkey. The thermal bands of a Landsat 7 image were used for processing. The implementation of the climate model based on the water index was carried out in ERDAS Imagine software together with pixel-based image classification. The Soil Moisture Index (SMI) was modeled using the surface temperature (Ts) obtained from the thermal bands and the vegetation index (NDVI) derived from Landsat 7. Surface moisture values were grouped and classified using a scoring system. Thematic layers were compared with field studies. Consequently, the different moisture levels of volcanic soils were indicators for their determination and separation. These thermal wavelengths are preferable bands for separating volcanic soils using moisture and temperature models.
Keywords: Landsat 7, soil moisture index, temperature models, volcanic soils
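The abstract does not give the exact SMI equation, so the following is a hedged sketch of one common Ts-based "triangle" formulation, where the dry-edge and wet-edge temperatures (Ts_max, Ts_min) are normally estimated from the Ts-NDVI scatter. The scaling, bin edges, and pixel values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one common Ts/NDVI-triangle soil moisture index. The
# dry/wet edge temperatures would come from the Ts-NDVI scatter; here
# they are fixed illustrative values.
def soil_moisture_index(ts, ts_min, ts_max):
    """SMI in [0, 1]: hotter pixels (at comparable NDVI) are drier."""
    return (ts_max - ts) / (ts_max - ts_min)

ts = np.array([295.0, 301.5, 308.0])        # surface temperature per pixel (K)
smi = soil_moisture_index(ts, ts_min=290.0, ts_max=310.0)

# Group SMI into scored moisture classes, mirroring the paper's scoring system.
classes = np.digitize(smi, bins=[0.25, 0.5, 0.75])
print(smi, classes)
```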
Procedia PDF Downloads 306
5684 Spatial Econometric Approaches for Count Data: An Overview and New Directions
Authors: Paula Simões, Isabel Natário
Abstract:
This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches that are available to model data collected with reference to location in space, from the classical spatial econometrics approaches to the recent developments on spatial econometrics for count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modeling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data
Procedia PDF Downloads 593
5683 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results and avoid expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of each observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights when calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
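Since the abstract describes the importance-weight construction in words, here is a minimal numpy sketch of IS-LOO with TIS-style truncation: raw weights are reciprocals of the pointwise predictive densities over posterior draws, and the LOO predictive density is their weighted average. The simulated log-likelihood matrix is a stand-in for real MCMC output; PSIS would instead fit a generalized Pareto distribution to the weight tails.

```python
import numpy as np

# Minimal sketch of importance-sampling LOO (IS-LOO) from MCMC output.
# log_lik[s, i] = log p(y_i | theta_s) for draw s and observation i,
# simulated here for illustration.
rng = np.random.default_rng(1)
S, n = 2000, 50
log_lik = rng.normal(loc=-1.0, scale=0.5, size=(S, n))

# Raw importance weights: reciprocals of the pointwise predictive densities.
log_w = -log_lik
log_w -= log_w.max(axis=0)            # stabilize before exponentiating
w = np.exp(log_w)

# TIS-style truncation: cap the largest weights at sqrt(S) times the mean
# (PSIS would smooth the tail with a generalized Pareto fit instead).
w_trunc = np.minimum(w, w.mean(axis=0) * S**0.5)

# LOO predictive density per observation: weighted average over draws.
loo_i = np.log(np.sum(w_trunc * np.exp(log_lik), axis=0) / w_trunc.sum(axis=0))
elpd_loo = loo_i.sum()
print(round(elpd_loo, 2))
```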
Procedia PDF Downloads 392
5682 Building a Blockchain-based Internet of Things
Authors: Rob van den Dam
Abstract:
Today’s Internet of Things (IoT) comprises more than a billion intelligent devices, connected via wired/wireless communications. The expected proliferation of hundreds of billions more places us at the threshold of a transformation sweeping across the communications industry. Yet, we found that the IoT architectures and solutions that currently work for billions of devices won’t necessarily scale to tomorrow’s hundreds of billions of devices, because of high cost, lack of privacy, lack of future-proofing, lack of functional value, and broken business models. As the IoT scales exponentially, decentralized networks have the potential to reduce infrastructure and maintenance costs for manufacturers. Decentralization also promises increased robustness by removing single points of failure that could exist in traditional centralized networks. By shifting the power in the network from the center to the edges, devices gain greater autonomy and can become points of transactions and economic value creation for owners and users. To validate the underlying technology vision, IBM jointly developed with Samsung Electronics an autonomous decentralized peer-to-peer proof-of-concept (PoC). The primary objective of this PoC was to establish a foundation on which to demonstrate several capabilities that are fundamental to building a decentralized IoT. Though many commercial systems in the future will exist as hybrid centralized-decentralized models, the PoC demonstrated a fully distributed proof. The PoC (a) validated the future vision for decentralized systems to extensively augment today’s centralized solutions, (b) demonstrated foundational IoT tasks without the use of centralized control, and (c) proved that empowered devices can engage autonomously in marketplace transactions. The PoC opens the door for the communications and electronics industry to further explore the challenges and opportunities of potential hybrid models that can address the complexity and variety of requirements posed by an internet that continues to scale. Contents: (a) the new approach for an IoT that will be secure and scalable, (b) the three foundational technologies that are key for the future IoT, (c) the related business models and user experiences, (d) how such an IoT will create an 'Economy of Things', (e) the role of users, devices, and industries in the IoT future, (f) the winners in the IoT economy.
Keywords: IoT, internet, wired, wireless
Procedia PDF Downloads 336
5681 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?
Authors: Gu Pang, Bartosz Gebka
Abstract:
We forecast the demand for total container throughput at Indonesia’s largest seaport, Tanjung Priok Port. We propose four univariate forecasting models: SARIMA, the additive Seasonal Holt-Winters, the multiplicative Seasonal Holt-Winters, and the Vector Error Correction Model. Our aim is to provide insights into whether forecasting the total container throughput from the historical aggregated port throughput time series is superior to forecasts of the total throughput obtained by summing up the best individual terminal forecasts. We test the monthly port and individual-terminal container throughput time series between 2003 and 2013. The performance of the forecasting models is evaluated based on Mean Absolute Error and Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit. The Vector Error Correction Model provides the best model fits and forecasts for individual terminals. Our results report that the total container throughput forecasts based on modelling the total throughput time series are consistently better than those obtained by combining the forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide an essential insight for strategic decision-making on the expansion of the port's capacity and the construction of new container terminals at Tanjung Priok Port.
Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput
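To illustrate the winning model class, here is a short sketch of a multiplicative seasonal Holt-Winters fit and forecast using statsmodels. The monthly series is simulated; the actual 2003-2013 port data is not reproduced, and the trend/seasonality parameters are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulated monthly throughput with trend and multiplicative seasonality.
rng = np.random.default_rng(0)
months = pd.date_range("2003-01", periods=132, freq="MS")
trend = np.linspace(200, 400, 132)
season = 1 + 0.15 * np.sin(2 * np.pi * np.arange(132) / 12)
y = pd.Series(trend * season * rng.normal(1, 0.03, 132), index=months)

# Additive trend, multiplicative seasonality: the specification the paper
# found most accurate for total throughput.
fit = ExponentialSmoothing(
    y, trend="add", seasonal="mul", seasonal_periods=12
).fit()
fcast = fit.forecast(12)                      # one year ahead

mae = np.mean(np.abs(fit.fittedvalues - y))   # MAE/RMSE, as in the paper
rmse = np.sqrt(np.mean((fit.fittedvalues - y) ** 2))
print(round(mae, 1), round(rmse, 1))
```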
Procedia PDF Downloads 504
5680 Predicting Resistance of Commonly Used Antimicrobials in Urinary Tract Infections: A Decision Tree Analysis
Authors: Meera Tandan, Mohan Timilsina, Martin Cormican, Akke Vellinga
Abstract:
Background: In general practice, many infections are treated empirically without microbiological confirmation. Understanding the susceptibility of antimicrobials during empirical prescribing can help to reduce inappropriate prescribing. This study aims to apply a prediction model using a decision tree approach to predict the antimicrobial resistance (AMR) of urinary tract infections (UTI) based on non-clinical features of patients over 65 years. Decision tree models are a novel way to predict the outcome of AMR at an initial stage. Method: Data were extracted from the database of the microbiological laboratory of the University Hospitals Galway on all antimicrobial susceptibility testing (AST) of urine specimens from patients over the age of 65 from January 2011 to December 2014. The primary endpoint was resistance to common antimicrobials (nitrofurantoin, trimethoprim, ciprofloxacin, co-amoxiclav and amoxicillin) used to treat UTI. A classification and regression tree (CART) model was generated with the outcome ‘resistant infection’. The importance of each predictor (the number of previous samples, age, gender, location (nursing home, hospital, community) and causative agent) for antimicrobial resistance was estimated. Sensitivity, specificity, negative predictive (NPV) and positive predictive (PPV) values were used to evaluate the performance of the model. Seventy-five percent (75%) of the data were used as a training set, and validation of the model was performed with the remaining 25% of the dataset. Results: A total of 9805 UTI patients over 65 years had a urine sample submitted for AST at least once over the four years. E. coli, Klebsiella and Proteus species were the most commonly identified pathogens among the UTI patients without a catheter, whereas Serratia, Staphylococcus aureus and Enterobacter were common with a catheter. The validated CART model shows slight differences in sensitivity, specificity, PPV and NPV between the models with and without the causative organisms. The sensitivity, specificity, PPV and NPV for the model with non-clinical predictors were between 74% and 88%, depending on the antimicrobial. Conclusion: The CART models developed using non-clinical predictors have good performance when predicting antimicrobial resistance. These models predict which antimicrobial may be the most appropriate based on non-clinical factors. Other CART models, prospective data collection and validation, and an increasing number of non-clinical factors will improve model performance. The presented model provides an alternative approach to decision-making on antimicrobial prescribing for UTIs in older patients.
Keywords: antimicrobial resistance, urinary tract infection, prediction, decision tree
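The study's setup (CART on non-clinical predictors, a 75/25 split, and sensitivity/specificity/PPV/NPV evaluation) maps directly onto a few lines of scikit-learn. The data below is synthetic and the column meanings (age, sex, number of previous samples, location code) are illustrative stand-ins for the hospital dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the non-clinical predictors used in the study.
rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.integers(65, 95, n),        # age
    rng.integers(0, 2, n),          # sex
    rng.poisson(2, n),              # number of previous samples
    rng.integers(0, 3, n),          # location: 0=community, 1=hospital, 2=nursing home
])
# Toy outcome: nursing-home residents slightly more likely to be resistant.
y = (rng.random(n) < 0.2 + 0.1 * (X[:, 3] == 2)).astype(int)

# 75% training / 25% validation split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
cart = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, cart.predict(X_te)).ravel()
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
print("PPV", tp / (tp + fp), "NPV", tn / (tn + fn))
print("predictor importance", cart.feature_importances_)
```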
Procedia PDF Downloads 255
5679 Variability Management of Contextual Feature Model in Multi-Software Product Line
Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz
Abstract:
The Software Product Line (SPL) paradigm is used for the development of families of software products that share common and variable features. A feature model is a domain artifact of an SPL that consists of common and variable features with predefined relationships and constraints. Multiple SPLs consist of a number of similar common and variable features, as in mobile phones and tablets. Reusing common and variable features across different SPL domains is a complex task due to the external relationships and constraints of features in the feature model. To increase the reusability of feature model resources from domain engineering, it is necessary to manage the commonality of features at the level of SPL application development. In this research, we have proposed an approach that combines multiple SPLs into a single domain and converts them to a common feature model. Extracting the common features from different feature models is more effective and reduces cost and time to market for application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of the variation points and constraints. Using this approach, the reusability of features from multiple feature models can be increased. The impact of this research is to reduce development cost and time to market and to increase the number of SPL products.
Keywords: software product line, feature model, variability management, multi-SPLs
Procedia PDF Downloads 69
5678 Modeling Football Penalty Shootouts: How Improving Individual Performance Affects Team Performance and the Fairness of the ABAB Sequence
Authors: Pablo Enrique Sartor Del Giudice
Abstract:
Penalty shootouts often decide the outcome of important soccer matches. Although usually referred to as "lotteries", there is evidence that some national teams and clubs consistently perform better than others. The outcomes are therefore not explained by mere luck, and there are ways to improve the average performance of players, naturally at the expense of some sort of effort. In this article we study the payoff of player performance improvements in terms of the performance of the team as a whole. To do so we develop an analytical model with static individual performances, as well as Monte Carlo models that take into account the known influence of partial score and round number on individual performances. We find that, within a range of usual values, team performance improves over 70% faster than individual performances do. Using these models, we also estimate that the new ABBA penalty shootout ordering under test reduces almost all of the known bias in favor of the first-shooting team under the current ABAB system.
Keywords: football, penalty shootouts, Monte Carlo simulation, ABBA
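A Monte Carlo sketch of the ABAB versus ABBA comparison follows. The pressure effect (a kicker whose team trails scores with lower probability) is an illustrative assumption standing in for the paper's score- and round-dependent performance model, and the simplified sudden-death phase ignores it.

```python
import random

# Monte Carlo sketch of a best-of-five shootout under ABAB and ABBA orders.
P_BASE, P_BEHIND = 0.75, 0.70     # assumed scoring probabilities

def shootout(order):
    score = {"A": 0, "B": 0}
    for team in order:
        other = "B" if team == "A" else "A"
        p = P_BEHIND if score[team] < score[other] else P_BASE
        score[team] += random.random() < p
    while score["A"] == score["B"]:                  # simplified sudden death
        score["A"] += random.random() < P_BASE
        score["B"] += random.random() < P_BASE
    return "A" if score["A"] > score["B"] else "B"

def first_team_win_rate(order, trials=200_000):
    return sum(shootout(order) == "A" for _ in range(trials)) / trials

print("ABAB:", first_team_win_rate("ABABABABAB"))    # bias toward team A
print("ABBA:", first_team_win_rate("ABBAABBAAB"))    # bias largely removed
```

With static, score-independent probabilities both orders give 50%; the first-shooter bias only appears once performance depends on the partial score, which is exactly what the Monte Carlo framework makes easy to model.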
Procedia PDF Downloads 162
5677 The Practise of Hand Drawing as a Premier Form of Representation in Architectural Design Teaching: The Case of FAUP
Authors: Rafael Santos, Clara Pimenta Do Vale, Barbara Bogoni, Poul Henning Kirkegaard
Abstract:
In the last decades, the relevance of hand drawing has decreased in the scope of architectural education. However, some schools continue to recognize its decisive role, not only in architectural design teaching but in the whole of architectural training. This paper presents the results of research on the following problem: the practise of hand drawing as a premier form of representation in architectural design teaching. The research had as its object the educational model of the Faculty of Architecture of the University of Porto (FAUP) and was led by three main objectives: to identify the circumstances that promoted hand drawing as a form of representation in FAUP's model; to characterize the types of hand drawing and their role in that model; and to determine the particularities of hand drawing as a premier form of representation in architectural design teaching. Methodologically, the research was conducted according to a qualitative embedded single-case study design. The object, i.e., the educational model, was approached in the FAUP case considering its Context and three embedded unities of analysis: the educational Purposes, Principles and Practices. In order to guide the procedures of data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC allowed the three embedded unities of analysis to be related to the three main sources of evidence where the object manifests itself: the professors, expressing how the model is Assumed; the architectural design classes, expressing how the model is Achieved; and the students, expressing how the model is Acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal that the educational model of FAUP, following the model of the former Porto School, was largely due to the methodological foundations created by the hand drawing teaching-learning processes. In the absence of a culture of explicit theoretical elaboration or systematic research, hand drawing was the support for the continuity of the school, an expression of a unified thought about what the reflection and practice of architecture should be. As a form of representation, hand drawing plays a transversal role in the entire educational model, since its purposes are not limited to the conception of architectural design; it is also a means for perception, analysis and synthesis. Regarding architectural design teaching, there seems to be an understanding of three complementary dimensions of didactics: the instrumental, methodological and propositional dimensions. At FAUP, hand drawing is recognized as the common denominator among these dimensions, according to the idea of the "globality of drawing". It is expected that the knowledge base developed in this research may make three main contributions: to contribute to the maintenance and valorisation of FAUP’s model; through the precise description of the methodological procedures, to contribute by transferability to similar studies; and through the critical and objective framing of the problem underlying hand drawing in architectural design teaching, to contribute to the broader discussion concerning the contemporary challenges of architectural education.
Keywords: architectural design teaching, architectural education, forms of representation, hand drawing
Procedia PDF Downloads 131
5676 Revolving Ferrofluid Flow in Porous Medium with Rotating Disk
Authors: Paras Ram, Vikas Kumar
Abstract:
The transmission of malaria with seasonality was studied through the use of mathematical models. Data on the annual number of malaria cases reported to the Division of Epidemiology, Ministry of Public Health, Thailand during the period 1997-2011 were analyzed. The transmission of malaria with seasonality was studied by formulating a mathematical model which had been modified to describe the different situations encountered in the transmission of malaria. In our model, the population was separated into two groups, the human and vector groups, and a system of nonlinear differential equations was then constructed. Each human group was divided into susceptible, infectious in hot season, infectious in rainy season, infectious in cool season, and recovered classes. The vector population was separated into two classes only: susceptible and infectious vectors. The analysis of the models was given by standard dynamical modeling.
Keywords: ferrofluid, magnetic field, porous medium, rotating disk, Neuringer-Rosensweig Model
Procedia PDF Downloads 421
5675 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem for supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labeling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
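To make the hierarchy-aware architecture concrete, here is a minimal PyTorch sketch: each utterance in the thread is encoded from its tokens, a second encoder runs over the sequence of utterance embeddings, and the final state classifies the target tweet. The sizes and the GRU choice are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

# Minimal sketch of a hierarchy-aware contextual toxicity classifier.
class HierarchicalClassifier(nn.Module):
    def __init__(self, vocab=10_000, d=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.utt_enc = nn.GRU(d, d, batch_first=True)      # words -> utterance
        self.ctx_enc = nn.GRU(d, d, batch_first=True)      # utterances -> thread
        self.clf = nn.Linear(d, 2)                         # toxic / non-toxic

    def forward(self, thread):            # thread: (batch, n_utts, n_tokens)
        b, u, t = thread.shape
        _, h = self.utt_enc(self.emb(thread.view(b * u, t)))
        utt_vecs = h[-1].view(b, u, -1)   # one embedding per utterance
        _, h = self.ctx_enc(utt_vecs)     # last utterance is the target tweet
        return self.clf(h[-1])

model = HierarchicalClassifier()
thread = torch.randint(0, 10_000, (4, 3, 20))   # 4 threads: 2 context + 1 target
print(model(thread).shape)                      # (4, 2) logits
```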
Procedia PDF Downloads 170
5674 Emancipation through the Inclusion of Civil Society in Contemporary Peacebuilding: A Case Study of Peacebuilding Efforts in Colombia
Authors: D. Romero Espitia
Abstract:
Research on peacebuilding has taken a critical turn toward examining the neoliberal and hegemonic conception of peace operations. Alternative peacebuilding models have been analyzed, but the scholarly discussion fails to bring them together or form connections between them. The objective of this paper is to rethink peacebuilding by extracting the positive aspects of the various peacebuilding models, connecting them with the local context, and thereby promoting emancipation in contemporary peacebuilding efforts. Moreover, local ownership has been widely labelled as one, if not the, core principle necessary for a successful peacebuilding project. Yet definitions of what constitutes the 'local' remain debated. Through a qualitative review of the literature, this paper unpacks the contemporary conception of peacebuilding in nexus with 'local ownership' as manifested through civil society. Using Colombia as a case study, this paper argues that a new peacebuilding framework, one that reconsiders the terms of engagement between international and national actors, is needed in order to foster effective peacebuilding efforts in contested transitional states.
Keywords: civil society, Colombia, emancipation, peacebuilding
Procedia PDF Downloads 134
5673 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi
Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale
Abstract:
Spending long periods in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues. This puts the lives of patients at risk. However, by using queuing analysis to understand the nature of the problems and the efficiency of service systems, such problems can be abated. Depending on the kind of service, the literature proposes different possible queuing models. However, rather than using the generalized assumed models proposed in the literature, real-time case study data can help in understanding the particular problem model more deeply, and how such a model can vary from one day to the other and from each case to another. As such, this study uses data obtained from one urban HC for BP, pediatric and general OPD cases to investigate the average queuing time of patients within the system. It seeks to identify the proper queuing model by investigating the distribution functions of patients' arrival times, inter-arrival times, waiting times and service times. Compared with the standard values set by WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from the assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, the commonly assumed M/M/1 model generally failed to fit the data, whereas an M/Er/2 model demonstrated a good fit. An M/Er/3 model seemed good in terms of measuring resource utilization, suggesting a need to increase medical personnel at this HC. However, an M/Er/4 was shown to cause more idleness of human resources.
Keywords: health care, out-patient department, queuing model, sensitivity analysis
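For readers less familiar with the M/M/1 notation the abstract starts from, the textbook steady-state metrics are easy to compute; the sketch below does so with illustrative arrival and service rates, not the Matawale Health Centre data (Erlang-service M/Er/c variants like those fitted in the study need the corresponding Erlang formulas).

```python
# Sketch of textbook M/M/1 queue metrics of the kind used to assess
# waiting and service times.
def mm1_metrics(lam, mu):
    """Return utilization, mean number in system, and mean times."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of patients in the system
    W = 1 / (mu - lam)             # mean time in system (Little's law: L = lam*W)
    Wq = W - 1 / mu                # mean time waiting before service
    return rho, L, W, Wq

# Illustrative rates: 10 arrivals/hour, 12 patients served/hour.
rho, L, W, Wq = mm1_metrics(lam=10, mu=12)
print(f"utilization={rho:.2f}, in system={L:.1f}, wait={60 * Wq:.1f} min")
```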
Procedia PDF Downloads 435
5672 Modelling and Simulation Efforts in Scale-Up and Characterization of Semi-Solid Dosage Forms
Authors: Saurav S. Rath, Birendra K. David
Abstract:
The generic pharmaceutical industry has to operate under strict timelines of product development and scale-up from lab to plant. Hence, detailed product and process understanding and the implementation of appropriate mechanistic modelling and quality-by-design (QbD) approaches are imperative in the product life cycle. This work provides example cases of such efforts for topical dosage products. Topical products are typically in the form of emulsions, gels, thick suspensions or even simple solutions. The efficacy of such products is determined by characteristics like rheology and morphology. Defining, and scaling up, the right manufacturing process for a given set of ingredients, to achieve the right product characteristics, presents a challenge to the process engineer. For example, the non-Newtonian rheology varies not only with CPPs and CMAs but is also an implicit function of globule size (a CQA). Hence, this calls for various mechanistic models to help predict the product behaviour. This paper focusses on such models obtained from computational fluid dynamics (CFD) coupled with population balance modelling (PBM) and constitutive models (like shear and energy density). In the special case of the use of high shear homogenisers (HSHs) for the manufacture of thick emulsions/gels, this work presents some findings on (i) a scale-up algorithm for HSHs using shear strain, a novel scale-up parameter for estimating mixing parameters, (ii) the non-linear relationship between viscosity and the shear imparted into the system, and (iii) the effect of hold time on product rheology. Specific examples of how this approach enabled scale-up across the 1 L, 10 L, 200 L, 500 L and 1000 L scales will be discussed.
Keywords: computational fluid dynamics, morphology, quality-by-design, rheology
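As a hedged illustration of what a shear-strain-based scale-up rule might look like, the sketch below matches total shear strain (nominal shear rate times mixing time) between two scales. The rotor-gap shear-rate estimate and all the numbers are illustrative assumptions; the paper's actual algorithm is not reproduced here.

```python
import math

# Hedged sketch: hold total shear strain constant across HSH scales,
# where strain ~ (nominal shear rate in the rotor-stator gap) x (time).
def shear_rate(rpm, rotor_d, gap):
    """Nominal shear rate in the rotor-stator gap (1/s)."""
    tip_speed = math.pi * rotor_d * rpm / 60.0
    return tip_speed / gap

def time_for_equal_strain(rpm1, d1, gap1, t1, rpm2, d2, gap2):
    """Mixing time at scale 2 that matches the shear strain of scale 1."""
    strain1 = shear_rate(rpm1, d1, gap1) * t1
    return strain1 / shear_rate(rpm2, d2, gap2)

# Illustrative lab vs plant units (diameters and gaps in m, time in s).
t2 = time_for_equal_strain(8000, 0.03, 2e-4, 600, 1450, 0.20, 5e-4)
print(f"plant mixing time for equal shear strain: {t2 / 60:.1f} min")
```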
Procedia PDF Downloads 269
5671 Forecasting Stock Indexes Using Bayesian Additive Regression Tree
Authors: Darren Zou
Abstract:
Forecasting the stock market is a very challenging task. Various economic indicators such as GDP, exchange rates, interest rates, and unemployment have a substantial impact on the stock market. Time series models are the traditional methods used to predict stock market changes. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used to predict stock market indexes based on multiple economic indicators. BART can be used to model heterogeneous treatment effects and thereby works well when models are misspecified. It also has the capability to handle non-linear main effects and multi-way interactions without much input from financial analysts. In this research, BART is proposed to provide a reliable prediction of day-to-day stock market activities. Comparing the analysis results from BART with those of time series methods shows that BART performs well and has better predictive capability than the traditional methods.
Keywords: BART, Bayesian, predict, stock
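The sketch below shows the shape of such a BART regression of an index return on macro indicators. It assumes the sklearn-style API of bartpy, one open-source Python implementation (the paper does not name its software), and the synthetic features standing in for GDP growth, exchange rate change, interest rate, and unemployment are illustrative.

```python
import numpy as np
from bartpy.sklearnmodel import SklearnModel  # assumed: bartpy's sklearn-style BART

# Synthetic macro indicators with non-linear and interaction effects,
# the kind of structure BART's sum-of-trees prior captures automatically.
rng = np.random.default_rng(7)
n = 300
X = rng.normal(size=(n, 4))                   # standardized indicators
y = (0.5 * X[:, 0] - 0.3 * X[:, 2]
     + 0.2 * X[:, 0] * X[:, 3]                # interaction term
     + rng.normal(scale=0.3, size=n))         # noise

model = SklearnModel(n_trees=50)              # sum-of-trees ensemble
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])                 # posterior-mean prediction
rmse = np.sqrt(np.mean((pred - y[250:]) ** 2))
print(round(rmse, 3))
```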
Procedia PDF Downloads 130
5670 Effect of Realistic Lubricant Properties on Thermal Elastohydrodynamic Lubrication Behavior in Circular Contacts
Authors: Puneet Katyal, Punit Kumar
Abstract:
A great deal of effort has been devoted to the field of thermal effects in elastohydrodynamic lubrication (TEHL) during the last five decades. The focus was primarily on the development of an efficient numerical scheme to deal with the computational challenges involved in the solution of the TEHL model; however, some important aspects related to the accurate description of lubricant properties, such as viscosity, rheology and thermal conductivity, in EHL point contact analysis remain largely neglected. The few studies available in this regard are based upon highly complex mathematical models that are difficult to formulate and execute. Using a simplified thermal EHL model for point contacts, this work sheds some light on the importance of accurate characterization of the lubricant properties and demonstrates that the computed TEHL characteristics are highly sensitive to lubricant properties. It also emphasizes the use of appropriate mathematical models with experimentally determined parameters to account for correct lubricant behaviour.
Keywords: TEHL, shear thinning, rheology, conductivity
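To illustrate the kind of lubricant-property models the abstract refers to, here is a hedged sketch of two that are commonly paired in TEHL analyses: the Roelands pressure-viscosity equation and a Carreau-type shear-thinning law. The parameter values are generic illustrations, not the paper's fitted data, and the specific model pairing is an assumption.

```python
import numpy as np

def roelands_viscosity(p, eta0=0.04, z=0.6, p_r=1.96e8):
    """Low-shear viscosity (Pa.s) at pressure p (Pa), Roelands equation."""
    return eta0 * np.exp(np.log(eta0 / 6.31e-5) * ((1 + p / p_r) ** z - 1))

def carreau_viscosity(eta_low, shear_rate, g_crit=1e5, n=0.8):
    """Generalized-Newtonian viscosity under shear thinning (Carreau form)."""
    return eta_low * (1 + (shear_rate / g_crit) ** 2) ** ((n - 1) / 2)

p = 0.5e9                                 # 0.5 GPa contact pressure
eta_low = roelands_viscosity(p)           # piezoviscous thickening
for g in (1e3, 1e5, 1e7):                 # increasing shear rates (1/s)
    eta = carreau_viscosity(eta_low, g)   # shear thinning reduces viscosity
    print(f"shear rate {g:.0e} 1/s -> viscosity {eta:.2f} Pa.s")
```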
Procedia PDF Downloads 200
5669 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method
Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga
Abstract:
Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamic (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16%, when compared with empirical values. The simulated values of flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle and tool tip radius on the simulation.
Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses
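Of the constitutive models compared above, the Johnson-Cook flow stress law is the easiest to state compactly, and the sketch below evaluates it. The parameters are values commonly quoted for AISI 1045 (e.g., from Jaspers' work); they are illustrative, not necessarily the inputs used in this study.

```python
import numpy as np

# Johnson-Cook flow stress: (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m).
# Parameters commonly quoted for AISI 1045 (illustrative values).
A, B, n = 553.1e6, 600.8e6, 0.234            # Pa, Pa, -
C, m = 0.0134, 1.0
EPS0, T_ROOM, T_MELT = 1.0, 293.0, 1733.0    # 1/s, K, K

def johnson_cook(strain, strain_rate, T):
    """Flow stress (Pa) combining hardening, rate, and thermal softening."""
    t_star = (T - T_ROOM) / (T_MELT - T_ROOM)
    return ((A + B * strain**n)
            * (1 + C * np.log(strain_rate / EPS0))
            * (1 - t_star**m))

for T in (293.0, 573.0, 873.0):              # flow stress falls with temperature
    sigma = johnson_cook(strain=0.5, strain_rate=1e3, T=T)
    print(f"T={T:.0f} K -> {sigma / 1e6:.0f} MPa")
```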
Procedia PDF Downloads 261
5668 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development times and increased product diversity. Short life cycles are normal in the retail industry, the style business, entertainment media, and the telecom and semiconductor industries. The subject of accurate demand forecasting for short-lifecycle products is of special interest for many researchers and organizations. Due to the short life cycle of products, the amount of historical data available for forecasting is very minimal, or even absent when new or modified products are launched in the market. The companies dealing with such products want to increase the accuracy of demand forecasting so that they can utilize the full potential of the market while not oversupplying. This poses the challenge of developing a forecasting model that can forecast accurately while handling large variations in the data and considering the complex relationships between various parameters of the data. Many statistical models have been proposed in the literature for forecasting time series data. Traditional time series forecasting models do not work well for short life cycles due to the lack of historical data. Also, artificial neural network (ANN) models are very time-consuming for forecasting. We have studied the existing forecasting models and their limitations. This work proposes an effective and powerful approach for short life cycle time series forecasting. We propose an approach which takes into consideration different scenarios related to data availability for short-lifecycle products. We then suggest a methodology which combines statistical analysis with structured judgement. The defined approach can also be applied across domains. We then describe a method for creating a profile from analogous products. This profile can then be used for forecasting products with the historical data of analogous products. We have designed an application which combines data, analytics and domain knowledge using point-and-click technology. The forecasting results generated are compared using MAPE, MSE and RMSE error scores. Conclusion: based on the results, it is observed that no single approach is sufficient for short life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.
Keywords: forecast, short life cycle product, structured judgement, time series
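For completeness, the three error scores used to compare the forecasts are straightforward to compute; the actual/forecast vectors below are illustrative.

```python
import numpy as np

# The three error scores used to compare forecasts in the paper.
def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mse(actual, forecast):
    return np.mean((actual - forecast) ** 2)

def rmse(actual, forecast):
    return np.sqrt(mse(actual, forecast))

actual = np.array([120.0, 150.0, 90.0, 60.0])      # short life-cycle demand
forecast = np.array([110.0, 160.0, 95.0, 50.0])
print(f"MAPE={mape(actual, forecast):.1f}%  "
      f"MSE={mse(actual, forecast):.1f}  RMSE={rmse(actual, forecast):.1f}")
```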
Procedia PDF Downloads 358
5667 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method for incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
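The first step described above, concatenating word2vec embeddings with lexical semantic vectors before the CNN, is easy to sketch. The dimensions and random vectors below are illustrative assumptions standing in for trained word2vec and LSV representations.

```python
import numpy as np

# Minimal sketch of the feature-enrichment step: word2vec embeddings are
# concatenated with lexical semantic vectors (LSV) to form the CNN input.
vocab = ["fever", "cough", "pneumonia"]
rng = np.random.default_rng(0)
w2v = {w: rng.normal(size=100) for w in vocab}     # stand-in word2vec vectors
lsv = {w: rng.normal(size=20) for w in vocab}      # stand-in lexical semantic vectors

def enrich(tokens):
    """Stack [word2vec ; LSV] per token -> (seq_len, 120) CNN input matrix."""
    return np.stack([np.concatenate([w2v[t], lsv[t]]) for t in tokens])

x = enrich(["fever", "cough"])
print(x.shape)   # (2, 120): each row feeds one position of the CNN
```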
Procedia PDF Downloads 126
5666 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines
Authors: Xiaogang Li, Jieqiong Miao
Abstract:
To address the problem that the prediction accuracy of the grey forecasting model is low, an improved grey prediction model is put forward. Firstly, a trigonometric function is used to transform the original data sequence in order to improve the smoothness of the data; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine to give the grey support vector machine model (SGM-SVM). Before establishing the model, trigonometric functions and the accumulated generating operation are used to preprocess the data in order to enhance its smoothness and weaken its randomness; a support vector machine (SVM) is then used to establish a prediction model for the preprocessed data, with the model parameters selected by a genetic algorithm to obtain the global optimum. Finally, the data is restored through the inverse "regressive generate" operation to obtain the forecast. In order to prove that the SGM-SVM model is superior to other models, battery life data from CALCE is selected. The presented model is used to predict battery life, and the predicted results are compared with those of the grey model and support vector machines. For a more intuitive comparison of the three models, this paper presents the root mean square error of the three different models. The results show that the grey support vector machine (SGM-SVM) prediction of battery life is optimal, with a root mean square error of only 3.18%.
Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error
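A hedged sketch of the SGM-SVM pipeline follows: smooth the capacity series with a trigonometric transform, apply the accumulated generating operation (AGO), fit an SVR on the transformed sequence, and invert both transforms for the forecast. The specific sine/arcsine transform, the toy capacity data, and the fixed SVR parameters (which the paper tunes with a genetic algorithm) are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Toy capacity-fade series standing in for the CALCE battery data.
y = np.array([1.00, 0.97, 0.95, 0.92, 0.90, 0.87, 0.85])

z = np.arcsin(y / y.max())               # assumed trigonometric smoothing transform
x1 = np.cumsum(z)                        # accumulated generating operation (AGO)

# One-step-ahead training pairs on the smoothed, accumulated sequence.
X = x1[:-1].reshape(-1, 1)
t = x1[1:]
svr = SVR(C=10.0, epsilon=1e-3).fit(X, t)   # a GA would tune C/epsilon/gamma

x1_next = svr.predict(x1[-1].reshape(1, 1))[0]
z_next = x1_next - x1[-1]                # inverse AGO (first difference)
y_next = np.sin(z_next) * y.max()        # invert the trigonometric transform
print(round(y_next, 3))                  # forecast of the next capacity value
```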
Procedia PDF Downloads 461