Search results for: account hijacking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2700

2550 Understanding Natural Resources Governance in Canada: The Role of Institutions, Interests, and Ideas in Alberta's Oil Sands Policy

Authors: Justine Salam

Abstract:

As a federal state, Canada’s constitutional arrangements regarding the management of natural resources are unique because they give complete ownership and control of natural resources to the provinces (the subnational level). However, the province of Alberta—home to the third largest oil reserves in the world—lags behind comparable jurisdictions in levying royalties on oil corporations, especially oil sands royalties. While Albertans own the oil sands, scholars have argued that natural resource exploitation in Alberta benefits corporations and industry more than it does Albertans. This study provides a systematic understanding of the causal factors affecting royalties in Alberta in order to map dynamics of power and how they manifest themselves during policy-making. Mounting domestic and global public pressure led Alberta to review its oil sands royalties twice in less than a decade through publicly commissioned Royalty Review Panels, first in 2007 and again in 2015. The Panels’ task was to research best practices and to provide policy recommendations to the government through public consultations with Albertans, industry, non-governmental organizations, and First Nations peoples. Both times, the Panels recommended a relative increase to oil sands royalties. However, irrespective of the Reviews’ recommendations, neither the right-wing 2007 Progressive Conservative Party (PC) government nor the left-wing 2015 New Democratic Party (NDP) government—both committed to increasing oil sands royalties—increased royalty intake. Why did two consecutive governments at opposite ends of the political spectrum fail to act on the recommendations put forward by the Panels? Through a qualitative case-study analysis, this study assesses domestic and global causal factors behind Alberta’s inability to raise oil sands royalties significantly after the two Reviews, using an institutions, interests, and ideas framework. Causal factors can be global (e.g. market and price fluctuations) or domestic (e.g. oil companies’ influence on the Alberta government). The institutions, interests, and ideas framework sits at the intersection of the public policy, comparative studies, and political economy literatures, and therefore draws multi-faceted insights into the analysis. To account for institutions, the study reviews international trade agreement documents such as the North American Free Trade Agreement (NAFTA), because they have embedded Alberta’s oil sands into American energy security policy and bound Canadian and Albertan oil policy in legal international nodes. To account for interests, such as how the oil lobby or the environmental lobby can penetrate governmental decision-making spheres, the study draws on the Oil Sands Oral History project, a database of interviews with government officials and oil industry leaders conducted at a pivotal time in Alberta’s oil industry, 2011-2013. Finally, to account for ideas, such as how narratives of Canada as a global ‘energy superpower’ and the importance of ‘energy security’ have dominated and polarized public discourse, the study relies on content analysis of Alberta-based pro-industry newspapers to trace the prevalence of these narratives. By systematically mapping the nodes and dynamics of power at play in Alberta, the study sheds light on the factors that influence royalty policy-making in one of the largest industries in Canada.

Keywords: Alberta Canada, natural resources governance, oil sands, political economy

Procedia PDF Downloads 102
2549 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effects, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules, but also historically patriarchal societies. The progression of society comes hand in hand not only with its language, but with how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 92
2548 Technological Development of a Biostimulant Bioproduct for Fruit Seedlings: An Engineering Overview

Authors: Andres Diaz Garcia

Abstract:

The successful technological development of any bioproduct, including those of the biostimulant type, requires the adequate completion of a series of stages tied to different disciplines related to microbiological, engineering, pharmaceutical chemistry, legal and market components, among others. Engineering as a discipline makes a key contribution to different aspects of fermentation processes, such as the design and optimization of culture media, the standardization of operating conditions within the bioreactor, and the scaling of the production process of the active ingredient that will be used in downstream unit operations. However, all the aspects mentioned must take into account many biological factors of the microorganism, such as the growth rate, the level of assimilation of various organic and inorganic sources, and the mechanisms of action associated with its biological activity. This paper focuses on the practical experience within the Colombian Corporation for Agricultural Research (Agrosavia) that led to the development of a biostimulant bioproduct based on the native rhizobacterium Bacillus amyloliquefaciens, oriented mainly to plant growth promotion in cape gooseberry nurseries and fruit crops in Colombia, and the challenges that were overcome through expertise in the area of engineering. Through the application of engineering strategies and tools, a culture medium was optimized to obtain concentrations higher than 1E09 CFU (colony forming units)/ml in liquid fermentation, the biomass production process was standardized, and a scale-up strategy was generated based on geometric criteria (bioreactor H/D ratios) and on operational criteria (a minimum dissolved oxygen concentration), taking into account the differences in process control capacity at the laboratory and pilot scales. Currently, the bioproduct obtained through this technological process is in the registration stage in Colombia for cape gooseberry fruits for export.

Keywords: biochemical engineering, liquid fermentation, plant growth promoting, scale-up process

Procedia PDF Downloads 85
2547 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of Arabic morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also addresses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating the system. Most Arabic word analysis systems are based on phonotactic morpho-syntactic analysis of a word using lexical rules, as is common in MENA language technology tools, without taking into account contextual or semantic morphological implications. It is therefore necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using a Recurrent Neural Network (RNN); given that morphological analysis coverage is very low in Dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.
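
At decoding time, the CRF layer of a BP-LSTM-CRF stack reduces to a Viterbi search over tag sequences. The sketch below shows only that decoding step, in plain Python, for binary segmentation with B (begin) and I (inside) tags; the emission scores (which would normally come from the BiLSTM) and the transition scores are invented toy values, and the B/I scheme is an assumption, since the abstract does not specify its tag set.

```python
# Minimal sketch of the CRF decoding step in a BP-LSTM-CRF segmenter.
# Tags: 0 = B (begin segment), 1 = I (inside segment).
# Emission scores are hypothetical; a BiLSTM would produce them.

def viterbi(emissions, transitions):
    """Return the best tag sequence for a list of per-position
    (score_B, score_I) emission pairs, given a 2x2 transition matrix."""
    n = len(emissions)
    score = list(emissions[0])        # best score ending in each tag
    back = []                         # backpointers per position
    for t in range(1, n):
        new_score = [0.0, 0.0]
        ptr = [0, 0]
        for cur in (0, 1):
            cands = [score[prev] + transitions[prev][cur] for prev in (0, 1)]
            best_prev = 0 if cands[0] >= cands[1] else 1
            ptr[cur] = best_prev
            new_score[cur] = cands[best_prev] + emissions[t][cur]
        score = new_score
        back.append(ptr)
    # Backtrack from the best final tag.
    tag = 0 if score[0] >= score[1] else 1
    tags = [tag]
    for ptr in reversed(back):
        tag = ptr[tag]
        tags.append(tag)
    tags.reverse()
    return tags
```

With zero transition scores the decoder simply follows the per-position emission maxima; non-zero transitions let the CRF enforce sequence-level consistency that a per-token classifier cannot.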

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 0
2546 Forecast Financial Bubbles: Multidimensional Phenomenon

Authors: Zouari Ezzeddine, Ghraieb Ikram

Abstract:

Drawing on results from the academic literature that highlight the limitations of previous studies, this article sets out the reasons why the prediction of financial bubbles is a multidimensional problem. A new modeling framework for predicting financial bubbles is proposed, linking a set of variables spread across several dimensions, which gives the problem its multidimensional character. The framework takes into account the preferences of financial actors. A multicriteria anticipation of the appearance of bubbles in international financial markets helps to guard against a possible crisis.

Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks

Procedia PDF Downloads 543
2545 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. In this study, the Multiplicative Weight Update method combines the predictions of multiple models in order to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to learn patterns for labeling unseen scans as either benign or malignant. The models feed into a multiplicative weight update algorithm that takes into account the precision and accuracy of each model, applying a weight to each model's guess after every successive prediction. These weighted guesses are then combined to produce the final predictions. The research hypothesis for this study stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%. The CNN model had an accuracy of 85.30%. The Logistic Regression model had an accuracy of 79.09%. Using Multiplicative Weight Update, the algorithm achieved an accuracy of 72.27%. The final conclusion drawn was that there was a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into many categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
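
The combining scheme described above can be sketched with the standard multiplicative-weights update over "expert" classifiers: each expert that guesses wrong has its weight multiplied by (1 − η), and the combined prediction is a weighted majority vote. The learning rate η = 0.5 and the toy prediction streams below are illustrative assumptions, not values from the study.

```python
def mwu_combine(expert_preds, labels, eta=0.5):
    """Combine binary expert predictions with Multiplicative Weight
    Update: predict by weighted majority vote, then multiply the
    weight of every expert that guessed wrong by (1 - eta)."""
    k = len(expert_preds)            # number of expert models
    w = [1.0] * k                    # all experts start equally trusted
    combined = []
    for t, y in enumerate(labels):
        vote_for_1 = sum(w[i] for i in range(k) if expert_preds[i][t] == 1)
        pred = 1 if vote_for_1 >= sum(w) / 2 else 0
        combined.append(pred)
        for i in range(k):           # penalise the experts that erred
            if expert_preds[i][t] != y:
                w[i] *= (1 - eta)
    return combined, w
```

Running this with one always-correct expert, one always-wrong expert, and one that always predicts the positive class shows the weights quickly concentrating on the reliable expert, which is the behaviour the abstract relies on.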

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 46
2544 Differential Approach to Technology Aided English Language Teaching: A Case Study in a Multilingual Setting

Authors: Sweta Sinha

Abstract:

Rapid evolution of technology has changed language pedagogy as well as perspectives on language use, leading to strategic changes in discourse studies. We are now firmly embedded in a time when digital technologies have become an integral part of our daily lives. This has led to generalized approaches to English Language Teaching (ELT), which raise two-pronged concerns in linguistically diverse settings: a) the diverse linguistic backgrounds of the learners might interfere with the learning process, and b) differing levels of already acquired knowledge of the target language might make classroom practices too easy or too difficult for the target group of learners. ELT needs a more systematic and differential pedagogical approach for greater efficiency and accuracy. The present research analyses the need to identify learner groups based on different levels of target language proficiency, drawing on a longitudinal study of 150 undergraduate students. The learners were divided into five groups based on their performance on a twenty-point scale in Listening, Speaking, Reading and Writing (LSRW). The groups were then subjected to varying durations of technology-aided language learning sessions, and their performance was recorded again on the same scale. Identifying groups and introducing differential teaching and learning strategies led to better results compared to generalized teaching strategies. Language teaching includes different aspects: the organizational, the technological, the sociological, the psychological, the pedagogical and the linguistic, and a facilitator must account for all of them in a carefully devised differential approach that meets the challenge of learner diversity. Apart from justifying the formation of differential groups, the paper attempts to devise a framework that accounts for all these aspects in order to make ELT in a multilingual setting much more effective.

Keywords: differential groups, English language teaching, language pedagogy, multilingualism, technology aided language learning

Procedia PDF Downloads 374
2543 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total distance travelled or the total time spent on tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data used to carry out the optimization of a transporter's tours are free from errors, including the customers' real constraints, addresses and GPS coordinates. However, in real transport situations, upstream data are often of poor quality because of address geocoding errors and the irrelevance of addresses received via EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to bad geocoding: for instance, when a geocoder has trouble geocoding an address, it may return the coordinates of the city centre. A further geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not appear on maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to bad, incoherent solution tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon, whose truck route data we use in our experiments. These trucks are equipped with TOMTOM GPS units that continuously record tour data (positions, speeds, tachograph information, etc.), which we retrieve in order to extract the real truck routes. The aim of this work is to use the driver's experience and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding a given customer's address at most once; in other words, the vehicle would be misled at most once per customer address. Our method significantly improves the quality of the geocoding: on average, 70% of the GPS coordinates of a tour's addresses are corrected automatically, and the rest are corrected manually, with the user given indications to help correct them. This study shows the importance of taking into account truck feedback to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and correct upcoming ones; we develop a method to do a large part of this automatically.
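
One minimal way to implement the validation step described above is to compare each geocoded point with the stop position actually recorded by the truck's GPS, and to trust the driver's observed position when the two disagree by more than a threshold. The sketch below assumes a 200 m threshold and a simple great-circle distance; both are illustrative choices, not parameters from the paper.

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def correct_geocoding(geocoded, observed_stop, threshold_m=200.0):
    """If the observed delivery stop lies farther than threshold_m from
    the geocoded point, trust the GPS feedback and return the observed
    position as the corrected coordinate; otherwise keep the geocoded
    one. Returns (coordinate, validated_flag)."""
    d = haversine_m(*geocoded, *observed_stop)
    if d > threshold_m:
        return observed_stop, False    # geocoding rejected and corrected
    return geocoded, True              # geocoding validated
```

After one tour, every address whose geocoded point was rejected carries the truck's observed stop as its corrected coordinate, so the next tour should be misled at most once per address, as the abstract describes.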

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 644
2542 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory

Authors: Liu Canqi, Zeng Junsheng

Abstract:

This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of a traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle jointly determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on it. At the physical level, this study examines the car-following bottleneck, identifies the external factors that affect driving, considers that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. From an economic standpoint, an energy-time conversion coefficient is derived from the social average wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delay under the Wiedemann car-following model. Using the conversion model between traffic mass and deceleration delay, the deceleration delay measured in simulation is compared with the theoretical value calculated by the model. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical value of the deceleration delay obtained by the model and the measured simulation value is less than 10%. The article concludes that the traffic field has an impact on vehicles moving on the road and that physical and socioeconomic factors should be taken into account when studying car-following behavior. The deceleration delay of a vehicle and its traffic mass have a socioeconomic relationship that can be used to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.
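
One possible reading of the energy-time conversion described above can be sketched as follows: braking dissipates kinetic energy, and a megajoule of dissipated energy is valued at its fuel cost, expressed in seconds of socially valued time via the average wage. All function names and the specific formula below are illustrative assumptions; the paper's actual model may differ.

```python
def energy_time_coefficient(hourly_wage, fuel_cost_per_mj):
    """Hypothetical energy-time conversion coefficient (s/MJ): the
    seconds of travel time valued as much as one megajoule of
    dissipated kinetic energy, using wage as the value of time."""
    wage_per_second = hourly_wage / 3600.0
    return fuel_cost_per_mj / wage_per_second

def deceleration_energy_mj(mass_kg, v_from, v_to):
    """Kinetic energy (MJ) dissipated slowing from v_from to v_to (m/s)."""
    return 0.5 * mass_kg * (v_from ** 2 - v_to ** 2) / 1e6

def deceleration_delay_s(mass_kg, v_from, v_to, hourly_wage, fuel_cost_per_mj):
    """Delay-equivalent (s) of a braking event under this toy model."""
    return deceleration_energy_mj(mass_kg, v_from, v_to) * \
           energy_time_coefficient(hourly_wage, fuel_cost_per_mj)
```

For a 1500 kg car braking from 20 m/s to a stop, with an assumed wage of 36 per hour and fuel cost of 0.05 per MJ, the sketch values the stop at 1.5 seconds of delay-equivalent; the point is only to show how wage and fuel price jointly fix the coefficient's units.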

Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay

Procedia PDF Downloads 36
2541 Geometric Nonlinear Dynamic Analysis of Cylindrical Composite Sandwich Shells Subjected to Underwater Blast Load

Authors: Mustafa Taskin, Ozgur Demir, M. Mert Serveren

Abstract:

The precise study of the impact of underwater explosions on structures is of great importance in the design and engineering calculations of floating structures, especially those used for military purposes, as well as power generation facilities such as offshore platforms that can become targets in case of war. Considering that ship and submarine structures are mostly curved surfaces, it is extremely important and interesting to examine the destructive effects of underwater explosions on curvilinear surfaces. In this study, geometric nonlinear dynamic analysis of cylindrical composite sandwich shells subjected to an instantaneous pressure load is performed. The instantaneous pressure load is defined as an underwater explosion, and the effects of the liquid medium are taken into account. There are equations in the literature for pressure due to underwater explosions, but these were obtained for flat plates; for this reason, the instantaneous pressure load equations are adapted to curvilinear structures before proceeding with the analyses. Fluid-solid interaction is defined using Taylor's plate theory. The lower and upper layers of the cylindrical composite sandwich shell are modeled as composite laminates, and the middle layer consists of a soft core. The geometric nonlinear dynamic equations of the shell are obtained from Hamilton's principle, taking into account the von Kármán theory of large displacements. Then, the time-dependent geometric nonlinear equations of motion are solved with the help of the generalized differential quadrature method (GDQM), and the dynamic behavior of cylindrical composite sandwich shells exposed to underwater explosion is investigated. An algorithm that can work parametrically for the solution has been developed within the scope of the study.
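
The GDQM solution step mentioned above rests on weighting coefficient matrices that turn derivatives at grid points into matrix-vector products. A minimal sketch of the classical first-derivative weights (the Quan-Chang/Shu Lagrange-polynomial formula, a standard choice the abstract does not spell out) is:

```python
def dq_weights(xs):
    """First-derivative weighting matrix of the differential quadrature
    method on the grid points xs, via the Lagrange-polynomial formula:
    a_ij = M'(x_i) / ((x_i - x_j) M'(x_j)) for i != j,
    a_ii = -sum of the off-diagonal entries in row i."""
    n = len(xs)
    # M'(x_i) = product over k != i of (x_i - x_k)
    mp = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                mp[i] *= xs[i] - xs[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = mp[i] / ((xs[i] - xs[j]) * mp[j])
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a
```

Applying the matrix to nodal values of f(x) = x² on a three-point grid reproduces f'(x) = 2x exactly, since the weights differentiate polynomials up to degree n − 1 without error; the shell equations of motion are discretized with the same matrices applied along each coordinate.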

Keywords: cylindrical composite sandwich shells, generalized differential quadrature method, geometric nonlinear dynamic analysis, underwater explosion

Procedia PDF Downloads 164
2540 Shaping and Improving the Human Resource Management in Small and Medium Enterprises in Poland

Authors: Małgorzata Smolarek

Abstract:

One of the barriers to the development of small and medium-sized enterprises (SMEs) is the difficulty of managing human resources. The first part of the article defines the specifics of staff management in small and medium enterprises. The practical part presents the results of the author's own studies diagnosing the state of human resources management in small and medium-sized enterprises in Poland, taking into account its impact on the functioning of SMEs in a variable environment. This part presents the findings of empirical studies, which enabled verification of the hypotheses and formulation of conclusions. The findings presented in this paper were obtained during the implementation of the project entitled 'Tendencies and challenges in strategic managing SME in Silesian Voivodeship.' The aim of the studies was to diagnose the state of strategic management and human resources management, taking into account their impact on the functioning of small and medium enterprises operating in Silesian Voivodeship in Poland, and to indicate areas for improving the model under diagnosis. One of the specific objectives was to diagnose the state of the strategic human resources management process and to identify fundamental problems. In this area, the main hypothesis was formulated: the enterprises analysed do not have comprehensive strategies for human resources management. The survey was conducted by questionnaire. Main research results: human resource management in SMEs is characterized by simple procedures and a lack of sophisticated tools, and its specifics depend on the size of the company. The human resources management process in an SME has to be adjusted to the structure of the organisation and follow from its objectives, so that the organisation can fully implement its strategic plans and achieve success and competitive advantage on the market. A guarantee of success is an accurately developed human resources management policy based on prior analysis of the existing procedures and of the human resources possessed.

Keywords: human resources management, human resources policy, personnel strategy, small and medium enterprises

Procedia PDF Downloads 217
2539 Development of Map of Gridded Basin Flash Flood Potential Index: GBFFPI Map of QuangNam, QuangNgai, DaNang, Hue Provinces

Authors: Le Xuan Cau

Abstract:

Flash floods occur over short rainfall intervals, from 1 hour to 12 hours, in small and medium basins. Flash floods typically have two characteristics: large water flow and high flow velocity. A flash flood occurs at a hill-valley site (a strip of lowland in the terrain) in a catchment with a sufficiently large drainage area, a steep basin slope, and heavy rainfall. The risk of flash floods is determined through a Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash Flood Potential Index (FFPI) is determined from a terrain slope flash flood index, a soil erosion flash flood index, a land cover flash flood index, a land use flash flood index, and a rainfall flash flood index. To determine the GBFFPI, each cell in a map is considered as the outlet of a water accumulation basin, and the GBFFPI of the cell is computed as the basin-average FFPI of the corresponding water accumulation basin. Based on GIS, a tool is developed to compute the GBFFPI using the ArcObjects SDK for .NET. The maps of GBFFPI are built in two forms: GBFFPI including the rainfall flash flood index (for real-time flash flood warning) or GBFFPI excluding it. The GBFFPI tool can be used to identify high flash flood potential sites in a large region as quickly as possible. The GBFFPI improves on the conventional FFPI: its advantage is that it takes into account the basin response (the interaction of cells) and identifies true flash flood sites (strips of lowland in the terrain) more reliably, whereas the conventional FFPI considers single cells and does not account for the interaction between cells. The GBFFPI map of QuangNam, QuangNgai, DaNang and Hue has been built and exported to Google Earth. The obtained map confirms the scientific basis of the GBFFPI.
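
The basin-averaging idea above can be sketched with a toy flow graph: each cell points to the cell it drains into, and the GBFFPI of a cell is the mean FFPI over everything upstream of it, itself included. The dict-based representation below is an illustrative simplification of the raster flow-direction grids the actual tool processes.

```python
from collections import defaultdict, deque

def gbffpi(ffpi, downstream):
    """For each cell, average FFPI over all cells that drain to it
    (its water accumulation basin, the cell itself included).
    ffpi: dict cell -> FFPI value.
    downstream: dict cell -> the cell it flows into (outlets absent)."""
    upstream = defaultdict(list)          # reverse of the flow graph
    for cell, sink in downstream.items():
        upstream[sink].append(cell)
    result = {}
    for cell in ffpi:
        total, count = 0.0, 0
        queue = deque([cell])             # BFS over the basin of `cell`
        while queue:
            c = queue.popleft()
            total += ffpi[c]
            count += 1
            queue.extend(upstream[c])
        result[cell] = total / count
    return result
```

In a three-cell example where cells a and b both drain into c, the GBFFPI of a and b equals their own FFPI (single-cell basins), while c takes the average of all three, which is exactly how basin response distinguishes GBFFPI from the per-cell FFPI.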

Keywords: ArcObjects SDK for .NET, basin average value of FFPI, gridded basin flash flood potential index, GBFFPI map

Procedia PDF Downloads 347
2538 Method for Assessing Potential in Distribution Logistics

Authors: B. Groß, P. Fronia, P. Nyhuis

Abstract:

In addition to production, which is already frequently optimized, improving distribution logistics also opens up tremendous potential for increasing an enterprise's competitiveness. Here too, though, numerous interactions need to be taken into account; enterprises thus need to be able to identify and weigh different potentials for economically efficient optimization. In order to assess these potentials, enterprises require a suitable method. This paper first briefly presents the need for this research before introducing the procedure that will be used to develop an appropriate method, one that not only considers interactions but can also be implemented quickly and easily.

Keywords: distribution logistics, evaluation of potential, methods, model

Procedia PDF Downloads 476
2537 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but a number of areas remain where further work is required. These relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake input data for use in nonlinear analysis, and the method of analysis itself, are still challenging issues. Thus, realistic artificial ground motion input data must be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with those of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results due to the misinterpretation of the required input data and the incorrect synthesis of analysis hypotheses. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the structure's initial geometry due to residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, a structure that has experienced partial structural damage is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario is more complicated still if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect of SSI on the behaviour of structures, in order to simplify the analysis procedure. Therefore, designing structures to existing codes that neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software, with the effects of SSI included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model that does not include the effects of soil-structure interaction is also created. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case where SSI is included in the scenario. The results are validated against the literature.

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 99
2536 Statistical Characteristics of Distribution of Radiation-Induced Defects under Random Generation

Authors: P. Selyshchev

Abstract:

We consider fluctuations of the defect density, taking their interaction into account. A stochastic field of displacement generation rates gives rise to a random defect distribution. We determine the statistical characteristics (mean and dispersion) of the random field of point-defect distribution as a function of the defect generation parameters, the temperature and the properties of the irradiated crystal.
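The mean and dispersion of such a randomly driven defect density can be illustrated with a toy simulation (this is not the authors' model; the rate equation, parameters and noise form are assumptions). A point-defect density N evolves under a fluctuating generation rate K(t) with first-order annihilation, dN/dt = K(t) - N/tau; with mean rate K0, the stationary mean is K0*tau, and the dispersion grows with the noise amplitude of K(t).

```python
# Toy Monte Carlo of defect density under a randomly fluctuating generation
# rate (Euler-Maruyama integration of dN = (K0 - N/tau) dt + noise dW).
import random

def simulate(k0=1.0, tau=2.0, noise=0.2, dt=0.01, steps=100_000, seed=1):
    rng = random.Random(seed)
    n = k0 * tau              # start at the deterministic steady state
    samples = []
    sqrt_dt = dt ** 0.5
    for i in range(steps):
        k = k0 + noise * rng.gauss(0.0, 1.0) / sqrt_dt  # white-noise rate
        n += (k - n / tau) * dt
        if i > steps // 2:    # discard the transient half
            samples.append(n)
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, var

mean, var = simulate()
print(mean, var)  # mean close to K0*tau = 2.0; variance near noise^2*tau/2
```

The stationary variance of this linear model is noise^2 * tau / 2, so the dispersion of the defect distribution scales directly with the intensity of the generation-rate fluctuations.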

Keywords: irradiation, primary defects, interaction, fluctuations

Procedia PDF Downloads 302
2535 Reallocation of Bed Capacity in a Hospital Combining Discrete Event Simulation and Integer Linear Programming

Authors: Muhammed Ordu, Eren Demir, Chris Tofallis

Abstract:

The number of inpatient admissions in the UK has increased significantly over the past decade. These increases push bed occupancy rates above the target level (85%) set by the Department of Health in England, so hospital service managers are struggling to manage key resources such as beds. At the same time, this severe demand pressure can lead to confusion in wards; for example, patients may be admitted to the ward of another inpatient specialty due to a lack of beds. This study aims to develop a simulation-optimization model to reallocate the available beds in a mid-sized hospital in the UK. A hospital simulation model was developed to capture the stochastic behaviour of the hospital, taking into account the accident and emergency department, all outpatient and inpatient services, and the interactions between them. Several outputs of the simulation model (e.g., average length of stay and revenue) served as inputs to the optimization model. An integer linear programming model was developed under a number of constraints (financial, demand, target bed occupancy rate and staffing level) with the aim of maximizing the number of admitted patients. In addition, a sensitivity analysis was carried out to allow for unexpected increases in inpatient demand over the next 12 months. The approach proposed in this study optimally reallocates the available beds across the inpatient specialties and reveals that 74 beds are idle. The findings also indicate that the hospital wards will be able to cope with a demand increase of at most 14% in the projected year. In conclusion, this paper sheds new light on how best to reallocate beds in order to cope with current and future demand for healthcare services.
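The optimization step can be sketched in miniature: reallocate a fixed stock of beds across specialties to maximize admissions, with throughput per bed derived from a simulated average length of stay (ALOS) and the 85% target occupancy. The specialties, ALOS values and demand figures below are invented for illustration, and the tiny instance is solved by enumeration rather than by the integer linear programming solver the study uses.

```python
# Minimal sketch of bed reallocation as an optimization problem
# (hypothetical numbers; the real study feeds an ILP with DES outputs).
from itertools import product

TOTAL_BEDS = 30
TARGET_OCC = 0.85
ALOS   = {"medicine": 5.0, "surgery": 3.0, "elderly": 9.0}    # days (assumed)
DEMAND = {"medicine": 1800, "surgery": 2500, "elderly": 700}  # patients/yr (assumed)

def admissions(alloc):
    # beds * (bed-days available * occupancy / ALOS), capped by annual demand
    return sum(min(alloc[s] * 365 * TARGET_OCC / ALOS[s], DEMAND[s])
               for s in alloc)

best = None
for m, s in product(range(TOTAL_BEDS + 1), repeat=2):
    e = TOTAL_BEDS - m - s
    if e < 0:
        continue
    alloc = {"medicine": m, "surgery": s, "elderly": e}
    total = admissions(alloc)
    if best is None or total > best[0]:
        best = (total, alloc)

print(best)  # optimal allocation and the admissions it achieves
```

Even this toy version shows the mechanism: beds migrate to the specialty with the highest throughput per bed until its demand cap binds, which is the behaviour the full ILP exhibits at hospital scale.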

Keywords: bed occupancy rate, bed reallocation, discrete event simulation, inpatient admissions, integer linear programming, projected usage

Procedia PDF Downloads 118
2534 Modeling of Leaks Effects on Transient Dispersed Bubbly Flow

Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum

Abstract:

The leakage problem in two-component fluid flow is modeled for a transient, one-dimensional, homogeneous bubbly flow, taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are solved numerically by an improved method of characteristics. The results obtained are discussed in terms of their physical impact on the flow parameters.
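The numerical machinery can be illustrated with a deliberately simplified case: the method of characteristics (MOC) for a single-phase, frictionless liquid pipeline with an orifice-type leak at the midpoint. All parameters are assumed example values, and the bubbly two-phase conservation equations of the paper are not represented; only the characteristic update and the leak boundary condition are sketched. The time step is implicitly dx/a (Courant number 1), so dt never appears in the frictionless update.

```python
# MOC sketch for a pipe between two fixed-head reservoirs with a midpoint
# leak Q_leak = k*sqrt(H). C+ : H = Cp - B*Q ; C- : H = Cm + B*Q.
import math

N = 20; M = N // 2          # nodes 0..N, leak at the midpoint node M
A = 0.01                    # pipe cross-section, m^2 (assumed)
a = 1000.0                  # pressure wave speed, m/s (assumed)
g = 9.81
B = a / (g * A)             # characteristic impedance a/(g*A)
H0 = 50.0                   # fixed reservoir head at both ends, m

def run(k_leak, steps, q0=0.1):
    H = [H0] * (N + 1); Q = [q0] * (N + 1); qup = q0
    for _ in range(steps):
        Cp = [H[i-1] + B * Q[i-1] for i in range(1, N + 1)]
        Cm = [H[i+1] - B * (qup if i + 1 == M else Q[i+1]) for i in range(N)]
        Hn, Qn = H[:], Q[:]
        Qn[0] = (H0 - Cm[0]) / B                 # upstream reservoir
        for i in range(1, N):
            if i == M and k_leak > 0.0:
                # leak continuity: 2*H + B*k*sqrt(H) = Cp + Cm
                x = (-B*k_leak + math.sqrt((B*k_leak)**2
                                           + 8*(Cp[i-1] + Cm[i]))) / 4.0
                Hn[i] = x * x
                qup = (Cp[i-1] - Hn[i]) / B      # flow arriving from upstream
                Qn[i] = (Hn[i] - Cm[i]) / B      # flow continuing downstream
            else:
                Hn[i] = 0.5 * (Cp[i-1] + Cm[i])
                Qn[i] = (Cp[i-1] - Cm[i]) / (2 * B)
                if i == M:
                    qup = Qn[i]
        Qn[N] = (Cp[N-1] - H0) / B               # downstream reservoir
        H, Q = Hn, Qn
    return H, Q, qup

H, Q, qup = run(k_leak=1e-4, steps=200)
print(H[M], qup - Q[M])   # head at the leak node and the leak outflow
```

With the leak coefficient set to zero the scheme exactly preserves the initial steady state, which is a convenient sanity check before adding the two-phase physics.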

Keywords: fluid transients, pipelines leaks, method of characteristics, leakage problem

Procedia PDF Downloads 445
2533 GPS Signal Correction to Improve Vehicle Location during Experimental Campaign

Authors: L. Della Ragione, G. Meccariello

Abstract:

In recent years, the Italian automobile industry has made remarkable progress in reducing emission values. Nevertheless, evaluating and reducing emissions remains a key problem, especially in cities, which account for more than 50% of the world's population. In this paper we describe a quantitative approach for the reconstruction of GPS coordinates and altitude, in the context of a correlation study between driving cycles, emissions and geographical location carried out during an experimental campaign with instrumented cars.
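A generic illustration of this kind of track reconstruction (not the authors' specific procedure, whose details the abstract does not give) is to reject physically implausible jumps between consecutive fixes and then smooth the series; the thresholds and window below are arbitrary example values.

```python
# Clean a noisy 1 Hz GPS altitude track: drop implausible jumps, then apply
# a centred moving average. Illustrative only.
def clean_altitude(alt, max_jump=10.0, window=5):
    """alt: altitude samples (m) at 1 Hz. Returns a reconstructed series."""
    # 1) replace jumps larger than max_jump metres per sample with the
    #    previous accepted value
    fixed = [alt[0]]
    for x in alt[1:]:
        fixed.append(x if abs(x - fixed[-1]) <= max_jump else fixed[-1])
    # 2) centred moving average (window shrinks near the track ends)
    half = window // 2
    out = []
    for i in range(len(fixed)):
        seg = fixed[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

track = [100.0, 100.5, 101.0, 250.0, 101.5, 102.0, 102.5]  # one bad fix
print(clean_altitude(track))
```

Reconstructed altitude matters here because road grade feeds directly into the engine-load estimates used when correlating driving cycles with emissions.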

Keywords: air pollution, driving cycles, GPS signal, vehicle location

Procedia PDF Downloads 404
2532 Identification and Origins of Multiple Personality: A Criterion from Wiggins

Authors: Brittany L. Kang

Abstract:

One familiar theory of the origin of multiple personalities focuses on symptoms of trauma or abuse as central causes, as seen in paradigmatic examples of the condition. The theory states that multiple personality is a congenital condition, as all babies exhibit multiplicity, and that alters generally remain separated only because of trauma. In more typical cases, the alters converge and become a single identity; only in cases of trauma, on this account, do the alters remain separated. This theory is misleading in many respects, the most prominent being that not all multiple personality patients are victims of child abuse or trauma, nor are all cases of multiple personality observed in early childhood. The use of this criterion also causes clinical problems, including an inability to identify multiple personalities across the variety of symptoms and traits seen in observed cases. These issues call for a revision of the currently applied criterion, in order to separate out the notion of child abuse and to better understand the origins of multiple personality itself. Identifying multiplicity through the application of identity theories will improve the current criterion, offering a bridge between identifying existing cases and understanding their origins. We begin by applying arguments from Wiggins, who held that each personality within a multiple is not a whole individual but rather a character, with the characters switching off. Wiggins' theory is supported by observational evidence of how such characters are differentiated: alters of different ages have been seen to require different prescription lenses and to produce different handwriting, and alters may differ drastically in style of clothing, food preferences, gender, sexuality, religious beliefs and more.
Definitions of terms such as 'personality' or 'person' also become more precise, leading to a greater understanding of who exactly can be classified as a patient with multiple personalities. While the more common meaning of personality designates the specific characteristics that account for the entirety of a person, this paper argues from Wiggins' theory that each 'personality' is in fact only partial. Clarifying the concept in question will allow for more successful clinical applications in the future.

Keywords: identification, multiple personalities, origin, Wiggins' theory

Procedia PDF Downloads 204
2531 Ensemble of Misplacement, Juxtaposing Feminine Identity in Time and Space: An Analysis of Works of Modern Iranian Female Photographers

Authors: Delaram Hosseinioun

Abstract:

In their collections, Shirin Neshat, Mitra Tabrizian, Gohar Dashti and Newsha Tavakolian adopt a hybrid form of narrative to confront the restrictions imposed on women in hegemonic public and private spaces. Focusing on motifs such as social marginalisation, the crisis of belonging and women's lack of agency, the artists depict the regression of women's rights in their respective generations. Drawing on the ideas of Mikhail Bakhtin, namely his concept of polyphony or the plurality of contradictory voices, on Judith Butler's views on giving an account of oneself, and on Henri Lefebvre's theories of social space, this study illustrates the artists' concept of identity in crisis through time and space. The research explores how the artists took their art as a novel dimension in which to depict and confront the hardships imposed on Iranian women. Lefebvre distinguishes between the complex social structures through which individuals situate, perceive and represent themselves. By adding Bakhtin's polyphonic view to Lefebvre's concepts of perceived and lived spaces, the study explores the sense of social fragmentation in the works of Dashti and Tavakolian. One argument is that, as representatives of the contemporary generation of female artists who have spent their lives in Iran and faced a higher degree of restriction, their hyperbolic and theatrical styles stand as a symbolic act of confrontation against the restrictive socio-cultural norms imposed on women. Further, the research explores the possibility of reclaiming one's voice and sense of agency through art, corresponding with the Bakhtinian sense of polyphony and Butler's concept of giving an account of oneself. The works of Neshat and Tabrizian, as representatives of the previous generation, who faced exile and diaspora, encompass a higher degree of misplacement, violence and decay of women's presence. In their works, the women's body embodies Lefebvre's dismantled temporal and spatial setting.
Notably, the ongoing social convictions and gender-based dogma imposed on women frame some of the recurrent motifs across the selected collections of the four artists. By applying an interdisciplinary lens and drawing on interviews conducted with the artists, the study illustrates how the artists seek a transcultural account of themselves and of the women of their generations. Further, the selected collections manifest the urgency of an authentic and liberated voice and setting for women, resonating with the current Woman, Life, Freedom movement in Iran.

Keywords: Persian modern female photographers, transcultural studies, Shirin Neshat, Mitra Tabrizian, Gohar Dashti, Newsha Tavakolian, Butler, Bakhtin, Lefebvre

Procedia PDF Downloads 49
2530 Integration of LCA and BIM for Sustainable Construction

Authors: Laura Álvarez Antón, Joaquín Díaz

Abstract:

The construction industry is turning towards sustainability. It is a well-known fact that sustainability rests on a balance between environmental, social and economic aspects. In order to achieve sustainability efficiently, these three criteria should be taken into account from the initial project phases onwards, since that is when a project can be influenced most effectively. The aim must therefore be to integrate important tools like BIM and LCA at an early stage in order to make full use of their potential. The synergies resulting from the integration of BIM and LCA make a wider approach to sustainability possible, covering all three of its pillars.

Keywords: building information modeling (BIM), construction industry, design phase, life cycle assessment (LCA), sustainability

Procedia PDF Downloads 420
2529 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values

Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou

Abstract:

A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional ground motion IMs have been used to estimate damage potential, yet none has proved able to predict seismic damage adequately. Alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have therefore been proposed. Some of these IMs are based on integration of spectral values over a range of periods, in an attempt to account for the information provided by the shape of the acceleration, velocity or displacement spectrum. The present paper investigates the adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings. The investigated IMs, some structure-specific and some non-structure-specific, are defined via integration of spectral values. To this end, three plan-symmetric R/C buildings are studied. The buildings are subjected to 59 bidirectional earthquake ground motions, with the two horizontal accelerograms of each ground motion applied along the structural axes, and the response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of these seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings.
However, the adequacy of the IMs for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
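One representative member of this family of structure-specific IMs can be sketched as follows: compute the pseudo-acceleration spectrum Sa(T) of an elastic damped SDOF oscillator, then average it over a period band around the fundamental period T1. The accelerogram (an artificial sine pulse), the band limits and all parameter values below are assumptions for illustration, not the paper's exact IM definitions.

```python
# Spectrum-integrated intensity measure: average Sa over [0.5*T1, 1.5*T1].
import math

def sa(period, accel, dt, zeta=0.05):
    """Pseudo-spectral acceleration of a linear SDOF (explicit scheme)."""
    w = 2 * math.pi / period
    u_prev, u = 0.0, 0.0
    peak = 0.0
    for ag in accel:
        v = (u - u_prev) / dt
        u_next = 2*u - u_prev + dt*dt * (-ag - 2*zeta*w*v - w*w*u)
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return w * w * peak

def averaged_sa(accel, dt, t1, band=(0.5, 1.5), n=20):
    """Average Sa over [band[0]*t1, band[1]*t1] via the trapezoidal rule."""
    lo, hi = band[0] * t1, band[1] * t1
    ts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    vals = [sa(t, accel, dt) for t in ts]
    h = (hi - lo) / (n - 1)
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return integral / (hi - lo)

dt = 0.002
# artificial 1 Hz sine-pulse accelerogram, 0.3 g peak, 4 s duration
accel = [0.3 * 9.81 * math.sin(2 * math.pi * (i * dt)) for i in range(2000)]
print(averaged_sa(accel, dt, t1=0.8))  # band-averaged Sa in m/s^2
```

Averaging over a band rather than reading Sa at T1 alone is precisely what lets such IMs capture spectral-shape information, including period elongation as the structure yields.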

Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings

Procedia PDF Downloads 300
2528 Stability Analysis of Slopes during Pile Driving

Authors: Yeganeh Attari, Gudmund Reidar Eiksund, Hans Peter Jostad

Abstract:

In geotechnical practice, there is no industry-recognized standard method to account for the reduction in the safety factor of a slope caused by soil displacement and pore pressure build-up during pile installation. Pile driving causes large strains and generates excess pore pressures in a zone that can extend many diameters from the installed pile, reducing the shear strength of the surrounding soil. This phenomenon may cause slope failure. Moreover, dissipation of the excess pore pressures set up during installation may weaken areas outside the volume of soil remoulded during installation. Because of the complex interactions between changes in mean stress and shearing, predicting the installation-induced pore pressure response is challenging, and following the rate and path of pore pressure dissipation in a slope stability analysis is a complex task. In cohesive soils it is necessary to use soil models that account for strain softening. Several cases of slope failure due to pile driving have been reported in the literature, for instance a landslide in Gothenburg that destroyed more than thirty houses and the Rigaud landslide in Quebec, which resulted in loss of life. Several methods have been suggested to predict the effect of pile driving on total and effective stress, on pore pressure changes and on their consequences for soil strength, but these effects are still not well understood or agreed upon. In Norway, the general approaches applied by geotechnical engineers to this problem are based on old empirical methods with little rigorous theoretical background. While the limitations of such methods are discussed, this paper attempts to capture the reduction in the factor of safety of a slope during pile driving using coupled finite element analysis and the cavity expansion method. This is demonstrated by analyzing a case of slope failure due to pile driving in Norway.
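The cavity expansion method named above has a classical closed-form result for the installation-induced excess pore pressure: for an undrained cylindrical cavity in clay (a Randolph and Wroth-type solution), delta_u = 2*su*ln(R/r) inside a plastic zone of radius R = r0*sqrt(Ir), and zero outside, where Ir = G/su is the rigidity index. The soil parameters below are assumed illustrative values, not data from the Norwegian case study.

```python
# Closed-form cylindrical cavity expansion estimate of excess pore pressure
# around a driven pile in undrained clay.
import math

def excess_pore_pressure(r, r0, su, G):
    """Excess pore pressure (kPa) at radial distance r from a pile of radius r0."""
    ir = G / su                    # rigidity index
    rp = r0 * math.sqrt(ir)        # plastic zone radius
    if r < r0:
        raise ValueError("point lies inside the pile")
    return 2.0 * su * math.log(rp / r) if r < rp else 0.0

su, G, r0 = 30.0, 3000.0, 0.25     # kPa, kPa, m (assumed values)
for r in (0.25, 0.5, 1.0, 2.5, 5.0):
    print(r, round(excess_pore_pressure(r, r0, su, G), 1))
```

With Ir = 100 the plastic zone extends ten pile radii, which is why the abstract notes that disturbance can reach many diameters from the pile; in the coupled analysis this field is imposed on the slope and then allowed to dissipate.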

Keywords: cavity expansion method, excess pore pressure, pile driving, slope failure

Procedia PDF Downloads 124
2527 The TarMed Reform of 2014: A Causal Analysis of the Effects on the Behavior of Swiss Physicians

Authors: Camila Plaza, Stefan Felder

Abstract:

In October 2014, the TARMED reform was implemented in Switzerland. In an effort to even out the financial standing of general practitioners (including pediatricians) relative to that of specialists in the outpatient sector, the reform tackled two aspects. On the one hand, GPs became able to bill an additional 9 CHF per patient, once per consult per day; this is referred to as the surcharge position. On the other, the reform reduced the fees for certain technical services provided mainly by specialists (e.g., imaging and surgical technical procedures). Given the fee-for-service reimbursement system in Switzerland, we predict that physicians reacted to the economic incentives of the reform by increasing the number of consults per patient and decreasing the average time per consult. Within this framework, our treatment group consists of GPs and our control group of those specialists who were not affected by the reform. Using monthly insurance claims panel data aggregated at the physician practice level (provided by SASIS AG) for the period January 2013 to December 2015, we run difference-in-differences panel data models with physician and time fixed effects in order to test for the causal effects of the reform. We account for seasonality and control for physician characteristics such as age, gender, specialty and experience. Furthermore, we run the models on subgroups of physicians within our sample so as to account for heterogeneity and treatment intensities. Preliminary results support our hypothesis: we find evidence of an increase in consults per patient and a decrease in time per consult. Robustness checks do not significantly alter the results for consults per patient, although we find a smaller effect of the reform on time per consult.
The results of this paper could thus give policymakers a better understanding of physician behavior and of physicians' sensitivity to the financial incentives of past and future reforms under the current reimbursement system.
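The identifying idea of the difference-in-differences design can be shown in miniature on synthetic data (illustrative only; the study estimates panel models with physician and time fixed effects on claims data): the estimate is the change in the treated group minus the change in the control group, which nets out any time trend common to both.

```python
# Minimal 2x2 difference-in-differences estimator on synthetic data with a
# known treatment effect and a common post-period shock.
import random

def did(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(y_treat_post) - mean(y_treat_pre))
            - (mean(y_ctrl_post) - mean(y_ctrl_pre)))

rng = random.Random(0)
true_effect = 0.4       # e.g. extra consults per patient after the reform
common_shock = 0.1      # time trend affecting treated and controls alike
tp  = [2.0 + rng.gauss(0, 0.1) for _ in range(500)]   # GPs, pre-reform
tpo = [2.0 + common_shock + true_effect + rng.gauss(0, 0.1)
       for _ in range(500)]                            # GPs, post-reform
cp  = [1.5 + rng.gauss(0, 0.1) for _ in range(500)]   # specialists, pre
cpo = [1.5 + common_shock + rng.gauss(0, 0.1) for _ in range(500)]  # post
print(did(tp, tpo, cp, cpo))  # recovers roughly the true effect of 0.4
```

A simple treated post-minus-pre comparison would return 0.5 here, conflating the common shock with the reform; the subtraction of the control-group change is what delivers the causal interpretation, under the parallel-trends assumption.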

Keywords: difference in differences, financial incentives, health reform, physician behavior

Procedia PDF Downloads 102
2526 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor

Authors: D. Ramajo, S. Corzo, M. Nigro

Abstract:

A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed to simulate the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the reactor pressure vessel (RPV) type, detailed three-dimensional (3D) models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code that models the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations account for the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses, and a local power distribution is imposed on each coolant channel. The heat transfer between the coolant and the surrounding moderator is calculated with a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows the in-core steam generation to be estimated under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions agree with the design expectations, providing a first assessment of the coupled 1D/3D model. Results for the nominal condition were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and revealing low void fractions inside the upper plenum. It should be noted that the current results were obtained by imposing prescribed fission power functions from the literature; they are therefore presented with the aim of pointing out the potential of the developed model.
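The kind of per-channel pressure balance the 1D code evaluates can be sketched with generic textbook correlations: distributed friction via Darcy-Weisbach with a Blasius friction factor, plus concentrated losses for spacer grids and throttles. All numbers below are assumed illustrative values, not the plant's actual correlations or channel data.

```python
# Single-phase pressure drop for one coolant channel: distributed friction
# plus concentrated (grid/throttle) losses.
import math

def channel_dp(m_dot, rho, mu, d_h, length, k_concentrated):
    """Pressure drop (Pa) across one channel for mass flow m_dot (kg/s)."""
    area = math.pi * d_h**2 / 4
    v = m_dot / (rho * area)                 # mean coolant velocity
    re = rho * v * d_h / mu
    f = 0.316 * re**-0.25                    # Blasius factor (smooth, turbulent)
    dp_friction = f * (length / d_h) * 0.5 * rho * v**2
    dp_local = sum(k_concentrated) * 0.5 * rho * v**2
    return dp_friction + dp_local

# assumed channel data: hot-water-like density, 6 m heated length,
# loss coefficients for spacer grids plus inlet/outlet throttles
dp = channel_dp(m_dot=5.0, rho=850.0, mu=1.0e-4, d_h=0.1, length=6.0,
                k_concentrated=[0.5, 0.5, 0.7, 1.2])
print(dp)  # total channel pressure drop in Pa
```

Solving this balance for all 451 channels against the plenum-to-plenum pressure difference supplied by the 3D model is what fixes the in-core mass flow distribution in the coupled scheme.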

Keywords: PHWR, CFD, thermo-hydraulic, two-phase flow

Procedia PDF Downloads 441
2525 Translation and Adaptation of the Assessment Instrument “Kiddycat” for European Portuguese

Authors: Elsa Marta Soares, Ana Rita Valente, Cristiana Rodrigues, Filipa Gonçalves

Abstract:

Background: Assessing the feelings and attitudes of preschool children towards their stuttering is crucial, since negative experiences can lead to anxiety, worry or frustration. To avoid the worsening of attitudes and feelings related to stuttering, early detection is important so that intervention can begin as soon as possible through an individualized intervention plan. It is therefore important to have Portuguese instruments that allow this assessment. Aims: The aim of the present study is to translate and adapt the Communication Attitude Test for Children in Preschool Age and Kindergarten (KiddyCat) for European Portuguese (EP). Methodology: The translation and adaptation process followed a methodological study with the following steps: translation, back-translation, assessment by a committee of experts, and pre-test. This abstract describes the results of the first two phases of this process. The translation was accomplished by two bilingual individuals without experience in health care or any knowledge of the instrument: an English teacher and a professional translator. The back-translation was conducted by two senior class teachers living in the United Kingdom, likewise without any background in health care or knowledge of the instrument. Results and Discussion: The translations differed in the semantic equivalents of various expressions and concepts. A discussion between the two translators, mediated by the researchers, produced a consensus version of the translated instrument. Compared with the original KiddyCat, the back-translated versions were similar to the original assessment instrument: although the back-translators used different words, these were synonymous, maintaining the semantic and idiomatic equivalence of the instrument's items.
Conclusion: This project contributes an important resource that can be used to assess the feelings and attitudes of preschool children who stutter. This was the first phase of the research; the expert panel and pretest are under way. It is expected that this instrument will contribute to a holistic therapeutic intervention that takes into account the individual characteristics of each child.

Keywords: assessment, feelings and attitudes, preschool children, stuttering

Procedia PDF Downloads 126
2524 The Tramway in French Cities: Complication of Public Spaces and Complexity of the Design Process

Authors: Elisa Maître

Abstract:

The redeployment of tram networks in French cities has considerably modified public spaces and the way citizens use them. Beyond the image of trams as contributing to sustainable urban development, the question of user safety in these spaces has received little study. This study analyses the use of public spaces laid out for trams from the standpoint of legibility and safety, and examines to what extent the complexity of the design process, with many interactions between its numerous and varied players, plays a role in the genesis of these problems. The work is mainly based on analysing the links between the uses of these redesigned public spaces (through observations, user interviews and accident studies) and the conditions and processes under which the projects were designed (mainly through interviews with the actors of these projects). The practical analyses adopted three points of view: that of the planner, that of the user (based on observations and interviews) and that of the road safety expert. The cities of Montpellier, Marseille and Nice are the three fields of study on which the demonstration of this thesis is based. On the one hand, the results show that the insertion of trams complicates the public spaces of French cities: the restructuring of public spaces for the tram creates difficulties of use and safety concerns. On the other hand, in-depth analyses of the fully transcribed interviews have led us to develop scenarios of particular dysfunctions in the design process. These elements lead us to question the way the legibility and safety of these new forms of public space are taken into account.
An in-depth analysis of the design processes of public spaces with tram systems would then also be a way of better understanding the choices made, the compromises accepted, and the conflicts and constraints at work that weigh on the layout of these spaces. The results presented concerning the impact of spaces laid out for trams on difficulty of use suggest different possibilities for improving the way the safety of all users is taken into account when designing public spaces.

Keywords: public spaces, road layout, users, design process of urban projects

Procedia PDF Downloads 207
2523 Planning a Haemodialysis Process by Minimum Time Control of Hybrid Systems with Sliding Motion

Authors: Radoslaw Pytlak, Damian Suski

Abstract:

The aim of the paper is to provide a computational tool for planning a haemodialysis process. It is shown that optimization methods can be used to obtain the most effective treatment, focused on removing both urea and phosphorus during the process. To achieve this, a four-compartment model of phosphorus kinetics is applied. This kinetic model takes into account the rebound phenomenon that can occur during haemodialysis, and results in a hybrid model of the process. Furthermore, the vector fields associated with the model equations are such that using the most intuitive objective functions in the planning problem is very likely to lead to solutions that include sliding motions. Building computational tools for planning a haemodialysis process has therefore required constructing numerical algorithms for solving optimal control problems with hybrid systems. The paper concentrates on minimum time control of hybrid systems, since this control objective is the most suitable for the haemodialysis process considered here. The presented approach to optimal control problems with hybrid systems differs from others in several respects. First, it is assumed that a hybrid system can exhibit sliding modes. Secondly, the system's motion on the switching surface is described by index-2 differential-algebraic equations, which guarantees accurate tracking of the sliding motion surface. Thirdly, the gradients of the problem's functionals are evaluated with the help of adjoint equations; the adjoint equations presented in the paper take sliding motion into account and exhibit jump conditions at transition times. The optimality conditions are stated in the form of the weak maximum principle for optimal control problems with hybrid systems exhibiting sliding modes and with piecewise constant controls. The presented sensitivity analysis can be used to construct globally convergent algorithms for solving the considered problems.
The paper presents numerical results of solving the haemodialysis planning problem.

Keywords: haemodialysis planning process, hybrid systems, optimal control, sliding motion

Procedia PDF Downloads 169
2522 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, no procedure is available to take account of loss dependencies at the story level. This paper presents a method, called the 'modal cost superposition method', for decoupling story damage costs under earthquake ground motions. The method works with closed-form differential equations between damage cost and engineering demand parameters, which must be solved as a coupled system comprising all stories' cost equations by means of the introduced 'substituted matrices of mass and stiffness'. Costs are treated as probabilistic variables with definite statistics (median and standard deviation) and a presumed probability distribution. To supplement the proposed procedure, and to demonstrate the straightforwardness of its application, a benchmark study has been conducted. Acceptable compatibility is found between the damage costs estimated for the entire building by the proposed modal approach and by the frequently used stochastic approach. At the story level, however, using a single modification factor to incorporate occurrence probability dependencies between stories proves insufficient, owing to the discrepant degrees of dependency between the damage costs of different stories. A greater contribution of dependency to the occurrence probability of loss can also be concluded from the better agreement of the loss results in the higher stories than in the lower ones, whereas reducing the number of included cost modes still provides an acceptable level of accuracy and avoids the time-consuming calculations involved in retaining a large number of cost modes.

Keywords: dependency, story-cost, cost modes, engineering demand parameter

Procedia PDF Downloads 152
2521 LES Simulation of a Thermal Plasma Jet with Modeled Anode Arc Attachment Effects

Authors: N. Agon, T. Kavka, J. Vierendeels, M. Hrabovský, G. Van Oost

Abstract:

A plasma jet model was developed with a rigorous method for calculating the thermophysical properties of the gas mixture without mixing rules. A simplified model approach to account for the anode effects was incorporated in this model to allow the valorization of the simulations with experimental results. The radial heat transfer was under-predicted by the model because of the limitations of the radiation model, but the calculated evolution of centerline temperature, velocity and gas composition downstream of the torch exit corresponded well with the measured values. The CFD modeling of thermal plasmas is either focused on development of the plasma arc or the flow of the plasma jet outside of the plasma torch. In the former case, the Maxwell equations are coupled with the Navier-Stokes equations to account for electromagnetic effects which control the movements of the anode arc attachment. In plasma jet simulations, however, the computational domain starts from the exit nozzle of the plasma torch and the influence of the arc attachment fluctuations on the plasma jet flow field is not included in the calculations. In that case, the thermal plasma flow is described by temperature, velocity and concentration profiles at the torch exit nozzle and no electromagnetic effects are taken into account. This simplified approach is widely used in literature and generally acceptable for plasma torches with a circular anode inside the torch chamber. The unique DC hybrid water/gas-stabilized plasma torch developed at the Institute of Plasma Physics of the Czech Academy of Sciences on the other hand, consists of a rotating anode disk, located outside of the torch chamber. Neglecting the effects of the anode arc attachment downstream of the torch exit nozzle leads to erroneous predictions of the flow field. 
With the simplified approach introduced in this model, the Joule heating between the exit nozzle and the anode attachment position of the plasma arc is modeled by a volume heat source and the jet deflection caused by the anode processes by a momentum source at the anode surface. Furthermore, radiation effects are included by the net emission coefficient (NEC) method and diffusion is modeled with the combined diffusion coefficient method. The time-averaged simulation results are compared with numerous experimental measurements. The radial temperature profiles were obtained by spectroscopic measurements at different axial positions downstream of the exit nozzle. The velocity profiles were evaluated from the time-dependent evolution of flow structures, recorded by photodiode arrays. The shape of the plasma jet was compared with charge-coupled device (CCD) camera pictures. In the cooler regions, the temperature was measured by enthalpy probe downstream of the exit nozzle and by thermocouples in radial direction around the torch nozzle. The model results correspond well with the experimental measurements. The decrease in centerline temperature and velocity is predicted within an acceptable range and the shape of the jet closely resembles the jet structure in the recorded images. The temperatures at the edge of the jet are underestimated due to the absence of radial radiative heat transfer in the model.

Keywords: anode arc attachment, CFD modeling, experimental comparison, thermal plasma jet

Procedia PDF Downloads 338