Search results for: dividend distribution policy
6512 Understanding Talent Management in French Small and Medium-Sized Enterprises: Towards Multi-Level Modeling
Authors: Abid Kousay
Abstract:
Appeared and developed essentially in large companies and multinationals, Talent Management (TM) in Small and Medium-Sized Enterprises (SMEs) has remained an under-explored subject to this day. Although the literature on TM in the Anglo-Saxon context is developing, research remains scarce in European contexts, especially in France. This article therefore aims to address these shortcomings by contributing to TM issues through a multilevel approach, with the goal of reaching a holistic view of the interactions between the various levels at which TM is applied. A qualitative research study carried out within 12 SMEs in France, built on the methodological perspective of grounded theory, is used in order to go beyond description and generate or discover a theory, or even a unified theoretical explanation. Our theoretical contributions are the results of the grounded theory, the fruit of context considerations and the dynamics of the multilevel approach. We aim firstly to determine the perception of talent and TM in SMEs. Secondly, we formalize TM in SMEs through the empowerment of all three levels in the organization (individual, collective, and organizational), and we generate a multilevel dynamic system model, highlighting the institutionalization dimension in SMEs and the managerial conviction characterized by the domination of the leader's role. Thirdly, this first study sheds light on the importance of rigorous implementation of TM in SMEs in France by directing CEOs and HR and TM managers to focus on the elements that lie upstream of TM implementation and influence the system internally. Indeed, our systematic multilevel approach reminds them of the importance of strategic alignment when translating TM policy into strategies and practices in SMEs.
Keywords: French context, multilevel approach, talent management, TM system
Procedia PDF Downloads 216
6511 Global Healthcare Village Based on Mobile Cloud Computing
Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar
Abstract:
Cloud computing, the use of hardware and software delivered as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers call for an efficient scheme that makes health data available with a short response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models, namely the country, continent, and global health clouds, to help solve the problem mentioned above. In the continent model, two data centers are created, one local and one global. The local data center serves requests originating within the continent, whereas the global one serves all other requests. With the methods adopted, relevant medical data are assured to be available to patients, specialists, and emergency staff regardless of location and time. From our intensive simulation experiments, we observed that a service broker policy optimized for response time yields very good performance in terms of response-time reduction. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines), with the proportional increase in response time staying within 9%. The simulation results show that utilizing MGHV reduces health care expenditures and helps address the shortage of qualified medical staff faced by both developed and developing countries.
Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy
Procedia PDF Downloads 377
6510 Multi-Agent System Based Distributed Voltage Control in Distribution Systems
Authors: A. Arshad, M. Lehtonen, M. Humayun
Abstract:
With the increasing Distributed Generation (DG) penetration, distribution systems are advancing towards the smart grid technology for least latency in tackling voltage control problem in a distributed manner. This paper proposes a Multi-agent based distributed voltage level control. In this method a flat architecture of agents is used and agents involved in the whole controlling procedure are On Load Tap Changer Agent (OLTCA), Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining voltage value within statutory limits as close as possible to the nominal. The total loss cost is the sum of network losses cost, DG curtailment costs, and voltage damage cost (which is based on penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting voltage damage cost and losses cost against varying voltage limit band. The method provides the optimal limits closer to nominal value with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions; downstream from the controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At first, a token is generated by OLTCA on each time step and it transfers from node to node until the node with voltage violation is detected. Upon detection of such a node, the token grants permission to Load Agent (LA) for initiation of possible remedial actions. LA will contact the respective controlling devices dependent on the vicinity of the violated node. If the violated node does not lie in the vicinity of the controller or the controlling capabilities of all the downstream control devices are at their limits then OLTC is considered as a last resort. 
For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC) and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss cost. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is less significant. Another observation is that the stricter limits calculated by cost optimization move towards the statutory limits of ±10% of the nominal with increasing DG penetration: for 25, 45, and 65% penetration, the calculated limits are ±5, ±6.25, and ±8.75%, respectively. The observed results lead to the conclusion that the voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly but with higher losses, whereas case 2 gradually reduces the network losses through the proposed iterative loss cost optimization performed by the OLTCA.
Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids
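As an illustration of the iterative loss-cost trade-off this abstract describes, the sketch below evaluates a total cost (network losses + DG curtailment + a quadratic voltage-damage penalty) over candidate voltage limit bands and picks the minimum, as the supervisory OLTCA would. All cost functions and numbers are hypothetical stand-ins, not the authors' model.

```python
# Illustrative sketch of choosing the optimal voltage limit band by
# minimizing total loss cost. The three cost terms below are invented
# for illustration: losses and curtailment fall as the band widens,
# while the voltage-damage penalty grows with the band.

def total_cost(band_percent):
    """Total loss cost for a given voltage limit band (% of nominal)."""
    loss_cost = 100.0 / band_percent        # network losses (hypothetical)
    curtailment_cost = 50.0 / band_percent  # DG curtailment (hypothetical)
    damage_cost = 0.6 * band_percent ** 2   # quadratic penalty function
    return loss_cost + curtailment_cost + damage_cost

def optimal_band(candidates):
    """Pick the band minimizing total cost over the candidate set."""
    return min(candidates, key=total_cost)

bands = [2.5, 5.0, 6.25, 8.75, 10.0]  # candidate bands, % of nominal
best = optimal_band(bands)
```

With these invented coefficients the optimum lies strictly inside the statutory ±10% band, mirroring the abstract's observation that cost optimization yields limits stricter than the statutory ones.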
Procedia PDF Downloads 312
6509 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth
Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Woody vegetation near the upper limit of its habitat is a sensitive indicator of the biota's reaction to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon selected data from ground-level measurements and ultra-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow obtaining probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40, and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed over recent decades: first, intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodlands, and sparse stands, and second, expansion into mountain tundra. The current expansion of the Siberian larch in the region replaced the depopulation process that occurred in the course of the Little Ice Age from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using data from field measurements of Siberian larch specimen biometric parameters (including height, diameter at the root collar and at 1.3 meters, and maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model comprising two layers with three and two neurons, respectively.
The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection. Tree height and crown diameters can in turn be quantitatively assessed using data from aerial photographs and lidar scans. The resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundary in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
Keywords: treeline, dynamics, climate, modeling
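The Weibull-based age classification described in this abstract can be sketched as follows: each age class is modeled by a Weibull distribution of the maximum crown projection, and a tree is assigned to the class with the highest likelihood for its crown diameter. The shape/scale parameters below are hypothetical, not the study's fitted values.

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull probability density at x (x > 0)."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

# Hypothetical (shape, scale) per age class; crown diameter in metres.
AGE_CLASSES = {
    "1-10 yr": (2.0, 0.4),
    "11-40 yr": (2.5, 1.2),
    ">40 yr": (3.0, 2.5),
}

def classify(crown_diameter_m):
    """Assign the age class with the highest Weibull likelihood."""
    return max(AGE_CLASSES,
               key=lambda c: weibull_pdf(crown_diameter_m, *AGE_CLASSES[c]))
```

A small sapling-sized crown then lands in the youngest class and a large crown in the oldest, which is the probabilistic assignment the abstract refers to.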
Procedia PDF Downloads 83
6508 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets
Authors: Haripriya Chakraborty
Abstract:
The healthcare system in the United States is considered to be one of the most inefficient and expensive systems when compared to other developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that are shown to have a negative effect on health outcomes with possible life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims at providing a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing these inefficiencies. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective into the nature of some interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.Keywords: agent-based modeling, healthcare reform, insurance markets, public policy
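To make the agent-based idea concrete, here is a toy sketch of one rule such a model might contain: each individual agent buys the cheapest plan costing no more than a fixed share of income, and a premium subsidy is toggled to see its effect on the uninsured rate. The premiums, the 10% affordability rule, and the subsidy level are all invented for illustration; this is not the paper's model.

```python
import random

random.seed(42)

PREMIUMS = {"private": 400.0, "public": 250.0}  # monthly premiums (hypothetical)

def choose_plan(income, subsidy=0.0):
    """An agent buys the cheapest plan costing <= 10% of monthly income."""
    affordable = {plan: cost - subsidy for plan, cost in PREMIUMS.items()
                  if cost - subsidy <= 0.10 * income}
    if not affordable:
        return "uninsured"
    return min(affordable, key=affordable.get)

def uninsured_rate(incomes, subsidy=0.0):
    choices = [choose_plan(i, subsidy) for i in incomes]
    return choices.count("uninsured") / len(choices)

incomes = [random.uniform(1000, 6000) for _ in range(1000)]
base = uninsured_rate(incomes)            # no subsidy
reform = uninsured_rate(incomes, 150.0)   # subsidy policy, ACA-style in spirit
```

Even this toy version reproduces the qualitative effect the abstract examines: the subsidy policy lowers the uninsured rate relative to the baseline.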
Procedia PDF Downloads 138
6507 Drinking Water Quality of Lahore Pakistan: A Comparison of Quality of Drinking Water from Source and Distribution System
Authors: Zainab Abbas Soharwardi, Chunli Su, Fazeelat Tahira, Syed Zahid Aziz
Abstract:
The study monitors the quality of drinking water consumed by the urban population of Lahore. A total of 50 drinking water samples (16 from the source and 34 from the distribution system) were examined for physical, chemical, and bacteriological parameters. The parameters analyzed included pH, turbidity, electrical conductivity, total dissolved solids, total hardness, calcium, magnesium, total alkalinity, carbonate, sulphate, chloride, nitrite, fluoride, sodium, and potassium. Sixteen out of fifty samples showed high values of alkalinity compared to EPA standards and WHO guidelines. Twenty-eight samples were analyzed for the heavy metals chromium, iron, copper, zinc, cadmium, and lead. Trace amounts of heavy metals were detected in some samples; for most samples the values were within the permissible limits, although a high concentration of zinc was detected in one sample collected from the Mughal Pura area. Fifteen samples were analyzed for arsenic, with unsatisfactory results: around 73% of the samples exceeded the WHO limit for As. WHO suggests a permissible limit for arsenic of < 0.01 ppm, whereas 27% of the samples showed 0.05 ppm arsenic, five times the WHO highest permissible limit. All samples were examined for E. coli bacteria. On the basis of the bacteriological analysis, 42% of the samples did not meet WHO guidelines and were unsafe for drinking.
Keywords: arsenic, heavy metals, ground water, Lahore
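The compliance screening described here reduces to checking each sample against guideline values. A minimal sketch, using only the arsenic guideline of 0.01 ppm quoted in the abstract; the sample values themselves are hypothetical:

```python
# Screen water samples against WHO guideline values; here only arsenic
# is checked (limit of 0.01 ppm as cited in the abstract).

WHO_LIMITS_PPM = {"arsenic": 0.01}

def exceeds_limits(sample):
    """Return the parameters for which a sample exceeds the WHO limit."""
    return [p for p, limit in WHO_LIMITS_PPM.items()
            if sample.get(p, 0.0) > limit]

samples = [
    {"id": "S1", "arsenic": 0.05},   # 5x the WHO guideline
    {"id": "S2", "arsenic": 0.004},  # within limits
    {"id": "S3", "arsenic": 0.05},
]
unsafe = [s["id"] for s in samples if exceeds_limits(s)]
share_unsafe = len(unsafe) / len(samples)
```

The same dictionary can be extended with the other parameters (pH, turbidity, heavy metals) to reproduce the full screening table.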
Procedia PDF Downloads 342
6506 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces
Authors: Shweta Singh, Sudaman Katti
Abstract:
The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory, are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in Unity software were obtained. Convolutional neural networks, more specifically, nature CNN architecture, are used for input processing in visual tasks, and comparison with standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique used popularly in reinforcement learning for stability and better policy updates while training, especially for continuous action spaces, which are used in this research work. Certain tasks in this paper are long horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and self-attention mechanism in an encoder-decoder configuration proved to have better performance when compared to LSTM in terms of exploration and rewards achieved. Such memory based architectures can be used extensively in the field of cognitive robotics and reinforcement learning.Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity
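The memory read at the core of the self-attention mechanism this abstract relies on can be sketched in a few lines: a query is scored against stored keys, the scores are softmax-normalized, and the values are averaged with those weights. This is a pure-Python, single-query illustration of scaled dot-product attention with no learned projections, not the paper's actual network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a memory."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

memory_keys = [[1.0, 0.0], [0.0, 1.0]]
memory_values = [[10.0, 0.0], [0.0, 10.0]]
recalled = attention([4.0, 0.0], memory_keys, memory_values)
```

A query aligned with the first key recalls mostly the first value, which is the "choosing appropriate actions based on recall" behavior the abstract attributes to the transformer memory.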
Procedia PDF Downloads 136
6505 Towards a Multilevel System of Talent Management in Small and Medium-Sized Enterprises: French Context Exploration
Authors: Abid Kousay
Abstract:
Appeared and developed essentially in large companies and multinationals, Talent Management (TM) in Small and Medium-Sized Enterprises (SMEs) has remained an under-explored subject to this day. Although the literature on TM in the Anglo-Saxon context is developing, research remains scarce in European contexts, especially in France. This article therefore aims to address these shortcomings by contributing to TM issues through a multilevel approach, with the goal of reaching a holistic view of the interactions between the various levels at which TM is applied. A qualitative research study carried out within 12 SMEs in France, built on the methodological perspective of grounded theory, is used in order to go beyond description and generate or discover a theory, or even a unified theoretical explanation. Our theoretical contributions are the results of the grounded theory, the fruit of context considerations and the dynamics of the multilevel approach. We aim firstly to determine the perception of talent and TM in SMEs. Secondly, we formalize TM in SMEs through the empowerment of all three levels in the organization (individual, collective, and organizational), and we generate a multilevel dynamic system model, highlighting the institutionalization dimension in SMEs and the managerial conviction characterized by the domination of the leader's role. Thirdly, this first study sheds light on the importance of rigorous implementation of TM in SMEs in France by directing CEOs and HR and TM managers to focus on the elements that lie upstream of TM implementation and influence the system internally. Indeed, our systematic multilevel approach reminds them of the importance of strategic alignment when translating TM policy into strategies and practices in SMEs.
Keywords: French context, institutionalization, talent, multilevel approach, talent management system
Procedia PDF Downloads 200
6504 Study on Natural Light Distribution Inside the Room by Using Sudare as an Outside Horizontal Blind in Tropical Country of Indonesia
Authors: Agus Hariyadi, Hiroatsu Fukuda
Abstract:
In a tropical country like Indonesia, especially in Jakarta, most building energy consumption goes to the cooling system, followed by electric lighting. One passive design strategy that can be applied is optimizing the use of natural light from the sun. In this area, natural light is available almost every day throughout the year. Natural light has several effects on a building: it can reduce the need for electric lighting, but it also increases the external load. Another thing that has to be considered in the use of natural light is the visual comfort of occupants inside the room. Optimizing the effectiveness of natural light requires some modification of the façade design. An external shading device can minimize the external load introduced into the room, especially direct solar radiation, which makes up 80% of the external energy load entering the building. It can also control the distribution of natural light inside the room and minimize glare in the perimeter zone of the room. One horizontal blind that can be used for this purpose is the Sudare, a traditional Japanese blind that has long been used in traditional Japanese houses, especially in summer. In its original function, the Sudare prevents direct solar radiation while still admitting natural ventilation, and it has physical characteristics that can be utilized to optimize the effectiveness of natural light. In this research, different scales of Sudare are simulated using the EnergyPlus and DAYSIM simulation software. EnergyPlus is a whole-building energy simulation program that models both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings, while DAYSIM is a validated, RADIANCE-based daylighting analysis software that models the annual amount of daylight in and around buildings. The modelling is done in the Ladybug and Honeybee plugins.
These are two open-source plugins for Grasshopper and Rhinoceros 3D that help explore and evaluate environmental performance and connect directly to the EnergyPlus and DAYSIM engines. Using the same model maintains consistency of the geometry used in both EnergyPlus and DAYSIM. The aim of this research is to find the façade design configuration that best reduces the external load from outside the building, minimizing the energy needed for cooling, while maintaining the natural light distribution inside the room to maximize visual comfort for occupants and minimize electrical energy consumption.
Keywords: façade, natural light, blind, energy
Procedia PDF Downloads 345
6503 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region
Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski
Abstract:
Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution, and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted in order to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 and 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed, and atmospheric thermodynamic data were added to the initiation and dissipation times and locations of the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies using a lightning tracking algorithm and lightning detection network data was found to be useful for visualizing the spatial and temporal distribution of urban-augmented thunderstorms in the region.
Keywords: lightning, urbanization, thunderstorms, climatology
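The space-time clustering step described in this abstract can be sketched with a simple greedy rule: a flash joins an existing storm event if it falls within a maximum time gap and distance of that event's most recent flash, otherwise it starts a new event. The thresholds and the single-link rule here are illustrative assumptions, not the study's actual algorithm parameters.

```python
import math

MAX_GAP_MIN = 30.0   # max time gap to the storm's most recent flash
MAX_DIST_KM = 20.0   # max distance to the storm's most recent flash

def cluster_flashes(flashes):
    """Group (t_minutes, x_km, y_km) flashes into thunderstorm events."""
    storms = []
    for t, x, y in sorted(flashes):       # process in time order
        for storm in storms:
            lt, lx, ly = storm[-1]        # most recent flash in this storm
            if (t - lt <= MAX_GAP_MIN
                    and math.hypot(x - lx, y - ly) <= MAX_DIST_KM):
                storm.append((t, x, y))
                break
        else:
            storms.append([(t, x, y)])    # start a new thunderstorm event
    return storms

flashes = [(0, 0, 0), (5, 2, 1), (10, 4, 2),   # one slowly moving storm
           (12, 90, 90), (200, 0, 0)]           # far away / much later
events = cluster_flashes(flashes)
```

Here the three nearby, closely spaced flashes form one event, while the distant flash and the much later flash each start their own, yielding three events in total.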
Procedia PDF Downloads 76
6502 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we propose an approach for detecting the behavior of viewers of a TV program in a non-controlled environment. The experiment we propose is based on the use of three types of connected objects (a smartphone, a smart watch, and a connected remote control). 23 participants were observed while watching their TV programs during three phases: before, during, and after watching a TV program. Their behaviors were detected using an approach based on the Dempster-Shafer Theory (DST), in two phases: the first phase dynamically approximates the mass functions using an approach based on the correlation coefficient, and the second phase calculates the approximated mass functions. To approximate the mass functions, two approaches were tested. The first was to divide each feature's data space into cells, each with a specific probability distribution over the behaviors; the probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors through classifier algorithms and add uncertainty to the prediction based on the uncertainty of the model. Results showed that mixing the fusion rule with the computation of the initial approximate mass functions using a classifier led to overall success rates of 96%, 95%, and 96% for the first, second, and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs using everyday connected objects, taking into account the various uncertainties that can be generated.
Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment
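The DST fusion step at the heart of this framework is Dempster's rule of combination. A minimal sketch: two mass functions over the same frame of discernment (here three hypothetical viewing behaviors, not the paper's actual classes) are combined, conflicting mass is discarded, and the result renormalized.

```python
# Dempster's rule of combination for two mass functions whose focal
# elements are frozensets of behaviors. Masses below are illustrative.

def combine(m1, m2):
    """Combine two DST mass functions with Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict  # normalization factor
    return {focal: w / k for focal, w in combined.items()}

FRAME = frozenset({"watching", "away", "distracted"})
m_phone = {frozenset({"watching"}): 0.6, FRAME: 0.4}
m_watch = {frozenset({"watching"}): 0.5,
           frozenset({"away"}): 0.2, FRAME: 0.3}
fused = combine(m_phone, m_watch)
```

Fusing the two sensors concentrates mass on "watching", which is how agreement between connected objects sharpens the behavior estimate.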
Procedia PDF Downloads 229
6501 Erosion of Culture through Democratization
Authors: Mladen Milicevic
Abstract:
This paper explores how the explosion of computer technologies has allowed for the democratization of many aspects of human activity that were in the past only available through institutionalized channels of production and distribution. We use the music recording industry as an example to illustrate this process, but the analogies to other activities and aspects of human life can easily be extrapolated from it.
Keywords: aura, democratization, music industry, music sharing, paradigm-shift
Procedia PDF Downloads 236
6500 The Impact of Political Leadership on Cameroon’s Economic Development From 2000 to 2023
Authors: Okpu Enoh Ndip Nkongho
Abstract:
The type of political leadership in place impacts a state's economic development or underdevelopment both directly and indirectly. One of the main challenges to Cameroon's economic development may be ineffective or misguided political leadership. The economy of the Cameroonian state has declined significantly due to a number of factors, including a lack of effective and feasible economic policies, an excessive reliance on crude oil, tribal politics, the threat of insurgency, bribery and corruption, violations of human rights, neglect of other sectors such as science, technology, education, and transportation, and a careless attitude on the part of administrators toward the general public. As a result, the standard of living has fallen, foreign exchange earnings have decreased, and the value of the Cameroonian currency has depreciated. This paper therefore focuses on the relationship between political leadership and economic development in Cameroon from 2000 to 2023 and offers suggestions for improving political leadership that will, in turn, get the country's economy back on track. The study employed a qualitative technique, with the framework for the investigation derived from the trait theory of leadership. On the basis of the above, the paper concludes that there is a lack of cooperation between the three branches of government in Cameroon, shown in situations where one branch operates independently of the others and refuses to function as a backup when needed. The study recommends that the Executive collaborate closely with the National Assembly to speed action on key legislation required to stimulate economic development. There is also a need for more clarity and consistency in the government's policy orientation. There is no doubt that Cameroon's current economic troubles are at least partially the result of a lack of economic policy leadership and confidence.
Keywords: politics, leadership, economic, development, Cameroon
Procedia PDF Downloads 53
6499 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e., networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity for communication across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model with both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with the RMT predictions.
The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
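The level-spacing comparison mentioned in this abstract amounts to normalizing the nearest-neighbour spacings of the resonances to unit mean and comparing their distribution with the (closed-system GOE) Wigner surmise P(s) = (π/2)·s·exp(−πs²/4). A minimal sketch with synthetic resonance values, not measured data:

```python
import math

def nn_spacings(levels):
    """Nearest-neighbour spacings normalized to unit mean."""
    xs = sorted(levels)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mean = sum(gaps) / len(gaps)
    return [g / mean for g in gaps]

def wigner_surmise(s):
    """GOE Wigner surmise for the spacing density P(s)."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

levels = [0.0, 1.1, 1.9, 3.2, 4.0, 5.3]  # synthetic resonances
spacings = nn_spacings(levels)
density_at_mean = wigner_surmise(1.0)
```

In practice one histograms `spacings` and overlays `wigner_surmise`; the abstract's generalisation to open systems modifies this closed-system form.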
Procedia PDF Downloads 173
6498 Effect of Class V Cavity Configuration and Loading Situation on the Stress Concentration
Authors: Jia-Yu Wu, Chih-Han Chang, Shu-Fen Chuang, Rong-Yang Lai
Abstract:
Objective: This study examined the stress distribution in teeth with different class V restorations under different loading situations and geometries by 3D finite element (FE) analysis. Methods: A series of FE models of mandibular premolars containing class V cavities was constructed using micro-CT. The class V cavities were assigned combinations of different cavity depths x occlusal-gingival heights: 1x2, 1x4, 2x2, and 2x4 mm. Three alveolar bone loss conditions were examined: 0, 1, and 2 mm. A 200 N force was exerted on the buccal cusp tip in various directions (vertical, V; obliquely 30° angled, O; oblique and parallel to the individual occlusal cavity wall, P). A 3D FE analysis was performed, and the von Mises stress was used to summarize the data on stress distribution and maximum stress. Results: The maximal stress did not vary across alveolar bone heights. For each geometry, the maximal stress was found at the bilateral corners of the cavity. The peak stress of the restorations was significantly higher under load P compared to loads V and O, while the latter two were similar. The 2x2 mm cavity exhibited significantly increased (2.88-fold) stress under load P compared to load V, followed by the 1x2 mm (2.11-fold), 2x4 mm (1.98-fold), and 1x4 mm (1.1-fold) cavities. Conclusion: Load direction has the greatest impact on the stress results, while the effect of alveolar bone loss is minor. A load direction parallel to the cavity wall may enhance stress concentration, especially in deep and narrow class V cavities.
Keywords: class V restoration, finite element analysis, loading situation, stress
Procedia PDF Downloads 243
6497 Trade Openness, Productivity Growth and Economic Growth: Nigeria’s Experience
Authors: S. O. Okoro
Abstract:
Some words become the catchphrase of a particular decade. Globalization, openness, and privatization are certainly among the most frequent encapsulations of the 1990s: the market is 'in', the state is 'out'. In the 1970s, there were many political economists who spoke of autarky as one possible response to global economic forces: be self-contained, go it alone, put up barriers to trans-nationalities, put in place an import-substitution industrialization policy, and grow domestic industries. In the 1990s, the emasculation of the state is by no means complete, but there is an acceptance that the state's power is circumscribed by forces beyond its control and potential leverage. Autarky is no longer a policy option. Nigeria, since its emergence as an independent nation, has evolved two macroeconomic management regimes, of the interventionist and market-friendly styles. This paper investigates Nigeria's growth performance over the periods incorporating these two regimes and finds that there is no structural break in Total Factor Productivity (TFP) growth and, besides, that TFP growth over the entire period of study, 1970-2012, is negligible; hence growth can only be achieved by unsustainable factor accumulation. Another important finding of this work is that the openness-human capital interaction term has a significant impact on TFP growth, but the sign of the estimated coefficient does not meet its theoretical expectation, because the negative coefficient on human capital outweighs the positive openness effect. The poor quality of human capital is considered to have given rise to this. Given these results, massive investment in the education sector is required. The investment should be targeted at reforms that go beyond mere structural reforms to a reform agenda that will improve the quality of human capital in Nigeria.
Keywords: globalization, emasculation, openness and privatization, total factor productivity
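TFP growth of the kind estimated in this abstract is conventionally computed as a Solow residual: output growth minus the factor-share-weighted growth of capital and labor. A minimal sketch; the capital share (alpha = 0.35) and the growth rates are hypothetical illustration values, not Nigeria's actual figures.

```python
# Growth-accounting (Solow residual) sketch:
#   g_A = g_Y - alpha * g_K - (1 - alpha) * g_L
# where g_Y, g_K, g_L are output, capital, and labor growth rates and
# alpha is the capital share of income (hypothetical here).

def tfp_growth(g_output, g_capital, g_labor, alpha=0.35):
    """Solow residual: the part of output growth not explained by inputs."""
    return g_output - alpha * g_capital - (1 - alpha) * g_labor

# Example: 4% output growth driven almost entirely by factor accumulation,
# leaving a negligible residual, as the abstract reports for 1970-2012.
g_a = tfp_growth(g_output=0.04, g_capital=0.06, g_labor=0.025)
```

With these illustrative numbers the residual is under 0.3 percentage points, i.e., almost all measured growth comes from (unsustainable) factor accumulation.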
Procedia PDF Downloads 242
6496 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System
Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli
Abstract:
Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive maintenance. On the other hand, proactive maintenance, which is further divided into preventive and predictive, focuses on maintaining components before a failure occurs to prevent expensive halts. Recently, the number of interacting components in a system has increased rapidly and, therefore, the structure of such systems has become more complex. This makes it difficult to reach the right maintenance decisions, so determining effective decisions plays a significant role. In multi-component systems, many methodologies and strategies can be applied when a component or a system has already broken down or when it is desired to proactively identify and avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. Components in the system are hidden, though partially observable to the decision maker, and they deteriorate over time. Several predefined policies under corrective, preventive and predictive maintenance strategies are considered to minimize the total maintenance cost in a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between the corrective and proactive maintenance cost is low, none of the proactive maintenance policies is significantly better than the corrective maintenance.
However, when the difference is increased, at least one policy parameter for each proactive maintenance strategy yields a significantly lower cost than corrective maintenance. Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability
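As an illustration of the corrective-versus-preventive comparison (a minimal age-replacement simulation, not the paper's Dynamic Bayesian Network model), hypothetical Weibull lifetimes and costs show how a preventive policy parameter can beat purely corrective maintenance when planned replacements are much cheaper than failures:

```python
import random

def cost_rate(policy_age, horizon=100000.0, mtbf_scale=10.0, shape=2.5,
              c_fail=10.0, c_prev=2.0, seed=1):
    """Long-run maintenance cost per unit time under an age-replacement policy.

    policy_age=float('inf') reduces to purely corrective maintenance.
    Lifetimes are Weibull (an aging component); all parameters are illustrative.
    """
    rng = random.Random(seed)
    t, cost = 0.0, 0.0
    while t < horizon:
        life = rng.weibullvariate(mtbf_scale, shape)
        if life < policy_age:          # failure before planned replacement
            t += life
            cost += c_fail
        else:                          # planned (cheaper) replacement
            t += policy_age
            cost += c_prev
    return cost / t

corrective = cost_rate(policy_age=float('inf'))
preventive = cost_rate(policy_age=5.0)
```

With a cheap preventive action (c_prev much lower than c_fail), the preventive policy dominates; shrinking that cost gap erodes the advantage, mirroring the result reported above.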
Procedia PDF Downloads 129
6495 Modelling the Yield Stress of Magnetorheological Fluids
Authors: Hesam Khajehsaeid, Naeimeh Alagheband
Abstract:
Magnetorheological fluids (MRFs) are a category of smart materials. They exhibit a reversible change from a Newtonian-like fluid to a semi-solid state upon application of an external magnetic field. In contrast to ordinary fluids, MRFs can tolerate shear stresses up to a threshold value called the yield stress, which strongly depends on the strength of the magnetic field, the magnetic particle volume fraction and temperature. Even beyond the yield, a magnetic field can increase MR fluid viscosity by several orders of magnitude. As yield stress is an important parameter in the design of MR devices, in this work, the effects of magnetic field intensity and magnetic particle concentration on the yield stress of MRFs are investigated. Four MRF samples with different particle concentrations are developed and tested through flow-ramp analysis to obtain the flow curves over a range of magnetic field intensities and shear rates. The viscosity of the fluids is determined by means of the flow curves. The yield stresses are then determined by means of the steady stress sweep method and modeled using a modified form of the dipole model as well as empirical models. The exponential distribution function is used to describe the orientation of particle chains in the dipole model under the action of the external magnetic field. Moreover, the modified dipole model results in a reasonable distribution of chains compared to previous similar models. Keywords: magnetorheological fluids, yield stress, particles concentration, dipole model
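One simple way to extract a yield stress from a flow curve (a Bingham-plastic fit, shown here as a sketch rather than the paper's steady stress sweep or dipole model) is to regress shear stress on shear rate and read off the intercept:

```python
def bingham_fit(shear_rate, shear_stress):
    """Least-squares fit of the Bingham model tau = tau_y + eta_p * gamma_dot.

    Returns (tau_y, eta_p); the intercept tau_y is the dynamic yield stress
    and eta_p the plastic viscosity.
    """
    n = len(shear_rate)
    mx = sum(shear_rate) / n
    my = sum(shear_stress) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(shear_rate, shear_stress))
    sxx = sum((x - mx) ** 2 for x in shear_rate)
    eta_p = sxy / sxx
    tau_y = my - eta_p * mx
    return tau_y, eta_p

# Hypothetical flow-curve data at one magnetic field intensity (1/s, Pa).
rates = [10.0, 50.0, 100.0, 200.0, 400.0]
stresses = [1205.0, 1225.0, 1250.0, 1300.0, 1400.0]
tau_y, eta_p = bingham_fit(rates, stresses)
```

Repeating the fit at each field intensity and particle concentration gives the yield-stress surface that the dipole and empirical models are then fitted against.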
Procedia PDF Downloads 179
6494 Effects of the Fractional Order on Nanoparticles in Blood Flow through the Stenosed Artery
Authors: Mohammed Abdulhameed, Sagir M. Abdullahi
Abstract:
In this paper, based on the applications of nanoparticles, blood flow carrying nanoparticles through a stenosed artery is studied. The blood is acted upon by periodic body acceleration, an oscillating pressure gradient and an external magnetic field. The mathematical formulation is based on the Caputo-Fabrizio fractional derivative without singular kernel. The model of ordinary blood, corresponding to time-derivatives of integer order, is obtained as a limiting case. Analytical solutions of the blood velocity and temperature distribution are obtained by means of the Hankel and Laplace transforms. Effects of the order of the Caputo-Fabrizio time-fractional derivative and of three different nanoparticles, i.e. Fe3O4, TiO4 and Cu, are studied. The results highlight that models with fractional derivatives bring significant differences compared to the ordinary model. It is observed that the addition of Fe3O4 nanoparticles reduces the resistance impedance of the blood flow and the temperature distribution through bell-shaped stenosed arteries as compared to TiO4 and Cu nanoparticles. On entering the stenosed area, blood temperature increases slightly, then rises considerably and reaches its maximum value in the stenosis throat. The shear stress varies from a constant value in the area without stenosis and is higher in the layers located farther from the longitudinal axis of the artery. This fact can be important for some clinical applications in therapeutic procedures. Keywords: nanoparticles, blood flow, stenosed artery, mathematical models
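For reference, the Caputo-Fabrizio time-fractional derivative without singular kernel that the formulation relies on is commonly written as (with M(α) a normalization function satisfying M(0) = M(1) = 1):

```latex
{}^{\mathrm{CF}}D_t^{\alpha} f(t)
  = \frac{M(\alpha)}{1-\alpha}
    \int_0^t f'(\tau)\,
    \exp\!\left(-\frac{\alpha\,(t-\tau)}{1-\alpha}\right)\mathrm{d}\tau,
  \qquad 0 < \alpha < 1 .
```

As α → 1⁻ the exponential kernel concentrates at τ = t and the classical first derivative is recovered, which is the integer-order limiting case the abstract mentions.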
Procedia PDF Downloads 267
6493 Concept Mapping to Reach Consensus on an Antibiotic Smart Use Strategy Model to Promote and Support Appropriate Antibiotic Prescribing in a Hospital, Thailand
Authors: Phenphak Horadee, Rodchares Hanrinth, Saithip Suttiruksa
Abstract:
Inappropriate use of antibiotics has occurred in several hospitals in Thailand. Drug use evaluation (DUE) is one strategy to overcome this difficulty. However, most community hospitals still encounter incomplete evaluation, resulting in overuse of antibiotics at high cost. Consequently, drug-resistant bacteria have been rising due to inappropriate antibiotic use. The aim of this study was to involve stakeholders in conceptualizing, developing, and prioritizing a feasible intervention strategy to promote and support appropriate antibiotic prescribing in a community hospital, Thailand. Four antibiotics were studied: Meropenem, Piperacillin/tazobactam, Amoxicillin/clavulanic acid, and Vancomycin. The study was conducted over the 1-year period between March 1, 2018, and March 31, 2019, in a community hospital in the northeastern part of Thailand. Concept mapping was used with a purposive sample including doctors (one of whom was an administrator), pharmacists, and nurses involved in the drug use evaluation of antibiotics. In-depth interviews with each participant and survey research were conducted to identify the causes of inappropriate antibiotic use in the drug use evaluation system. Seventy-seven percent of DUE reports indicated appropriate antibiotic prescribing, which still did not reach the goal of 80 percent appropriateness. Meropenem led the other antibiotics in inappropriate prescribing. The causes of the unsuccessful DUE program were classified into three themes: personnel; lack of public relations and communication; and unsupportive policy and impractical regulations. During the first meeting, stakeholders (n = 21) generated candidate interventions.
During the second meeting, participants, largely the same group as in the first meeting (n = 21), were requested to independently rate the feasibility and importance of each idea and to categorize the ideas into relevant clusters to facilitate multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the idea list, cluster list, point map, point rating map, cluster map, and cluster rating map. All of these were distributed to participants (n = 21) during the third meeting to reach consensus on an intervention model. The final proposed intervention strategy included 29 feasible and crucial interventions in seven clusters: development of an information technology system; establishing policy and translating it into an action plan; proactive public relations on the policy, action plan and workflow; cooperation of multidisciplinary teams in drug use evaluation; work review and evaluation with performance reporting; promoting and developing professional and clinical skills for staff through training programs; and developing a practical drug use evaluation guideline for antibiotics. These interventions are relevant to and fit several intervention strategies for antibiotic stewardship programs in many international organizations, such as participation of multidisciplinary teams, developing information technology to support antibiotic smart use, and communication. These interventions were prioritized for implementation over a 1-year period. Once the feasibility of each activity or plan is established, the proposed program could be applied and integrated into hospital policy after evaluation. Effective interventions could then be promoted to other community hospitals to support antibiotic smart use. Keywords: antibiotic, concept mapping, drug use evaluation, multidisciplinary teams
Procedia PDF Downloads 120
6492 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting
Authors: Kahlil Fredrick Cui, Marissa Pastor
Abstract:
Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred therein. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that would project the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future. Keywords: complex networks, correlations, earthquake, hazard assessment
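A minimal sketch of the weighted, directed network construction described above, with hypothetical grid-cell IDs and successive-event linking standing in for the paper's correlation metric:

```python
from collections import defaultdict

def build_seismic_network(event_cells):
    """Directed, weighted network of seismic grid cells.

    Nodes are cells that hosted at least one event; an edge i -> j is
    created (and its weight incremented) whenever an event in cell i is
    immediately followed by one in cell j. Linking successive events is a
    simplified stand-in for a proper correlation metric.
    """
    weight = defaultdict(int)
    for a, b in zip(event_cells, event_cells[1:]):
        if a != b:
            weight[(a, b)] += 1
    in_strength = defaultdict(int)
    out_strength = defaultdict(int)
    for (a, b), w in weight.items():
        out_strength[a] += w
        in_strength[b] += w
    return weight, dict(in_strength), dict(out_strength)

# Hypothetical catalogue: successive events mapped to grid-cell IDs.
events = ["A", "B", "A", "C", "A", "B", "B", "A"]
edges, s_in, s_out = build_seismic_network(events)
```

The in- and out-strength dictionaries are the raw material for the power-law distributions and degree-degree correlations the abstract analyzes.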
Procedia PDF Downloads 212
6491 Analyzing the Connection between Productive Structure and Communicable Diseases: An Econometric Panel Study
Authors: Julio Silva, Lia Hasenclever, Gilson G. Silva Jr.
Abstract:
The aim of this paper is to check for possible convergence in health measures (age-standardized rates of morbidity and mortality) for communicable diseases between developed and developing countries, conditional on productive structure features. Understanding the interrelations between health patterns and economic development is particularly important in the context of low- and middle-income countries, where economic development comes along with deep social inequality. Developing countries with less diversified productive structures (measured through the complexity index) but highly heterogeneous inter-sectorial labor productivity (proxied by the inter-sectorial coefficient of variation of labor productivity) have, on average, lower health levels for communicable diseases than developed countries with highly diversified productive structures and low labor market heterogeneity. Structural heterogeneity and productive diversification may influence health levels even after accounting for per capita income. We set up a panel of 139 countries from 1995 to 2015, joining data on economic development, health, health system coverage, and environmental and socioeconomic aspects. This information was obtained from the World Bank, the International Labour Organization, the Atlas of Economic Complexity, the United Nations (Development Report) and the Institute for Health Metrics and Evaluation Database. Evidence from econometric panel models shows that the level of communicable diseases has a positive relationship with structural heterogeneity, even considering other factors such as per capita income. On the other hand, the recent process of convergence in terms of communicable diseases has been driven by other factors not directly related to productive structure, such as health system coverage and environmental aspects. This evidence suggests joint dynamics between the unequal distribution of communicable diseases and countries' productive structures.
This set of evidence is important for public policy in meeting the health aims of the Millennium Development Goals. It also highlights structural change as fundamental to shifting health levels in terms of communicable diseases and can contribute to the debate on the relation between economic development and changing health patterns. Keywords: economic development, inequality, population health, structural change
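The panel evidence above rests on controlling for time-invariant country effects. A minimal fixed-effects (within) estimator on hypothetical data illustrates the demeaning step; the actual study uses far richer specifications:

```python
from collections import defaultdict

def within_estimator(panel):
    """Fixed-effects (within) slope of y on x.

    panel: list of (entity, x, y). Demeans both variables inside each
    entity, then runs pooled OLS on the demeaned data, which sweeps out
    time-invariant country effects.
    """
    groups = defaultdict(list)
    for entity, x, y in panel:
        groups[entity].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Hypothetical two-country panel: (country, structural heterogeneity, morbidity).
data = [("A", 1.0, 10.0), ("A", 2.0, 12.0), ("A", 3.0, 14.0),
        ("B", 1.0, 30.0), ("B", 2.0, 32.0), ("B", 3.0, 34.0)]
beta = within_estimator(data)
```

Note that country B's much higher morbidity level does not bias the slope: the within transformation isolates the common within-country relationship.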
Procedia PDF Downloads 144
6490 Measuring Human Perception and Negative Elements of Public Space Quality Using Deep Learning: A Case Study of Area within the Inner Road of Tianjin City
Authors: Jiaxin Shi, Kaifeng Hao, Qingfan An, Zeng Peng
Abstract:
Due to a lack of data sources and data processing techniques, it has always been difficult to quantify public space quality, which includes urban construction quality and how it is perceived by people, especially in large urban areas. This study proposes a quantitative research method based on the consideration of the emotional and physical health of the built environment. It highlights the low quality of public areas in Tianjin, China, where there are many negative elements. Deep learning is then used to measure how people perceive urban areas. First, this work proposes a deep learning model that simulates how people perceive the quality of urban construction. Second, we perform semantic segmentation on street images to identify visual elements influencing scene perception. Finally, this study correlates the scene perception score with the proportion of visual elements to determine the surrounding environmental elements that influence scene perception. Using transfer learning on a small-scale labeled Tianjin street-view data set, this study trains five negative-space discriminant models in order to explore the distribution of negative space and opportunities for quality improvement in urban streets. It then uses all Tianjin street-level imagery to make predictions and calculate the proportion of negative space. Visualizing the spatial distribution of negative space along the Tianjin Inner Ring Road reveals that the negative elements are mainly found close to the five key districts. The map of Tianjin was combined with the experimental data to perform the visual analysis. Based on the emotional assessment, the distribution of negative elements, and the direction of street guidelines, we suggest guidance content and design strategies for addressing negative phenomena in Tianjin street space along the two dimensions of perception and substance.
This work demonstrates the use of deep learning techniques to understand how people perceive high-quality urban construction, and it complements both theory and practice in urban planning. It illustrates the connection between human perception and the actual physical public space environment, enabling researchers to design urban interventions. Keywords: human perception, public space quality, deep learning, negative elements, street images
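The proportion-of-negative-space step reduces to aggregating per-image model predictions. A sketch of that aggregation (the district names and predictions below are hypothetical placeholders, not results from the study):

```python
from collections import Counter

def negative_space_share(labels):
    """Share of street-view images flagged negative, overall and per district.

    labels: list of (district, is_negative) pairs, a stand-in for the
    per-image predictions of the discriminant models.
    """
    total = Counter()
    neg = Counter()
    for district, is_negative in labels:
        total[district] += 1
        if is_negative:
            neg[district] += 1
    per_district = {d: neg[d] / total[d] for d in total}
    overall = sum(neg.values()) / sum(total.values())
    return overall, per_district

# Hypothetical predictions for two districts.
preds = [("Heping", True), ("Heping", False), ("Nankai", True),
         ("Nankai", True), ("Nankai", False), ("Heping", False)]
overall, by_district = negative_space_share(preds)
```

Mapping the per-district shares back onto the city map gives the kind of spatial visualization of negative space described above.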
Procedia PDF Downloads 115
6489 Impacts of Present and Future Climate Variability on Forest Ecosystem in Mediterranean Region
Authors: Orkan Ozcan, Nebiye Musaoglu, Murat Turkes
Abstract:
Climate change is widely recognized as a real, pressing and significant global problem. The concept of ‘climate change vulnerability’ helps us to better comprehend the cause/effect relationships behind climate change and its impact on human societies, socioeconomic sectors, and physiographical and ecological systems. In this study, multifactorial spatial modeling was applied to evaluate the vulnerability of a Mediterranean forest ecosystem to climate change. As a result, the geographical distribution of the final Environmental Vulnerability Areas (EVAs) of the forest ecosystem is based on the estimated final Environmental Vulnerability Index (EVI) values. This revealed that, at current levels of environmental degradation and under present physical, geographical, policy enforcement and socioeconomic conditions, the area with a ‘very low’ vulnerability degree covered mainly the town, its surrounding settlements and the agricultural lands found mainly over the low and flat travertine plateau and the plains to the east and southeast of the district. The spatial magnitude of the EVAs over the forest ecosystem under the current environmental degradation was also determined. This revealed that the EVAs classed as ‘very low’ account for 21% of the total area of the forest ecosystem, those classed as ‘low’ account for 36%, those classed as ‘medium’ account for 20%, and those classed as ‘high’ account for 24%. Based on regionally averaged future climate assessments and projected future climate indicators, both the study site and the western Mediterranean sub-region of Turkey will probably become associated with a drier, hotter, more continental and more water-deficient climate. This analysis holds true for all future scenarios, with the exception of RCP4.5 for the period from 2015 to 2030.
However, the dry-subhumid climate presently dominating this sub-region and the study area shows a potential to shift towards a drier, semiarid climate in the period between 2031 and 2050 according to the RCP8.5 high-emission scenario. All the observed and estimated results and assessments summarized in the study show clearly that the densest forest ecosystem in the southern part of the study site, which is characterized mainly by Mediterranean coniferous and some mixed forest and maquis vegetation, will very likely be influenced by medium and high degrees of vulnerability to future environmental degradation, climate change and variability. Keywords: forest ecosystem, Mediterranean climate, RCP scenarios, vulnerability analysis
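The EVI-to-EVA mapping described above amounts to a weighted composite index followed by classification. A generic sketch with hypothetical factor scores, weights, and cut-offs (the paper's actual factors and thresholds are not reproduced here):

```python
def environmental_vulnerability_index(indicators, weights):
    """Weighted composite EVI in [0, 1] from normalized indicator scores.

    indicators/weights: dicts keyed by factor name. A generic weighted
    aggregation, not the study's exact factor list or weighting.
    """
    total_w = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in indicators) / total_w

def classify(evi):
    """Map an EVI value to a vulnerability degree (illustrative cut-offs)."""
    for bound, label in [(0.25, "very low"), (0.5, "low"),
                         (0.75, "medium"), (1.01, "high")]:
        if evi < bound:
            return label

# Hypothetical normalized scores for one grid cell, equally weighted.
scores = {"degradation": 0.6, "physical": 0.4, "policy": 0.8, "socioeconomic": 0.2}
w = {"degradation": 1.0, "physical": 1.0, "policy": 1.0, "socioeconomic": 1.0}
evi = environmental_vulnerability_index(scores, w)
```

Applying this per grid cell and mapping the resulting classes yields the kind of EVA distribution (very low / low / medium / high shares) reported in the abstract.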
Procedia PDF Downloads 353
6488 Colloids and Heavy Metals in Groundwaters: Tangential Flow Filtration Method for Study of Metal Distribution on Different Sizes of Colloids
Authors: Jiancheng Zheng
Abstract:
When metals are released into water from mining activities, they undergo chemical, physical and biological changes and may then become more mobile and transportable along the waterway from their original sites. Natural colloids, including both organic and inorganic entities, occur naturally in any aquatic environment, with sizes in the nanometer range. Natural colloids in a water system play an important role, quite often a key role, in binding and transporting compounds. When assessing and evaluating metals in natural waters, their sources, mobility, fate, and distribution patterns in the system are the major concerns from the point of view of assessing environmental contamination and pollution during resource development. There are a few ways to quantify colloids and, accordingly, to study how metals distribute across different sizes of colloids. Current research shows that the presence of colloids can enhance the transport of some heavy metals in water, while heavy metals may also influence the transport of colloids when cations alter the colloids and/or the ionic strength of the water system changes. Therefore, studies of the relationship between different sizes of colloids and different metals in a water system are needed, as natural colloids in water systems are complex mixtures of organic, inorganic, and biological materials. Their stability can be sensitive to changes in shape, phase, hardness and functionality caused by processes such as coagulation and deposition, as well as by chemical, physical, and biological reactions. Because the adsorption of metal contaminants on colloid surfaces is closely related to colloid properties, it is desirable to fractionate water samples as soon as possible after collection in the natural environment in order to avoid changes to the samples during transportation and storage.
For this reason, this study carried out groundwater sample processing in the field, using Prep/Scale tangential flow filtration systems with 3-level cartridges (1 kDa, 10 kDa and 100 kDa). Groundwater samples from seven sites at Fort McMurray, Alberta, Canada, were fractionated during the 2015 field sampling season. All samples were processed within 3 hours after they were taken. Preliminary results show that although the distribution pattern of metals on colloids may vary across samples taken from different sites, some elements often tend to associate with larger colloids (such as Fe and Re), some with finer colloids (such as Sb and Zn), while others occur mainly in the dissolved form (such as Mo and Be). This information is useful for evaluating and projecting the fate and mobility of different metals in groundwater and possibly in other environmental water systems. Keywords: metal, colloid, groundwater, mobility, fractionation, sorption
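With a three-level cartridge train, the metal bound to each colloid size class follows by difference between successive permeate concentrations. A sketch with hypothetical Fe concentrations (not measurements from the study):

```python
def size_fractions(c_total, c_perm_100k, c_perm_10k, c_perm_1k):
    """Apportion a metal over colloid size classes by difference.

    Inputs are concentrations in the raw water and in the permeates of the
    100 kDa, 10 kDa and 1 kDa cartridges; the <1 kDa permeate is taken as
    the operationally 'dissolved' form. Any consistent unit works.
    """
    return {
        ">100 kDa": (c_total - c_perm_100k) / c_total,
        "10-100 kDa": (c_perm_100k - c_perm_10k) / c_total,
        "1-10 kDa": (c_perm_10k - c_perm_1k) / c_total,
        "dissolved (<1 kDa)": c_perm_1k / c_total,
    }

# Hypothetical Fe concentrations (ug/L) along the filtration train.
frac = size_fractions(c_total=100.0, c_perm_100k=40.0,
                      c_perm_10k=25.0, c_perm_1k=10.0)
```

An element like Fe, which tends to larger colloids, would show most of its mass in the >100 kDa class, whereas a mainly dissolved element like Mo would concentrate in the <1 kDa fraction.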
Procedia PDF Downloads 363
6487 An Investigation of Performance Versus Security in Cognitive Radio Networks with Supporting Cloud Platforms
Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos
Abstract:
The growth of wireless devices affects the availability of limited frequencies or spectrum bands, since spectrum bands are a natural resource that cannot be expanded. Many studies of the available spectrum have been done, and they show that licensed frequencies are idle most of the time. Cognitive radio is one solution to this problem. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security. It is a problem since CRNs use radio frequencies as the transmission medium and thus share the same issues as wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the trade-off between performance and security in CRNs with supporting cloud platforms. Furthermore, Queuing Network Models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with or without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled. Keywords: performance vs. security, cognitive radio networks, cloud platforms, GE-type distribution
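The GE-type distribution mentioned above is, in the two-phase form commonly used in queueing work of this kind, a mixture of a zero delay (a batched arrival) with probability 1 − τ and an exponential delay otherwise, with τ = 2/(C² + 1) set by the squared coefficient of variation. A minimal sampling sketch under that assumption:

```python
import random

def ge_sample(rng, mean, scv):
    """One draw from a GE-type distribution with the given mean and SCV.

    Two-phase form assumed here: with probability 1 - tau the delay is zero
    (a 'batch' arrival), otherwise exponential with rate tau/mean, where
    tau = 2 / (scv + 1). scv >= 1 models bursty traffic.
    """
    tau = 2.0 / (scv + 1.0)
    if rng.random() >= tau:
        return 0.0
    return rng.expovariate(tau / mean)

rng = random.Random(42)
samples = [ge_sample(rng, mean=1.0, scv=4.0) for _ in range(200000)]
sample_mean = sum(samples) / len(samples)
```

The mixture preserves the target mean while inflating the variance to scv times the squared mean, which is what lets a simple two-moment model capture bursty inter-arrival and service times.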
Procedia PDF Downloads 346
6486 Interlinkages and Impacts of the Indian Ocean on the Nile River
Authors: Zeleke Ayalew Alemu
Abstract:
The Indian Ocean and the Nile River play significant roles in shaping the hydrological and ecological systems of the regions they traverse. This study explores the interlinkages and impacts of the Indian Ocean on the Nile River, highlighting key factors such as water flow, nutrient distribution, climate patterns, and biodiversity. The Indian Ocean serves as a major source of moisture for the Nile River, contributing to its annual flood cycle and sustaining the river's ecosystem. The Indian Ocean's monsoon winds influence the amount of rainfall received in East Africa, which directly impacts the Nile's water levels. These monsoonal patterns create a vital connection between the Indian Ocean and the Nile, affecting agricultural productivity, freshwater availability, and overall river health. The Indian Ocean also influences the nutrient levels in the Nile River. Coastal upwelling driven by oceanic currents brings nutrient-rich waters from the depths of the ocean to the surface. These nutrients are transported by ocean currents towards the Red Sea and subsequently enter the Nile. This influx of nutrients supports the growth of plankton, which forms the basis of the river's food web and sustains various aquatic species. Additionally, the Indian Ocean's climate patterns, such as El Niño and Indian Ocean Dipole events, exert influence on the Nile River basin. El Niño, for example, can result in drought conditions, reduced precipitation, and altered river flows, impacting agricultural activities and water resource management along the Nile. Indian Ocean Dipole events can influence the rainfall distribution in East Africa, further impacting the Nile's water levels and ecosystem dynamics. The Indian Ocean's biodiversity is interconnected with the Nile River's ecological system. Many species that inhabit the Indian Ocean, such as migratory birds and marine mammals, migrate along the Nile River basin, utilizing its resources for feeding and breeding purposes.
The health of the Indian Ocean's ecosystem thus indirectly affects the biodiversity and ecological balance of the Nile River. The Indian Ocean plays a crucial role in shaping the dynamics of the Nile River. Its influence on water flow, nutrient distribution, climate patterns, and biodiversity highlights the complex interdependencies between these two important water bodies. Understanding the interconnectedness and impacts of the Indian Ocean on the Nile is essential for effective water resource management and conservation efforts in the region. Keywords: water, management, environment, planning
Procedia PDF Downloads 98
6485 Business Continuity Risk Review for a Large Petrochemical Complex
Authors: Michel A. Thomet
Abstract:
A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex which included sixteen process units and seven feeds and intermediate streams. All the feeds and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feeds and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR. The model runs span ten years or more, and the runs are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit (the percentage of hours in a year when the unit is running in nominal conditions). One of the objectives of the study was to investigate whether the storage capacity of each of the feeds and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and running the simulation to see whether the OSFs improved and by how much. Other objectives were to see whether the failure of the utilities was an important factor in the overall OSF, and what could be done to reduce the failure rates through redundant equipment. Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF
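A minimal sketch of how an On-Stream Factor can be estimated for a single unit by alternating-renewal simulation, with hypothetical exponential failure and restore times (the study's model is far richer, with storage tanks, pipe sections, and utilities):

```python
import random

def on_stream_factor(mtbf, mttr, years=10, runs=20, seed=7):
    """Estimate a unit's On-Stream Factor by alternating up/down cycles.

    Failure and restore times are drawn exponentially with the given MTBF
    and MTTR (hours); the analytic value is mtbf / (mtbf + mttr). All
    parameter values below are illustrative.
    """
    rng = random.Random(seed)
    horizon = years * 8760.0
    up_total = sim_total = 0.0
    for _ in range(runs):
        t = 0.0
        while t < horizon:
            up = rng.expovariate(1.0 / mtbf)       # time to next failure
            down = rng.expovariate(1.0 / mttr)     # time to restore
            up_total += min(up, horizon - t)       # truncate at horizon
            t += up
            if t < horizon:
                t += down
        sim_total += horizon
    return up_total / sim_total

osf = on_stream_factor(mtbf=2000.0, mttr=100.0)
```

Repeating the run several times and averaging, as here, is the same variance-reduction idea the study uses to obtain statistically relevant OSF estimates.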
Procedia PDF Downloads 219
6484 Deconstructing Reintegration Services for Survivors of Human Trafficking: A Feminist Analysis of Australian and Thai Government and Non-Government Responses
Authors: Jessica J. Gillies
Abstract:
Awareness of the tragedy that is human trafficking has increased exponentially over the past two decades. The four pillars widely recognised as global solutions to the problem are prevention, prosecution, protection, and partnership between government and non-government organisations. While ‘sex-trafficking’ initially received major attention, this focus has shifted to other industries that conceal broader experiences of exploitation. However, within the regions of focus for this study, namely Australia and Thailand, trafficking for the purpose of sexual exploitation remains the commonly uncovered narrative of criminal justice investigations. In these regions anti-trafficking action is characterised by government-led prevention and prosecution efforts; whereas protection and reintegration practices have received criticism. Typically, non-government organisations straddle the critical chasm between policy and practice; therefore, they are perfectly positioned to contribute valuable experiential knowledge toward understanding how both sectors can support survivors in the post-trafficking experience. The aim of this research is to inform improved partnerships throughout government and non-government post-trafficking services by illuminating gaps in protection and reintegration initiatives. This research will explore government and non-government responses to human trafficking in Thailand and Australia, in order to understand how meaning is constructed in this context and how the construction of meaning effects survivors in the post-trafficking experience. A qualitative, three-stage methodology was adopted for this study. The initial stage of enquiry consisted of a discursive analysis, in order to deconstruct the broader discourses surrounding human trafficking. The data included empirical papers, grey literature such as publicly available government and non-government reports, and anti-trafficking policy documents. 
The second and third stages of enquiry will attempt to further explore the findings of the discourse analysis and will focus more specifically on protection and reintegration in Australia and Thailand. Stages two and three will incorporate process observations in government and non-government survivor support services, and semi-structured interviews with employees and volunteers within these settings. Two key findings emerged from the discursive analysis. The first exposed conflicting feminist arguments embedded throughout anti-trafficking discourse. Informed by conflicting feminist discourses on sex work, a discursive relationship has been constructed between sex-industry policy and anti-trafficking policy. In response to this finding, data emerging from the process observations and semi-structured interviews will be interpreted using a feminist theoretical framework. The second finding progresses from the construction identified in the first. The discursive construction of sex-trafficking appears to have influenced perceptions of the legitimacy of survivors, and therefore the support they receive in the post-trafficking experience. For example, women who willingly migrate for employment in the sex industry and on arrival are faced with exploitative conditions are not perceived to be as deserving of support as women who are not coerced but physically forced into such circumstances, yet both meet the criteria for victims of human trafficking. The forthcoming study is intended to contribute toward building knowledge and understanding around the implications of this construction of legitimacy, and to contextualise this in reference to government-led protection and reintegration support services for survivors in the post-trafficking experience. Keywords: Australia, government, human trafficking, non-government, reintegration, Thailand
Procedia PDF Downloads 112
6483 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing
Authors: S. Bouhouche, R. Drai, J. Bast
Abstract:
This paper is concerned with a method for the uncertainty evaluation of steel sample content using the X-Ray Fluorescence method. The considered method of analysis is a comparative technique based on X-Ray Fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman Filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation in steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied to construct an accurate computing procedure for uncertainty measurement. Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement
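A scalar Kalman measurement update illustrates the fusion step at the core of the approach (toy numbers; the paper's extended filter estimates calibration-model parameters rather than a single scalar):

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update.

    x, p: prior state estimate and variance; z, r: measurement and its
    variance. Returns the posterior estimate and variance. In the paper's
    setting the 'state' would be a calibration-model parameter and z a
    reference-sample measurement; here everything is a toy scalar.
    """
    k = p / (p + r)              # Kalman gain
    x_post = x + k * (z - x)
    p_post = (1.0 - k) * p
    return x_post, p_post

# Fuse a prior Mn content estimate (wt%, hypothetical) with one XRF reading.
x, p = 1.00, 0.04               # prior: 1.00 wt% with variance 0.04
z, r = 1.20, 0.01               # measurement: 1.20 wt% with variance 0.01
x, p = kalman_update(x, p, z, r)
```

The posterior variance is always smaller than both the prior and the measurement variance, which is the shrinking uncertainty budget that the MCMC step then characterizes more fully as a distribution.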
Procedia PDF Downloads 283