Search results for: fuzzy model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16840

9250 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring

Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng

Abstract:

Adverse intrauterine stimuli during critical or sensitive periods in early life may lead to health risks not only later in the life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia. The direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as our founders, to exclude postnatal diabetic influence on gametes and thereby investigate the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers. All mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test and the homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001), while none showed impaired insulin sensitivity. Body weight of F1-GDM mice did not differ significantly from that of control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all the F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal glucose tolerance individuals vs. control, p < 0.05 for glucose intolerance individuals vs. control).
All the F2-GDM offspring exhibited higher ITT curves than controls (p < 0.001 for normal glucose tolerance individuals, p < 0.05 for glucose intolerance individuals, vs. control). F2-GDM offspring had higher body weight than control mice (p < 0.001 for normal glucose tolerance individuals, p < 0.001 for glucose intolerance individuals, vs. control). While impaired glucose tolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male generation of healthy F1-GDM fathers showed insulin resistance, increased body weight and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently; thus, F1 and F2 offspring demonstrated distinct metabolic dysfunction phenotypes. Moreover, intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
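The HOMA-IR index reported above is conventionally computed from fasting glucose and fasting insulin. A minimal sketch of the standard formula (the input values below are illustrative, not data from the study):

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis Model Assessment of insulin resistance:
    HOMA-IR = (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Illustrative values only: 5.0 mmol/L glucose, 10 uU/mL insulin
print(round(homa_ir(5.0, 10.0), 2))  # 2.22
```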

Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring

Procedia PDF Downloads 229
9249 Triangular Libration Points in the R3bp under Combined Effects of Oblateness, Radiation and Power-Law Profile

Authors: Babatunde James Falaye, Shi Hai Dong, Kayode John Oyewumi

Abstract:

We study the effects of oblateness up to J4 of the primaries and a power-law density profile (PDP) on the linear stability of the libration locations of an infinitesimal mass within the framework of the restricted three-body problem (R3BP), using a more realistic model in which a disc with a PDP rotates around the common center of mass of the system with perturbed mean motion. The existence and stability of the triangular equilibrium points have been explored. It has been shown that the triangular equilibrium points are stable for 0 < μ < μc and unstable for μc ≤ μ ≤ 1/2, where μc denotes the critical mass parameter. We find that the oblateness up to J2 of the primaries and the radiation reduce the stability range, while the oblateness up to J4 of the primaries increases the size of the stability region, both when the PDP is considered and when it is ignored. The PDP reduces μc by about ≈ 0.01 in the application to the Earth-Moon and Jupiter-Moons systems. We find that the combined effects of the perturbations have a stabilizing proclivity. However, the oblateness up to J2 of the primaries and the radiation of the primaries tend toward instability, while the coefficients up to J4 of the primaries have a stabilizing predisposition. In the limiting cases, obtained by setting the appropriate parameter(s) to zero, our results are in excellent agreement with those obtained previously. Libration points play a very important role in space missions, and as a consequence our results have practical applications in space dynamics and related areas. The model may be applied to study the navigation and station-keeping operations of a spacecraft (infinitesimal mass) around the Jupiter (more massive)-Callisto (less massive) system, where the PDP accounts for the circumsolar ring of asteroidal dust, which has a cloud of dust permanently in its wake.
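For the classical, unperturbed R3BP, the critical mass parameter is Routh's value; the perturbed μc studied in the abstract shifts away from this benchmark through the oblateness, radiation and PDP corrections. A quick sketch of the unperturbed benchmark only:

```python
import math

# Classical R3BP (no perturbations): the triangular points L4/L5 are linearly
# stable for 0 < mu < mu_c, with Routh's critical mass ratio:
mu_c = 0.5 * (1.0 - math.sqrt(23.0 / 27.0))
print(round(mu_c, 5))  # ~0.03852

# Earth-Moon mass parameter mu = m2 / (m1 + m2):
mu_earth_moon = 0.01215
print(mu_earth_moon < mu_c)  # True: L4/L5 are stable in the unperturbed problem
```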

Keywords: libration points, oblateness, power-law density profile, restricted three-body problem

Procedia PDF Downloads 305
9248 The Impact of an Improved Strategic Partnership Programme on Organisational Performance and Growth of Firms in the Internet Protocol Television and Hybrid Fibre-Coaxial Broadband Industry

Authors: Collen T. Masilo, Brane Semolic, Pieter Steyn

Abstract:

The Internet Protocol Television (IPTV) and Hybrid Fibre-Coaxial (HFC) Broadband industrial sector landscape is rapidly changing, and organisations within the industry need to stay competitive by exploring new business models so that they can offer new services and products to customers. The business challenge in this industrial sector is meeting or exceeding high customer expectations across multiple content delivery modes. The increasing challenges in the IPTV and HFC broadband industrial sector encourage service providers to form strategic partnerships with key suppliers, marketing partners, advertisers, and technology partners. The need to form enterprise collaborative networks poses a challenge for any organisation in this sector: selecting the right strategic partners who will ensure that the organisation's services and products are marketed in new markets, that customers are efficiently supported by meeting and exceeding their expectations, and that the partners represent the organisation in a positive manner and contribute to improving its performance. Companies in the IPTV and HFC broadband industrial sector tend to form informal partnerships with suppliers, vendors, system integrators and technology partners. Generally, partnerships are formed without thorough analysis of the real reason a company is forming collaborations, without proper evaluation of prospective partners using specific selection criteria, and with ineffective performance monitoring of partners to ensure that a firm gains real long-term benefits from its partners and gains competitive advantage. Similar tendencies are illustrated in the research case study, which is based on Skyline Communications, a global leader in end-to-end, multi-vendor network management and operational support systems (OSS) solutions.
The organisation's flagship product is the DataMiner network management platform, used by many operators across multiple industries; it can be described as a smart system that intelligently manages complex technology ecosystems for its customers in the IPTV and HFC broadband industry. The approach of the research is to develop the most efficient business model that can be deployed to improve a strategic partnership programme, in order to significantly improve the performance and growth of organisations participating in a collaborative network in the IPTV and HFC broadband industrial sector. This involves proposing and implementing a new strategic partnership model and its main features within the industry, which should bring about significant benefits for all involved companies in achieving added value and an optimal growth strategy. The proposed business model has been developed based on research into existing relationships, value chains and business requirements in this industrial sector, and validated at 'Skyline Communications'. The outputs of the business model have been demonstrated and evaluated in the research business case study of the IPTV and HFC broadband service provider 'Skyline Communications'.

Keywords: growth, partnership, selection criteria, value chain

Procedia PDF Downloads 115
9247 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation systems to aiding in driving autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to help the vehicle reach its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network have short lifespans, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, since a conventional static way of managing the data does not work in fog networks. The proposed solution discusses a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog node using reinforcement learning, so that access to the data is determined dynamically based on the requests.
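A minimal sketch of sensitivity-level gating of reads, with hypothetical names and levels (the paper's actual protocol also involves reinforcement learning and data replication across nodes, which are omitted here):

```python
from dataclasses import dataclass

# Hypothetical sensitivity scale; the paper does not publish its exact levels.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class FogRecord:
    key: str
    payload: str
    level: int  # sensitivity level assigned at write time

def can_access(requester_clearance: int, record: FogRecord) -> bool:
    """Grant read access only if the requester's clearance meets
    or exceeds the record's sensitivity level."""
    return requester_clearance >= record.level

store = {r.key: r for r in [FogRecord("temp", "22C", SENSITIVITY["public"]),
                            FogRecord("route", "A->B", SENSITIVITY["confidential"])]}
print(can_access(SENSITIVITY["internal"], store["temp"]))   # True
print(can_access(SENSITIVITY["internal"], store["route"]))  # False
```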

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 42
9246 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, contact between the desulfurization liquid flowing from the spray nozzles and the flue gas triggers chemical reactions that remove sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the mixing level between the flue gas and the desulfurization liquid. The mixing efficiency and the residence time can be increased by perforated sieve trays, which can significantly improve the desulfurization efficiency. Hence, the purpose of this research is to investigate the flow structure of sieve trays for flue gas desulfurization by numerical simulation. In this study, an outlet at the top of the FGD tower discharges the clean gas, and the tower has a deep tank at the bottom, which is used to collect the slurry liquid. In the major desulfurization zone, the desulfurization liquid and flue gas form a complex mixing flow. This zone contains four perforated plates spaced 0.4 m from each other, and the spray array, which includes 33 nozzles, is placed above the top sieve tray. Each nozzle injects desulfurization liquid consisting of Mg(OH)2 solution. For each sieve tray, the outside diameter, hole diameter, and porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas flows into the FGD tower through the space between the major desulfurization zone and the deep tank and finally leaves clean. The desulfurization liquid and the liquid slurry go to the bottom tank and are discharged as waste. When the desulfurization solution impacts a sieve tray, its downward momentum is transferred to the upper surface of the tray.
As a result, a thin liquid layer develops above the sieve tray, the so-called slurry layer, in which the liquid volume fraction is around 0.3-0.7. Therefore, the liquid phase cannot be treated as a discrete phase under the Eulerian-Lagrangian framework. In addition, there is a liquid column through the sieve trays; this downward liquid column becomes narrow as it interacts with the upward gas flow. After the flue gas flows into the major desulfurization zone, its flow direction is upward (+y) in the region between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down into the slurry layer, developing a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays results in a sufficiently large two-phase contact area. It also increases the number of times the flue gas interacts with the desulfurization liquid. In this way, the sieve trays improve the two-phase mixing, which may improve the SO2 removal efficiency.
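In the Eulerian-Eulerian framework, each phase carries a volume fraction that sums to one in every cell, and mixture properties are volume-fraction-weighted. A minimal sketch with illustrative property values (not taken from the simulation), including the 0.3-0.7 slurry-layer criterion reported above:

```python
def mixture_density(alpha_liquid: float, rho_liquid: float, rho_gas: float) -> float:
    """Volume-fraction-weighted mixture density used in Eulerian-Eulerian models:
    rho_mix = a_l * rho_l + (1 - a_l) * rho_g, with a_l + a_g = 1."""
    return alpha_liquid * rho_liquid + (1.0 - alpha_liquid) * rho_gas

def in_slurry_layer(alpha_liquid: float) -> bool:
    # The abstract reports liquid volume fractions of ~0.3-0.7 in the slurry layer.
    return 0.3 <= alpha_liquid <= 0.7

# Illustrative values: Mg(OH)2 slurry ~1000 kg/m^3, flue gas ~1 kg/m^3
print(mixture_density(0.5, 1000.0, 1.0))  # 500.5
print(in_slurry_layer(0.5))               # True
```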

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 270
9245 Localization of Pyrolysis and Burning of Ground Forest Fires

Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov

Abstract:

This paper presents the results of experiments carried out at a specialized test site to establish macroscopic patterns of heat and mass transfer processes when localizing model combustion sources of ground forest fires with the use of barrier lines in the form of a wetted layer of material in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a comprehensive analysis of the effect of the dispersion of the water aerosol (concentration and size of droplets) used to form the barrier line. It is shown that effective conditions for localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing burning and thermal decomposition of forest combustible materials. We also predicted the necessary and sufficient parameters of barrier lines (water volume, width and depth of the wetted layer of material, specific irrigation density) for combustion sources of different dimensions, corresponding to real fire-extinguishing practice.

Keywords: forest fire, barrier water lines, pyrolysis front, flame front

Procedia PDF Downloads 116
9244 Analyzing the Support to Fisheries in the European Union: Modelling Budgetary Transfers in Wild Fisheries

Authors: Laura Angulo, Petra Salamon, Martin Banse, Frederic Storkamp

Abstract:

Fisheries subsidies focus on reducing management costs or delivering income benefits to fishers. In 2015, total fishery budgetary transfers in 31 OECD countries represented 35% of their total landing value. However, subsidies to fishing have adverse effects on trade, and it has been claimed that they may contribute directly to overfishing. Therefore, this paper analyses to what extent fisheries subsidies may 1) influence capture production facing quotas and 2) affect price dynamics. The study uses the fish module in AGMEMOD (Agriculture Member States Modelling; for details see Chantreuil et al. (2012)), which covers eight fish categories (cephalopods; crustaceans; demersal marine fish; pelagic marine fish; molluscs excl. cephalopods; other marine finfish species; freshwater and diadromous fish) for EU member states and other selected countries, developed under the SUCCESS project. This model incorporates transfer payments directly linked to fisheries operational costs. As aquaculture and wild fishery are not included within the WTO Agreement on Agriculture, data on fisheries subsidies are obtained from the OECD Fisheries Support Estimates (FSE) database, which provides statistics on budgetary transfers to the fisheries sector. Since support has been moving from budgetary transfers to the General Service Support Estimate in recent years, subsidies in capture production may not show substantial effects. Nevertheless, they would still show the impact across countries and fish categories within the European Union.

Keywords: AGMEMOD, budgetary transfers, EU Member States, fish model, fisheries support estimate

Procedia PDF Downloads 233
9243 Public Debt Shocks and Public Goods Provisioning in Nigeria: Implication for National Development

Authors: Amenawo I. Offiong, Hodo B. Riman

Abstract:

The public debt profile of Nigeria has continuously been on the increase over the years. The drop in international crude oil prices has further worsened the revenue position of the country, thus necessitating further acquisition of public debt to bridge the revenue deficit. Yet, looking back at the increasing public sector spending, there are concerns that government spending has not translated into a corresponding increase in the public goods provided for the country. Using data from 1980 to 2014, the study therefore seeks to investigate the factors responsible for the poor provision of public goods in the face of an increasing public debt profile. Using an unrestricted VAR model, governance and tax revenue were introduced into the model as structural variables. The results suggested that governance and tax revenue were structural determinants of the effectiveness of public goods provisioning in Nigeria. The study therefore identified weak governance as the major reason for the non-provision of public goods in Nigeria. While tax revenue exerted a positive influence on the provision of public goods, weak/poor governance was observed to crowd out the benefits from increased tax revenue. The study therefore recommends a reappraisal of the governance system in Nigeria. Elected officers should be more transparent and accountable to the electorates they represent. Furthermore, the study advocates annual auditing of all government MDAs' accounts by external auditors to ensure (a) accountability in public debt utilization, (b) transparency in the implementation of programme support funds, (c) integrity of agencies responsible for programme management, and (d) measurement of programme effectiveness against the amount of funds expended.
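The impulse response functions named in the keywords trace how a one-off shock propagates through a VAR system. A minimal pure-Python sketch for a VAR(1), x_t = A x_{t-1} + e_t, with an illustrative (not estimated) coefficient matrix:

```python
# The response at horizon h to a shock vector is A^h applied to that shock.
# The 2x2 matrix below is made up for illustration and is NOT estimated
# from the study's Nigerian data.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def impulse_response(A, shock, horizons):
    """Propagate a one-off shock through x_t = A x_{t-1}, recording each horizon."""
    responses, x = [], [[s] for s in shock]  # shock as a column vector
    for _ in range(horizons):
        responses.append([row[0] for row in x])
        x = mat_mul(A, x)
    return responses

# Illustrative 2-variable system: [public goods provision, public debt]
A = [[0.5, 0.2],
     [0.0, 0.8]]
irf = impulse_response(A, shock=[0.0, 1.0], horizons=4)  # unit debt shock
print(irf[0])  # [0.0, 1.0] at impact
print(irf[1])  # [0.2, 0.8] one period later
```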

Keywords: impulse response function, public debt shocks, governance, public goods, tax revenue, vector auto-regression

Procedia PDF Downloads 241
9242 Defining Priority Areas for Biodiversity Conservation to Support for Zoning Protected Areas: A Case Study from Vietnam

Authors: Xuan Dinh Vu, Elmar Csaplovics

Abstract:

There has been an increasing need for methods to define priority areas for biodiversity conservation, since the effectiveness of biodiversity conservation in protected areas largely depends on the availability of material resources. The identification of priority areas requires the integration of biodiversity data together with social data on human pressures and responses. However, the deficit of comprehensive data and reliable methods remains a key challenge in zoning where the demand for conservation is most urgent and where the outcomes of conservation strategies can be maximized. In order to fill this gap, the study applied the Condition-Pressure-Response environmental model to suggest a set of criteria for identifying priority areas for biodiversity conservation. Our empirical data were compiled from 185 respondents, categorized into three main groups: governmental administration, research institutions, and protected areas in Vietnam, using a well-designed questionnaire. Then, the Analytic Hierarchy Process (AHP) was used to identify the weight of each criterion. Our results show that the priority level for biodiversity conservation can be identified by three main indicators: condition, pressure, and response, with weights of 26%, 41%, and 33%, respectively. Based on these three indicators, 7 criteria and 15 sub-criteria were developed to support the definition of priority areas for biodiversity conservation and the zoning of protected areas. In addition, our study revealed that the governmental administration and protected area groups focused on the 'Pressure' indicator, while the research institution group emphasized the importance of the 'Response' indicator in the evaluation process. Our results provide recommendations for applying the developed criteria to identify priority areas for biodiversity conservation in Vietnam.
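The AHP derives criterion weights from a reciprocal pairwise comparison matrix; the geometric-mean method is a common approximation of the principal eigenvector. A sketch with an illustrative matrix chosen to land near the reported 26%/41%/33% split (these are not the authors' actual pairwise judgments):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector: geometric mean of each row of the
    reciprocal comparison matrix, normalized to sum to 1."""
    n = len(pairwise)
    row_geo = []
    for row in pairwise:
        p = 1.0
        for v in row:
            p *= v
        row_geo.append(p ** (1.0 / n))
    total = sum(row_geo)
    return [g / total for g in row_geo]

# Illustrative judgments for (Condition, Pressure, Response);
# entry a_ij = how much more important criterion i is than criterion j.
M = [[1.0,    1 / 1.5, 1 / 1.25],
     [1.5,    1.0,     1.25],
     [1.25,   1 / 1.25, 1.0]]
w = ahp_weights(M)
print([round(x, 2) for x in w])  # roughly [0.27, 0.41, 0.33]
```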

Keywords: biodiversity conservation, condition–pressure–response model, criteria, priority areas, protected areas

Procedia PDF Downloads 146
9241 Views from Shores Past: Palaeogeographic Reconstructions as an Aid for Interpreting the Movement of Early Modern Humans on and between the Islands of Wallacea

Authors: S. Kealy, J. Louys, S. O’Connor

Abstract:

The island archipelago that stretches between the continents of Sunda (Southeast Asia) and Sahul (Australia-New Guinea), comprising much of modern-day Indonesia as well as Timor-Leste, represents the biogeographic region of Wallacea. The islands of Wallacea are significant archaeologically as they have never been connected to the mainland of either Sunda or Sahul; thus, the colonization of these islands, and subsequently of Australia and New Guinea, by early modern humans would have necessitated some form of water crossing. Accurate palaeogeographic reconstructions of the Wallacean Archipelago for this time are important not only for modeling likely routes of colonization but also for reconstructing likely landscapes and hence the resources available to the first colonists. Here we present five digital reconstructions of the coastal outlines of Wallacea and Sahul (Australia and New Guinea) for the periods 65, 60, 55, 50, and 45,000 years ago, using the latest bathymetric charts and a sea-level model adjusted to account for the average uplift rate known from Wallacea. These data were also used to reconstruct island areal extent as well as topography for each time period. The reconstructions allowed us to determine the distance from the coast and the relative elevation of the earliest archaeological sites for each island where such records exist. This enabled us to approximate how much effort the exploitation of coastal resources would have taken for early colonists, and how important such resources were. The reconstructions also allowed us to estimate visibility for each island in the archipelago, and to model how intervisible the islands were during the period of likely human colonisation. We demonstrate how these models provide archaeologists with an important basis for visualising this ancient landscape and interpreting how it was originally viewed, traversed and exploited by its earliest modern human inhabitants.
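Island intervisibility can be approximated geometrically: an observer at height h sees a sea-level horizon at distance d = sqrt(2Rh), and two landmasses are intervisible when their separation is below the sum of their horizon distances (atmospheric refraction ignored). A minimal sketch with illustrative heights, not values from the study:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def horizon_distance_km(height_m: float) -> float:
    """Geometric distance to the sea-level horizon from a given elevation,
    d = sqrt(2 * R * h), refraction ignored."""
    return math.sqrt(2.0 * EARTH_RADIUS_M * height_m) / 1000.0

def intervisible(h1_m: float, h2_m: float, separation_km: float) -> bool:
    """Two landmasses are geometrically intervisible when the water gap
    between them is less than the sum of their horizon distances."""
    return separation_km < horizon_distance_km(h1_m) + horizon_distance_km(h2_m)

# Illustrative: a 500 m peak viewed from a watercraft (eye height ~2 m)
print(round(horizon_distance_km(500.0), 1))  # ~79.8 km
print(intervisible(500.0, 2.0, 60.0))        # True
```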

Keywords: Wallacea, palaeogeographic reconstructions, islands, intervisibility

Procedia PDF Downloads 186
9240 The Differences and the Similarities between Corporate Governance Principles in Islamic Banks and Conventional Banks

Authors: Osama Shibani

Abstract:

Effective corporate governance is critical to the proper functioning of the banking sector and the economy as a whole. The Basel Committee has issued principles of corporate governance inspired by the Organisation for Economic Co-operation and Development (OECD), but there is no single model of corporate governance that can work well in every country; each country, or even each organization, should develop its own model to cater for its specific needs and objectives. Corporate governance in Islamic institutions is unique: it has a particular structure and is guided by a control body, the Shariah Supervisory Board (SSB). For this reason, the Islamic Financial Services Board (IFSB) in Malaysia has amended the BCBS corporate governance principles to suit the nature of the work of Islamic institutions. This paper highlights these amendments using a comparative analysis method in the context of the differences between the corporate governance structures of Islamic banks and conventional banks. We find a few differences between principles (Principle 1: The Board's overall responsibilities; Principle 3: The Board's own structure and practices; Principle 9: Compliance; Principle 10: Internal audit; Principle 12: Disclosure and transparency), and there are similarities between principles (Principle 2: Board qualifications and composition; Principle 4: Senior Management (composition and tasks); Principle 6: Risk management; Principle 8: Risk communication). Finally, we find that the corporate governance principles issued by the Islamic Financial Services Board (IFSB) complement the corporate governance principles of the Basel Committee on Banking Supervision (BCBS), with some modifications to suit the composition of Islamic banks; there are, however, deficiencies in the Basel Committee's attention to Islamic banks.

Keywords: basel committee (BCBS), corporate governance principles, Islamic financial services board (IFSB), agency theory

Procedia PDF Downloads 273
9239 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field of natural language processing long used word vector methods such as Word2Vec to encode text; these word vector methods can be regarded as static pre-training techniques. However, this context-free text representation brings very limited improvement to downstream natural language processing tasks and cannot solve the problem of word polysemy. ELMo proposed a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, the BERT model significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field of natural language processing and ushering the field into the era of dynamic pre-training technology. Subsequently, a large number of pre-trained language models based on BERT and XLNet have continued to emerge, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training techniques in natural language processing, including early static pre-training techniques and classic dynamic pre-training techniques; it then briefly reviews a series of subsequent pre-training technologies, including improved models based on BERT and XLNet; on this basis, it analyzes the problems faced by current pre-training technology research; finally, it looks forward to future development trends of pre-training technology.
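The polysemy limitation of static embeddings can be shown in a toy example: a Word2Vec-style lookup table assigns one vector per word type, so 'bank' receives the same representation in different senses (the vectors and vocabulary below are made up for illustration):

```python
# A static embedding table: one fixed vector per word type.
STATIC_EMBEDDINGS = {
    "bank":  [0.3, 0.9],
    "river": [0.1, 0.8],
    "money": [0.9, 0.2],
}

def embed(sentence):
    """Look up a static vector for each in-vocabulary word."""
    return [STATIC_EMBEDDINGS[w] for w in sentence.split() if w in STATIC_EMBEDDINGS]

v1 = embed("money in the bank")[-1]        # financial sense of "bank"
v2 = embed("down by the river bank")[-1]   # riverside sense of "bank"
print(v1 == v2)  # True: identical vectors despite different senses
```

Contextual models such as ELMo and BERT instead compute each token's vector from the surrounding sentence, so the two occurrences would differ.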

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 33
9238 Structure Clustering for Milestoning Applications of Complex Conformational Transitions

Authors: Amani Tahat, Serdal Kirmizialtin

Abstract:

Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS) and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational state into natural subgroups based on their similarity, an essential statistical methodology for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confuse the identification of clusters when using traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for studying cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model for studying conformational transitions; ii) the conformational space of nucleic acids is high-dimensional, and a diverse set of internal coordinates is necessary to describe the metastable states, posing a challenge in studying conformational transitions. Herein, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way for a wide range of confused data conditions.
The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. A Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second. The performance of the neural network and of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules.
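Single-linkage clustering merges the two clusters whose closest members are nearest, which produces the chaining behavior that can confuse cluster identity when milestones lie close together. A minimal sketch on illustrative one-dimensional data (real applications would use an RMSD metric over conformations, as in GROMACS):

```python
# Agglomerative single-linkage clustering with a distance cutoff.
# Inter-cluster distance = the MINIMUM pairwise distance between members.

def single_linkage(points, cutoff):
    clusters = [[p] for p in points]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if d <= cutoff and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:          # no pair within the cutoff remains
            return clusters
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair of clusters

print(single_linkage([0.0, 0.1, 0.2, 5.0, 5.1], cutoff=0.5))
```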

Keywords: milestoning, self organizing map, single linkage, structure clustering

Procedia PDF Downloads 207
9237 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas that is usually taught based on the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model in this study is interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders mounted at the same height so that they are horizontally aligned; they rotate at the same angular velocity, but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, since the magnitudes of the normal forces in the system are functions of position, and the frictional forces, with a constant coefficient of friction, are proportional to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the phenomenon of the described frictional oscillator is not so straightforward after all; experiments show that the simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect.
The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to a more complex model that can describe the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles; this specific problem, however, demonstrates the key concept of the varying friction coefficient and its importance in an interesting and illustrative way. The position function of the rod is described by a more complicated and non-trivial, yet more precise, equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical position function with the measured one in time. The project provides useful materials and knowledge for undergraduate students and a new perspective in university physics education.
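The setup lends itself to a short numerical experiment. The sketch below is not from the paper: it integrates a rod-on-cylinders equation of motion with an assumed Stribeck-type, velocity-dependent friction coefficient, and every parameter value (geometry, speeds, friction law) is an invented illustration. With a constant coefficient the same code reproduces harmonic motion; the velocity dependence produces the growth toward a stable, non-harmonic amplitude described above.

```python
import math

# Illustrative sketch only (not the paper's model or code): a rod of half-span
# L rests on two counter-rotating cylinders; the friction coefficient decays
# with sliding speed (an assumed Stribeck-type law). All values are invented.
G = 9.81     # gravity, m/s^2
L = 0.10     # half-distance between the cylinder axes, m
VS = 0.30    # cylinder surface speed at the contact, m/s
MU_K, MU_S, V0 = 0.30, 0.50, 0.10   # kinetic mu, static mu, Stribeck velocity

def mu(v_rel):
    """Friction coefficient decaying from MU_S to MU_K with sliding speed."""
    return MU_K + (MU_S - MU_K) * math.exp(-abs(v_rel) / V0)

def accel(x, v):
    """Net friction acceleration; normal forces vary linearly with position."""
    n1 = G * (L - x) / (2 * L)          # left contact, per unit mass
    n2 = G * (L + x) / (2 * L)          # right contact, per unit mass
    s1 = VS - v                         # surface speed relative to rod (left)
    s2 = -VS - v                        # right surface moves in -x
    return (mu(s1) * n1 * math.copysign(1.0, s1)
            + mu(s2) * n2 * math.copysign(1.0, s2))

def simulate(x0=0.01, v0=0.0, dt=1e-3, t_end=20.0):
    """Classical RK4 integration of x'' = accel(x, x'); returns positions."""
    x, v, xs = x0, v0, []
    for _ in range(int(t_end / dt)):
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        xs.append(x)
    return xs

xs = simulate()
late = xs[3 * len(xs) // 4:]
print(max(abs(x) for x in late))   # amplitude the oscillation settles at
```

Starting from a small displacement, the velocity-dependent coefficient pumps energy into the oscillation until it saturates at a bounded amplitude, which is the classroom observation the abstract reports.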

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 182
9236 Clustering for Detection of the Population at Risk of Anticholinergic Medication

Authors: A. Shirazibeheshti, T. Radwan, A. Ettefaghian, G. Wilson, C. Luca, Farbod Khanizadeh

Abstract:

Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to the anticholinergic burden scores of the medicines prescribed to them, to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on over 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. To further evaluate the performance of the model, associations between the average risk score within each group and other factors, such as socioeconomic status (i.e., Index of Multiple Deprivation) and an index of health and disability, were investigated. The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings also show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, indicating that females are at greater risk from this kind of multiple medication. The risk may be monitored and controlled in healthcare management systems that are well equipped with artificial intelligence.
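As an illustration of the clustering step only, the sketch below implements a minimal one-dimensional mean shift over per-patient weighted risk scores. The scores and bandwidth are invented; a production system would use a library implementation such as scikit-learn's MeanShift.

```python
# Illustrative sketch (not the deployed system): 1-D mean-shift clustering of
# invented weighted anticholinergic risk scores.
def mean_shift_1d(scores, bandwidth=1.0, tol=1e-4, max_iter=200):
    """Shift each point to the mean of its bandwidth window until it settles
    on a density mode, then merge nearby modes into cluster centres."""
    modes = []
    for s in scores:
        m = float(s)
        for _ in range(max_iter):
            window = [x for x in scores if abs(x - m) <= bandwidth]
            if not window:
                break
            new_m = sum(window) / len(window)
            if abs(new_m - m) < tol:
                m = new_m
                break
            m = new_m
        modes.append(m)
    centres = []
    for m in sorted(modes):               # merge modes closer than bandwidth
        if not centres or m - centres[-1] > bandwidth:
            centres.append(m)
    labels = [min(range(len(centres)), key=lambda i: abs(m - centres[i]))
              for m in modes]
    return centres, labels

risk_scores = [0.2, 0.5, 0.4, 3.1, 3.3, 2.9, 8.0, 8.4, 8.1]
centres, labels = mean_shift_1d(risk_scores)
print(len(centres), labels)   # three risk groups, one label per patient
```

Unlike k-means, mean shift does not need the number of risk groups in advance, which fits a screening setting where the number of strata is unknown.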

Keywords: anticholinergic medicines, clustering, deprivation, socioeconomic status

Procedia PDF Downloads 187
9235 Impact of Unusual Dust Event on Regional Climate in India

Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad

Abstract:

A severe dust storm generated by a western disturbance over north Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically: the PM10 concentration peaked at a very high value of around 1018 μg m⁻³ during the dust storm hours of May 30, 2014 at New Delhi. The present study depicts aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high Aerosol Optical Depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the Single Scattering Albedo (SSA) and the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event made the aerosol optical state more absorbing. The single scattering albedo, refractive index, volume size distribution, and asymmetry parameter (ASY) values suggested that dust aerosols were predominant over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in the radiative flux at the surface level caused significant cooling at the surface. The Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent surface cooling was evident, with forcing ranging from −31 W m⁻² to −82 W m⁻², together with atmospheric heating ranging from 15 W m⁻² to 92 W m⁻² and a forcing of −2 W m⁻² to 10 W m⁻² at the top of the atmosphere.
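The sign conventions behind those DARF numbers follow standard flux bookkeeping, which can be sketched as follows. This is not the authors' radiative transfer code, and every flux value below is invented for illustration: the forcing at a level is the net (downward minus upward) flux with aerosol minus the net flux without it, and the atmospheric forcing is the TOA value minus the surface value.

```python
# Sketch of the standard flux bookkeeping behind DARF (not the authors'
# radiative transfer model); all flux values are invented.
def darf(down_aer, up_aer, down_clean, up_clean):
    """Direct aerosol radiative forcing at one level, in W m^-2."""
    return (down_aer - up_aer) - (down_clean - up_clean)

# hypothetical dust-day shortwave fluxes (W m^-2)
sfc = darf(down_aer=880.0, up_aer=140.0, down_clean=960.0, up_clean=155.0)
toa = darf(down_aer=1361.0, up_aer=108.0, down_clean=1361.0, up_clean=113.0)
atm = toa - sfc   # energy retained within the atmospheric column
print(sfc, toa, atm)   # negative at the surface, positive in the atmosphere
```

With these invented fluxes the surface forcing is negative (cooling) while the column forcing is positive (heating), reproducing the qualitative pattern the abstract reports.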

Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer

Procedia PDF Downloads 364
9234 Problem Solving Courts for Domestic Violence Offenders: Duluth Model Application in Spanish-Speaking Offenders

Authors: I. Salas-Menotti

Abstract:

Problem-solving courts were created to assist offenders with specific needs that were not addressed properly in traditional courts. Their main objective is to pursue solutions that benefit the offender, the victim, and society. These courts were developed as an innovative response to deal with issues such as drug abuse, mental illness, and domestic violence. In Brooklyn, men who are charged with domestic violence related offenses for the first time are offered plea bargains that include attendance at a domestic abuse intervention program as a condition to dismiss the most serious charges and avoid incarceration. The desired outcome is that the offender will engage in a program that modifies his behavior, preventing new incidents of domestic abuse; that he is held accountable to the victim; and, ultimately, that statistics related to domestic abuse incidents are brought down. This paper discusses the effectiveness of the Duluth model as applied to Spanish-speaking men mandated to participate in the program by the specialized domestic violence courts in Brooklyn. A longitudinal study was conducted with 243 Spanish-speaking men mandated to participate in the men's program offered by EAC in Brooklyn in the years 2016 through 2018 to determine the recidivism rate of domestic violence crimes. Results show that the recidivism rate was less than 5% per year after completing the program, which indicates that the intervention is effective in preventing new abuse allegations and subsequent arrests. It is recommended that a comparative study with English-speaking participants be conducted to determine the cultural and language variables affecting the program's efficacy.

Keywords: domestic violence, domestic abuse intervention programs, problem-solving courts, Spanish-speaking offenders

Procedia PDF Downloads 112
9233 European Hinterland and Foreland: Impact of Accessibility, Connectivity, Inter-Port Competition on Containerization

Authors: Dial Tassadit Rania, Figueiredo De Oliveira Gabriel

Abstract:

In this paper, we investigate the relationship between ports and their hinterland and foreland environments and the competitive relationship between the ports themselves. These two environments are changing, evolving, and introducing new challenges for commercial and economic development at the regional, national, and international levels. Because of the rise of containerization, shipping costs and port handling costs have decreased considerably due to economies of scale; the volume of maritime trade has increased substantially and the markets served by the ports have expanded. On this basis, overlapping hinterlands can give rise to competition between ports. Our main contribution compared to the existing literature is to build a set of hinterland, foreland, and competition indicators. Using these indicators, we investigate the effect of hinterland accessibility, foreland connectivity, and inter-port competition on the containerized traffic of European ports. For this, we use a panel database covering 2004 to 2014. Our hinterland indicators are two indicators of accessibility; they describe the market potential of a port and are calculated using information on population and wealth (GDP), computed for neighborhoods within a distance from the port ranging from 100 to 1000 km. For the foreland, we produce two indicators: port connectivity and the number of partners of each port. Finally, we compute two indicators of inter-port competition and a market concentration indicator (Hirschman-Herfindahl) for different neighborhood distances around the port. We then apply a fixed-effect model to test the relationships above and, again with a fixed-effect model, conduct a sensitivity analysis for each of these indicators to support the results obtained. 
The econometric results of the general model, given by the regression of the accessibility indicators, the LSCI for port i, and the inter-port competition indicator on the containerized traffic of European ports, show a positive and significant effect for accessibility to wealth but not to population. The results are positive and significant for the two indicators of connectivity and competition as well. One of the main results of this research is that port development, measured here by the growth of containerized traffic, is strongly related to the development of the port's hinterland and foreland environment. In addition, it is the market potential, given by the wealth of the hinterland, that has an impact on the containerized traffic of a port; accessibility to a large population pool is not important for understanding the dynamics of containerized port traffic. Furthermore, in order to continue to develop, a port must penetrate its hinterland at a depth exceeding 100 km around the port and seek markets beyond this perimeter. Port authorities could focus their marketing efforts on the immediate hinterland, which, as the results show, may not be captive, and thus engage new approaches to port governance to make the port more attractive.
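The concentration measure used above can be sketched concretely. The Hirschman-Herfindahl index squares and sums the traffic shares of the ports inside one neighborhood; it runs from 1/n (traffic evenly split among n ports) to 1 (monopoly). The traffic figures below are invented for illustration.

```python
# Minimal sketch of the Hirschman-Herfindahl concentration index over the
# container traffic shares of ports in one neighbourhood (invented figures).
def hhi(traffics):
    total = sum(traffics)
    return sum((t / total) ** 2 for t in traffics)

print(hhi([100, 100, 100, 100]))  # four equal ports -> 0.25
print(hhi([370, 10, 10, 10]))     # one dominant port -> close to 1
print(hhi([500]))                 # monopoly -> 1.0
```

Computed over different neighborhood radii around each port, the index gives the distance-dependent competition measure the paper describes.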

Keywords: accessibility, connectivity, European containerization, European hinterland and foreland, inter-port competition

Procedia PDF Downloads 180
9232 Measuring Principal and Teacher Cultural Competency: A Need Assessment of Three Proximate PreK-5 Schools

Authors: Teresa Caswell

Abstract:

Throughout the United States and within a myriad of demographic contexts, students of color experience the results of systemic inequities as an academic outcome. These disparities continue despite the increased resources provided to students and ongoing instruction-focused professional learning received by teachers. The researcher postulated that lower levels of educator cultural competency are an underlying factor of why resource and instructional interventions are less effective than desired. Before implementing any type of intervention, however, cultural competency needed to be confirmed as a factor in schools demonstrating academic disparities between racial subgroups. A needs assessment was designed to measure levels of individual beliefs, including cultural competency, in both principals and teachers at three neighboring schools verified to have academic disparities. The resulting mixed method study utilized the Optimal Theory Applied to Identity Development (OTAID) model to measure cultural competency quantitatively, through self-identity inventory survey items, with teachers and qualitatively, through one-on-one interviews, with each school’s principal. A joint display was utilized to see combined data within and across school contexts. Each school was confirmed to have misalignments between principal and teacher levels of cultural competency beliefs while also indicating that a number of participants in the self-identity inventory survey may have intentionally skipped items referencing the term oppression. Additional use of the OTAID model and self-identity inventory in future research and across contexts is needed to determine transferability and dependability as cultural competency measures.

Keywords: cultural competency, identity development, mixed-method analysis, needs assessment

Procedia PDF Downloads 135
9231 Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) deals both with substitution synergies (exchange of waste materials, fatal energy, and utilities as resources for production) and with infrastructure/service sharing synergies. The latter is based on intensifying the use of an asset and thus requires balancing capital cost increments against snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and pricing schedule) to commit to investments in capacities and transactions. Our model investigates the decision of two actors trying to cooperatively choose a level of infrastructure capacity oversizing in order to set a plug-and-play offer to a potential entrant whose capacity requirement is randomly distributed, while satisficing their own requirements. Capacity cost exhibits a sub-additive property, so that there is room for profitable overcapacity setting in the first period. The entrant’s willingness-to-pay for access to the infrastructure depends on its standalone cost and on the capacity gap that it must complete in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractable from the entrant, we derive the expected complement cost function, which helps us define the investors’ objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant’s requirements. We then derive the general form of the solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy a public actor could apply to foster such sharing synergies.
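For the uniform case, the shape claims above can be checked numerically. The sketch assumes (as an illustration, not the paper's exact specification) an entrant requirement D uniform on [0, b], oversizing by K units, and a per-unit shortfall cost c, giving the closed form E[cost] = c(b − K)²/(2b), which is indeed decreasing and convex in K.

```python
# Illustrative sketch (parameters b, c, and the cost law are invented):
# expected complement cost when the entrant's requirement D ~ Uniform(0, b)
# and each unit of shortfall max(D - K, 0) costs c.
def expected_complement_cost(K, b=100.0, c=5.0):
    if K >= b:
        return 0.0               # oversizing covers any possible requirement
    return c * (b - K) ** 2 / (2 * b)

costs = [expected_complement_cost(k) for k in range(0, 101, 10)]
assert all(x >= y for x, y in zip(costs, costs[1:]))           # decreasing
assert all(costs[i - 1] - 2 * costs[i] + costs[i + 1] >= 0     # convex
           for i in range(1, len(costs) - 1))
print(costs[0], costs[5], costs[10])
```

The decreasing, convex curve is what makes the investors' trade-off well behaved: each extra unit of oversizing lowers the expected complement cost, but at a diminishing rate.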

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 198
9230 The Roman Fora in North Africa Towards a Supportive Protocol to the Decision for the Morphological Restitution

Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda

Abstract:

This research delves into the fundamental question of the morphological restitution of built archaeology in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of study, its analysis, and the methodology for solving the morphological problem posed can be managed only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in this field, ranging from the use of new tools to the integration of archaeological information systems, where the urban question involves the interplay of several disciplines. The built archaeological object is also an architectural and morphological object: a set of articulated elementary data whose understanding can be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the interchange between the different disciplines uses the capacity of each to frame the reflection on the incomplete elements of a given architecture or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the search for forms and articulations. The choice of the Roman forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum draws on a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail. 
The experimentation with the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which grounds the experimentation in the elaboration of a morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of the Roman fora in North Africa. This paper presents a first experimentation with the model elaborated during this research, a model framed by a paradigmatic discussion that seeks to position the research in relation to the existing paradigmatic and experimental knowledge on the issue.

Keywords: classical reasoning, logicist reasoning, archaeology, architecture, Roman forum, morphology, calculation

Procedia PDF Downloads 132
9229 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon. Several physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate in various modern applications such as drug delivery, supercapacitors, and sensors. GO has also been used in the photothermal treatment of cancers and of Alzheimer’s disease, among others. The main reason for choosing GO in our work is that it is surface active: it has a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and in the basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. We have used different spectroscopic techniques in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. It was observed that the photophysics of the dye inside the reverse micelles is modulated in the presence of GO compared to its absence. Here we report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with the normal reverse micelle system (i.e., reverse micelles in the absence of GO), using the 7-DCCA molecule. 
The absorption maxima of 7-DCCA were blue-shifted and the emission maxima red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation in GO-containing reverse micelles is always faster than in normal reverse micelles. The solvent relaxation at lower w₀ values is always slower in GO-containing reverse micelles than in normal reverse micelles, and at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to normal reverse micelles because the presence of GO increases the polarity of the system, and as the polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 140
9228 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System

Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain

Abstract:

Emission of carbon dioxide (CO2) has adversely affected the environment, and one of its major sources is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased CO2 emission into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that need to be established. To contribute to reducing CO2 emission, this research proposes a smart parking system. A cloud-based solution is provided to drivers, which automatically searches for and recommends the most preferred parking slots. To determine the preferences of the parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used, together with sorting, multi-level filtering, exploratory data analysis (EDA), the Analytic Hierarchy Process (AHP), and the weighted sum model (WSM), to rank the parking areas and recommend to drivers the top-k most preferred ones. In the EDA process, “2020testcar-2020-03-03”, a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology outperforms the conventional one in reducing the emission of CO2 into the atmosphere.
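The WSM ranking step can be sketched as follows. The feature names, weights, and scores are all invented for illustration (the paper's actual features and AHP-derived weights are not reproduced here): each candidate's normalised feature scores are combined into one preference value, and the top-k areas are recommended.

```python
# Sketch of the weighted sum model (WSM) ranking step only; feature set,
# weights, and scores are invented.
def wsm_rank(parkings, weights, k=3):
    def score(p):
        return sum(weights[f] * p[f] for f in weights)
    return sorted(parkings, key=score, reverse=True)[:k]

# feature scores already normalised to [0, 1]; higher is better ("low_co2"
# is high when the expected CO2 from driving and searching is low)
weights = {"proximity": 0.4, "availability": 0.3, "low_co2": 0.3}
parkings = [
    {"name": "P1", "proximity": 0.9, "availability": 0.2, "low_co2": 0.8},
    {"name": "P2", "proximity": 0.6, "availability": 0.9, "low_co2": 0.7},
    {"name": "P3", "proximity": 0.3, "availability": 0.8, "low_co2": 0.4},
]
print([p["name"] for p in wsm_rank(parkings, weights, k=2)])
```

In a pipeline like the one described, the weights would come from the AHP step rather than being fixed by hand as they are in this toy example.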

Keywords: car parking, CO2, CO2 reduction, IoT, merge sort, number plate recognition, smart car parking

Procedia PDF Downloads 134
9227 Structural Morphing on High Performance Composite Hydrofoil to Postpone Cavitation

Authors: Fatiha Mohammed Arab, Benoit Augier, Francois Deniset, Pascal Casari, Jacques Andre Astolfi

Abstract:

For top high-performance foiling yachts, cavitation is often a limiting factor for take-off and top speed. This work investigates solutions to delay the onset of cavitation through structural morphing. The structural morphing is based on a compliant leading and trailing edge, with an effect similar to flaps. It is shown here that the commonly accepted effect of flaps on the control of lift and drag forces can also be used to postpone the inception of cavitation. A numerical and experimental study is conducted in order to assess the effect of the geometric parameters of the hydrofoil on its hydrodynamic performance and on cavitation inception. The effect of a 70% trailing edge flap and a 30% leading edge flap on a NACA 0012 is investigated using the XFOIL software at a constant Reynolds number of 10⁶. The simulations were carried out for a range of flap deflections and various angles of attack. The results showed that the lift coefficient increases with flap deflection, but also with angle of attack, and that the cavitation bucket is enlarged. To evaluate the reliability of the XFOIL results, a 2D flow analysis over a NACA 0012 with leading and trailing edge flaps was performed using the Fluent software. The results of the two methods are in good agreement. To validate the numerical approach, a passive adaptive composite model was built and tested in the hydrodynamic tunnel at the Research Institute of the French Naval Academy. The model demonstrates the ability to reproduce the effect of a flap through LE and TE structural morphing under hydrodynamic loading.

Keywords: cavitation, flaps, hydrofoil, panel method, xfoil

Procedia PDF Downloads 162
9226 Predictions of Dynamic Behaviors for Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

A simulation scheme of rotational motions for the prediction of bump-type gas foil bearings operating at steady state is proposed. The scheme is based on multi-physics coupling computer-aided engineering packages, modularized with a computational fluid dynamics model and a structural elasticity model, to numerically solve the dynamic equations of motion of a hydrodynamically loaded shaft supported by an elastic bump foil. The bump foil is modelled as an infinite number of Hookean springs mounted on a stiff wall; hence, the top foil stiffness is constant along the periphery of the bearing housing. The hydrodynamic pressure generated by the air film lubrication is transferred to the top foil and induces an elastic deformation that is solved by a finite element program, whereas the pressure profile applied on the top foil is obtained from a finite element program based on the Reynolds equation of lubrication theory. As a result, the equations of motion of the bearing shaft are solved iteratively via simultaneous coupling of the two finite element programs. In conclusion, the two-dimensional center trajectory of the shaft, together with the deformation map of the top foil at constant rotational speed, is calculated for comparison with experimental results.
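The coupling logic can be caricatured with a scalar toy model; this is emphatically not the authors' CFD/FEM code, and every constant and the power-law "film model" below are invented. The point is only the iteration pattern: a pressure solve feeds a deflection solve, the deflection updates the gap, and the pair is under-relaxed to a self-consistent state.

```python
# Toy scalar caricature of the coupled fluid-structure iteration (invented
# constants; the film law stands in for Reynolds, the spring for the FEM).
H_RIGID = 10e-6             # gap if the foil were rigid, m
K_FOIL = 5e10               # foil stiffness, Pa per metre of deflection
P_REF, H_REF = 2e5, 12e-6   # reference pressure and gap of the film model

def film_pressure(h):
    """Stand-in for the Reynolds-equation solve: pressure rises as gap closes."""
    return P_REF * (H_REF / h) ** 2

def solve_gap(relax=0.5, tol=1e-12, max_iter=500):
    """Under-relaxed fixed-point coupling of the two 'solvers'."""
    h = H_RIGID
    for _ in range(max_iter):
        deflection = film_pressure(h) / K_FOIL   # stand-in for the FEM solve
        h_new = H_RIGID + deflection
        if abs(h_new - h) < tol:
            return h_new
        h += relax * (h_new - h)
    raise RuntimeError("coupling iteration did not converge")

h = solve_gap()
print(h > H_RIGID)   # the compliant foil opens the gap under film pressure
```

Under-relaxation is the usual safeguard in such partitioned couplings: it trades a few extra iterations for robustness when the two sub-models respond strongly to each other.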

Keywords: computational fluid dynamics, fluid structure interaction multi-physics simulations, gas foil bearing, load capacity

Procedia PDF Downloads 146
9225 Towards a Business Process Model Deriving from an Intentional Perspective

Authors: Omnia Saidani Neffati, Rim Samia Kaabi, Naoufel Kraiem

Abstract:

In this paper, we propose an approach aiming at (i) representing services at two levels, the intentional level and the organizational level, and (ii) establishing mechanisms for making the transition from the first level to the second in order to execute intentional services. An example is used to validate our approach.

Keywords: intentional service, business process, BPMN, MDE, intentional service execution

Procedia PDF Downloads 381
9224 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes lead to adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil with different methods reduces settlement and provides foundation stability. A few methods, such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction, are used to control the pore pressure and enhance the strength of loose soils. One of the problems with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, further, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable, dry soil, subjected to the 1991 Uttarkashi and 1995 Chamba earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. The soil is modeled with PressureDependentMultiYield (PDMY) material models and quad elements that provide the stress-strain response at Gauss points, calibrated to predict the behavior of Ganga sand. The model, analyzed with tied-degree-of-freedom contact, reveals that the system responses align with shake-table experimental results. An attempt is made to study the responses of the footing, the structure, and the geosynthetics with unreinforced and reinforced bases under varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 178
9223 A Regional Analysis on Co-movement of Sovereign Credit Risk and Interbank Risks

Authors: Mehdi Janbaz

Abstract:

The global financial crisis and the credit crunch that followed magnified the importance of credit risk management and its crucial role in the stability of all financial sectors and of the system as a whole. Many believe that risks faced by the sovereign sector are highly interconnected with banking risks and likely to trigger and reinforce each other. This study aims to examine (1) the impact of banking and interbank risk factors on the sovereign credit risk of the Eurozone, and (2) how EU Credit Default Swap spread dynamics are affected by crude oil price fluctuations. The hypotheses are tested by employing suitable risk measures and a four-stage linear modeling approach. The sovereign senior 5-year Credit Default Swap spreads are used as the core measure of credit risk. The monthly time-series data of the variables used in the study are gathered from the DataStream database for the period 2008-2019. First, a linear model tests the impact of regional macroeconomic and market-based factors (STOXX, VSTOXX, Oil, Sovereign Debt, and Slope) on the CDS spread dynamics. Second, the bank-specific factors, including the LIBOR-OIS spread (the difference between the Euro 3-month LIBOR rate and the Euro 3-month overnight index swap rate) and Euribor, are added to the most significant factors of the previous model. Third, the global financial factors, including EUR/USD foreign exchange volatility, the TED spread (the difference between the 3-month T-bill and the 3-month LIBOR rate based in US dollars), and the Chicago Board Options Exchange (CBOE) Crude Oil Volatility Index, are added to the major significant factors of the first two models. Finally, a model is generated from a combination of the major factors of each variable set, in addition to the crisis dummy. 
The findings show that (1) the explanatory power of LIBOR-OIS on the sovereign CDS spread of the Eurozone is very significant, and (2) there is a meaningful negative co-movement between the crude oil price and the CDS price of the Eurozone. Surprisingly, adding the TED spread to the third and fourth models, alongside the LIBOR-OIS spread, increased the predictive power of LIBOR-OIS. Based on the results, LIBOR-OIS, STOXX, the TED spread, Slope, the oil price, OVX, FX volatility, and Euribor are the determinants of CDS spread dynamics in the Eurozone. Moreover, the positive impact of the crisis period on the creditworthiness of the Eurozone is meaningful.

Keywords: CDS, crude oil, interbank risk, LIBOR-OIS, OVX, sovereign credit risk, TED

Procedia PDF Downloads 131
9222 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving

Authors: Yasin Tadayonrad

Abstract:

Most supply chain networks have many nodes, from the suppliers' side to the customers' side, where each node sends/receives raw materials/products to/from other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network such that all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node may differ per product group. In most supply chain networks (especially in the fast-moving consumer goods industry), different planners/planning teams work separately at different nodes to determine the loading/unloading timeslots in source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering lead times), and the working days of each node. The model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, which helps to apply the model to larger data sets in real business cases, including more nodes and shipments.
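The heuristic idea mentioned above can be sketched as a simple greedy rule: process shipments in due-date order and assign each one the earliest timeslot at which both its source and destination node still have spare capacity, counting any overshoot of the due slot as delay. This is a minimal illustration, not the paper's algorithm; all data structures and names are hypothetical:

```python
from collections import defaultdict

def schedule(shipments, capacity, horizon):
    """Greedy heuristic: give each shipment (sorted by due slot) the earliest
    timeslot where both its source and destination have spare capacity.
    shipments: list of (source, dest, due_slot); capacity: max loads per
    node per slot. Returns (assignments, total_delay)."""
    load = defaultdict(int)                  # (node, slot) -> used capacity
    assignments, total_delay = [], 0
    for src, dst, due in sorted(shipments, key=lambda s: s[2]):
        slot = next(t for t in range(horizon)
                    if load[(src, t)] < capacity and load[(dst, t)] < capacity)
        load[(src, slot)] += 1
        load[(dst, slot)] += 1
        assignments.append((src, dst, slot))
        total_delay += max(0, slot - due)    # lateness against the due slot
    return assignments, total_delay

# Two suppliers, one warehouse, capacity of 1 load per node per slot:
shipments = [("S1", "W", 0), ("S2", "W", 0), ("S1", "W", 1)]
plan, delay = schedule(shipments, capacity=1, horizon=10)
print(plan, delay)  # the warehouse bottleneck pushes two shipments late
```

An exact MIP formulation would replace the greedy slot choice with binary assignment variables per (shipment, slot) and capacity constraints per (node, slot); the greedy version trades optimality for scalability on larger instances.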

Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming

Procedia PDF Downloads 83
9221 Valorization of a Forest Waste, Modified P-Brutia Cones, by Biosorption of Methyl Green

Authors: Derradji Chebli, Abdallah Bouguettoucha, Abdelbaki Reffas, Khalil Guediri, Abdeltif Amrane

Abstract:

The removal of Methyl Green dye (MG) from aqueous solutions using modified P-brutia cones (PBH and PBN) has been investigated in this work. The effects of physical parameters such as pH, temperature, initial MG concentration, and ionic strength on the sorption of the dye were examined in batch experiments. Adsorption of MG was conducted at the natural pH of 4.5 because the dye is only stable in the pH range 3.8 to 5. It was observed that the P-brutia cones treated with NaOH (PBN) exhibited higher affinity and adsorption capacity for MG than the P-brutia cones treated with HCl (PBH), and the biosorption capacity of both modified adsorbents (PBN and PBH) was enhanced by increasing the temperature. This is confirmed by the thermodynamic parameters (ΔG° and ΔH°), which show that the adsorption of MG was spontaneous and endothermic in nature. The positive values of ΔS° suggest an increase in randomness for both adsorbents (PBN and PBH) during the adsorption process. The pseudo-first-order, pseudo-second-order, and intraparticle diffusion kinetic models were examined to analyze the sorption process; the pseudo-second-order model best describes the adsorption of MG on PBN and PBH, with a correlation coefficient R² > 0.999. Ionic strength was shown to have a negative impact on the adsorption of MG on both supports. A reduction of 68.5% in adsorption capacity at Ce = 30 mg/L was found for PBH, while PBN did not show a significant influence of ionic strength on adsorption, especially in the presence of NaCl. Among the tested isotherm models, the Langmuir isotherm was found to be the most relevant to describe MG sorption onto modified P-brutia cones, with a correlation factor R² > 0.999. The adsorption capacity of P-brutia cones for the removal of the dye MG from aqueous solution was thus confirmed. We also note that P-brutia cones are widely available in forests and constitute a low-cost biomaterial.
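As an illustration of the isotherm analysis above, the Langmuir model qe = qmax·KL·Ce / (1 + KL·Ce) is commonly fitted in its linearized form Ce/qe = Ce/qmax + 1/(qmax·KL), where the slope and intercept of a Ce/qe-versus-Ce plot recover qmax and KL. A sketch on synthetic equilibrium data — the parameter values are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical "true" Langmuir parameters used to generate synthetic data
q_max_true, K_L_true = 120.0, 0.05                      # mg/g, L/mg
Ce = np.array([5, 10, 20, 40, 80, 160], dtype=float)    # equilibrium conc., mg/L
qe = q_max_true * K_L_true * Ce / (1 + K_L_true * Ce)   # equilibrium uptake, mg/g

# Linearized Langmuir: Ce/qe = Ce/q_max + 1/(q_max * K_L)
y = Ce / qe
slope, intercept = np.polyfit(Ce, y, 1)   # straight-line fit of Ce/qe vs Ce
q_max_fit = 1.0 / slope                   # mg/g
K_L_fit = slope / intercept               # L/mg

print(q_max_fit, K_L_fit)
```

On noise-free data the fit recovers the generating parameters exactly; on real batch data the R² of this line is what the abstract reports as the Langmuir correlation factor.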

Keywords: adsorption, p-brutia cones, forest wastes, dyes, isotherm

Procedia PDF Downloads 362