Search results for: vector error correction model (VECM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18908


9218 Triangular Libration Points in the R3BP under Combined Effects of Oblateness, Radiation and Power-Law Profile

Authors: Babatunde James Falaye, Shi Hai Dong, Kayode John Oyewumi

Abstract:

We study the effects of oblateness up to J4 of the primaries and a power-law density profile (PDP) on the linear stability of the libration locations of an infinitesimal mass within the framework of the restricted three-body problem (R3BP), using a more realistic model in which a disc with a PDP rotates around the common center of mass of the system with perturbed mean motion. The existence and stability of the triangular equilibrium points have been explored. It has been shown that the triangular equilibrium points are stable for 0 < μ < μc and unstable for μc ≤ μ ≤ 1/2, where μc denotes the critical mass parameter. We find that the oblateness up to J2 of the primaries and the radiation reduce the stability range, while the oblateness up to J4 of the primaries increases the size of the stability region, both when the PDP is considered and when it is ignored. The PDP reduces μc by about 0.01 in the application to the Earth-Moon and Jupiter-Moons systems. We find that the comprehensive effects of the perturbations have a stabilizing proclivity. However, the oblateness up to J2 of the primaries and the radiation of the primaries have a tendency toward instability, while coefficients up to J4 of the primaries have a stability predisposition. In the limiting case c = 0, and also by setting the appropriate parameter(s) to zero, our results are in excellent agreement with the ones obtained previously. Libration points play a very important role in space missions, and as a consequence, our results have practical applications in space dynamics and related areas. The model may be applied to study the navigation and station-keeping operations of spacecraft (the infinitesimal mass) around the Jupiter (more massive)-Callisto (less massive) system, where the PDP accounts for the circumsolar ring of asteroidal dust, which has a cloud of dust permanently in its wake.
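In the limiting classical case (no oblateness, radiation, or disc), the critical mass parameter reduces to the Routh value, which makes a convenient sanity check; a minimal sketch (the Earth-Moon mass ratio used below is a standard value, not taken from the abstract):

```python
import math

# Classical Routh critical mass ratio for the circular restricted
# three-body problem (no oblateness, radiation, or disc):
# mu_c = (1 - sqrt(69)/9) / 2 ~= 0.03852
mu_c = 0.5 * (1.0 - math.sqrt(69.0) / 9.0)

def triangular_points_stable(mu: float) -> bool:
    """Linear stability of L4/L5 in the classical CR3BP."""
    return 0.0 < mu < mu_c

# Earth-Moon mass ratio mu ~= 0.01215 lies below mu_c, so L4/L5 are stable
print(round(mu_c, 5))                     # 0.03852
print(triangular_points_stable(0.01215))  # True
```

The perturbed models in the abstract shift this threshold up or down; the classical value is the baseline that the "excellent agreement" claim refers back to.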

Keywords: libration points, oblateness, power-law density profile, restricted three-body problem

Procedia PDF Downloads 328
9217 The Impact of an Improved Strategic Partnership Programme on Organisational Performance and Growth of Firms in the Internet Protocol Television and Hybrid Fibre-Coaxial Broadband Industry

Authors: Collen T. Masilo, Brane Semolic, Pieter Steyn

Abstract:

The Internet Protocol Television (IPTV) and Hybrid Fibre-Coaxial (HFC) broadband industrial sector landscape is rapidly changing, and organisations within the industry need to stay competitive by exploring new business models so that they are able to offer new services and products to customers. The business challenge in this industrial sector is meeting or exceeding high customer expectations across multiple content delivery modes. The increasing challenges in the IPTV and HFC broadband industrial sector encourage service providers to form strategic partnerships with key suppliers, marketing partners, advertisers, and technology partners. The need to form enterprise collaborative networks poses a challenge for any organisation in this sector in selecting the right strategic partners: partners who will ensure that the organisation’s services and products are marketed in new markets, who will ensure that customers are efficiently supported by meeting and exceeding their expectations, and who will represent the organisation in a positive manner and contribute to improving its performance. Companies in the IPTV and HFC broadband industrial sector tend to form informal partnerships with suppliers, vendors, system integrators and technology partners. Generally, partnerships are formed without a thorough analysis of the real reason a company is forming collaborations, without proper evaluation of prospective partners using specific selection criteria, and with ineffective performance monitoring of partners to ensure that a firm gains real long-term benefits from its partners and gains competitive advantage. Similar tendencies are illustrated in the research case study, which is based on Skyline Communications, a global leader in end-to-end, multi-vendor network management and operational support systems (OSS) solutions.
The organisation’s flagship product is the DataMiner network management platform, used by many operators across multiple industries; it can be described as a smart system that intelligently manages complex technology ecosystems for its customers in the IPTV and HFC broadband industry. The approach of the research is to develop the most efficient business model that can be deployed to improve a strategic partnership programme in order to significantly improve the performance and growth of organisations participating in a collaborative network in the IPTV and HFC broadband industrial sector. This involves proposing and implementing a new strategic partnership model, and its main features within the industry, which should bring significant benefits for all involved companies, helping them achieve added value and an optimal growth strategy. The proposed business model has been developed based on research into existing relationships, value chains and business requirements in this industrial sector and validated at 'Skyline Communications'. The outputs of the business model have been demonstrated and evaluated in the research business case study of the IPTV and HFC broadband service provider 'Skyline Communications'.

Keywords: growth, partnership, selection criteria, value chain

Procedia PDF Downloads 136
9216 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm

Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy

Abstract:

IoT networks today solve various consumer problems, from home automation systems to aiding the driving of autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network are also short-lived, detaching from and joining the network frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, as a conventional static way of managing the data does not work in fog networks. The proposed solution is a protocol that works by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog node using reinforcement learning, so that access to the data is determined dynamically based on the requests.
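A minimal sketch of the sensitivity-level idea described above. The level names, clearance model, and API are illustrative assumptions, not the authors' protocol (which additionally covers replication and reinforcement-learning-driven access decisions):

```python
from dataclasses import dataclass, field

# Hypothetical sensitivity levels for data held on a fog node
LEVELS = {"public": 0, "internal": 1, "sensitive": 2}

@dataclass
class FogNode:
    clearance: str                      # highest level this node may hold
    store: dict = field(default_factory=dict)

    def write(self, key, value, level):
        # A node may only hold data up to its own clearance level
        if LEVELS[level] > LEVELS[self.clearance]:
            raise PermissionError(f"node cannot hold {level} data")
        self.store[key] = (value, level)

    def read(self, key, requester_clearance):
        # A requester may only read data at or below its clearance
        value, level = self.store[key]
        if LEVELS[requester_clearance] < LEVELS[level]:
            raise PermissionError("requester clearance too low")
        return value

node = FogNode(clearance="internal")
node.write("temp", 21.5, "public")
print(node.read("temp", "public"))  # 21.5
```

The static check above is exactly what the abstract argues is insufficient on its own; the proposed model would replace the fixed clearance comparison with a learned, request-dependent policy.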

Keywords: IoT, fog networks, data stewardship, dynamic access policy

Procedia PDF Downloads 64
9215 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, contact between the desulfurization liquid sprayed from the nozzles and the flue gas flow triggers chemical reactions that remove sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the mixing level between the flue gas and the desulfurization liquid. In order to significantly improve the desulfurization efficiency, the mixing efficiency and the residence time can be increased by perforated sieve trays. Hence, the purpose of this research is to investigate the flow structure of sieve trays for flue gas desulfurization by numerical simulation. In this study, there is an outlet at the top of the FGD tower to discharge the clean gas, and the FGD tower has a deep tank at the bottom, which is used to collect the slurry liquid. In the major desulfurization zone, the desulfurization liquid and flue gas form a complex mixing flow. There are four perforated plates in the major desulfurization zone, spaced 0.4 m from each other, and a spray array with 33 nozzles is placed above the top sieve tray. Each nozzle injects desulfurization liquid consisting of Mg(OH)2 solution. On each sieve tray, the outside diameter, the hole diameter, and the porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas flows into the FGD tower through the space between the major desulfurization zone and the deep tank and finally leaves clean. The desulfurization liquid and the liquid slurry go to the bottom tank and are discharged as waste. When the desulfurization solution flow impacts the sieve tray, the downward momentum is transferred to the upper surface of the sieve tray.
As a result, a thin liquid layer develops above the sieve tray, the so-called slurry layer. The volume fraction value within the slurry layer is around 0.3 to 0.7. Therefore, the liquid phase cannot be considered a discrete phase under the Eulerian-Lagrangian framework. Besides, there is a liquid column through the sieve trays. The downward liquid column becomes narrow as it interacts with the upward gas flow. After the flue gas flows into the major desulfurization zone, the flow direction of the flue gas is upward (+y) in the passage between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down to the slurry layer, developing a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays results in a sufficiently large two-phase contact area. It also increases the number of times the flue gas interacts with the desulfurization liquid. In this way, the sieve trays improve the two-phase mixing, which may improve the SO2 removal efficiency.
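The tray geometry quoted above pins down the number of holes; a quick consistency check (assuming the porosity is the open-area fraction and the holes are uniform circles):

```python
import math

# Sieve-tray geometry from the abstract: outer diameter 0.6 m,
# hole diameter 20 mm, porosity (taken as open-area fraction) 34.3%
D_tray = 0.6      # m
d_hole = 0.020    # m
porosity = 0.343

tray_area = math.pi * (D_tray / 2) ** 2
hole_area = math.pi * (d_hole / 2) ** 2

# Number of holes implied by the open-area fraction
n_holes = porosity * tray_area / hole_area
print(round(n_holes))  # about 309 holes per tray
```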

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 289
9214 Localization of Pyrolysis and Burning of Ground Forest Fires

Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov

Abstract:

This paper presents the results of experiments carried out at a specialized test site to establish macroscopic patterns of heat and mass transfer processes when localizing model combustion sources of ground forest fires with the use of barrier lines, in the form of a wetted layer of material in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a comprehensive analysis of the effect of the dispersion of the water aerosol (concentration and size of droplets) used to form the barrier line. It is shown that effective conditions for localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing burning and thermal decomposition of forest combustible materials. We predicted the necessary and sufficient parameters of barrier lines (water volume, width and depth of the wetted layer of the material, specific irrigation density) for combustion sources of different dimensions, corresponding to real fire-extinguishing practice.

Keywords: forest fire, barrier water lines, pyrolysis front, flame front

Procedia PDF Downloads 140
9213 Adolescents' Psychological Well-Being in Relation to Bullying/CB Victimization: The Mediating Effect of Resilience and Self-Concept

Authors: Dorit Olenik-Shemesh, Tali Heiman

Abstract:

Aggressive peer behaviors, particularly bullying and cyberbullying (CB) victimization during adolescence, are strongly and consistently linked to decreased levels of subjective well-being, potentially hindering a healthy and consistent developmental process. These negative effects might be expressed in emotional, physical, and behavioral difficulties. Adolescent victims of bullying/CB present more depressive moods, more loneliness, and more suicidal thoughts, while adolescents who have never been victims of bullying and CB acts present higher levels of well-being. These difficulties in their lives may be both a consequence of and a partial explanation for bullying/CB victimization. Interpersonal behavior styles and psychosocial factors may interact to create a vicious cycle in which adolescents place themselves at risk, which might explain the reduced well-being reported among victims. Yet, to the best of our knowledge, almost no study has examined the effect of two key variables in adolescents' lives, resilience and self-concept, on the relationship between bullying/CB victimization and low levels of psychological well-being among adolescents. Resilience is defined as the individual's capacity to maintain stable functioning and make adjustments in the face of adversity; a capacity that promotes efficient coping with environmental stressors and protects from psycho-social difficulties when facing various challenges. Self-concept, a collection of beliefs about oneself, relates to the way we perceive ourselves and is influenced by many forces, including our interactions with our surroundings. Accordingly, the current study examined the possible mediating effect of these two main positive personal variables, resilience and self-concept, through a mediation model analysis. 507 middle school students aged 11–16 (53% boys, 47% girls) completed questionnaires regarding bullying and CB behaviors, psychological well-being, resilience, and self-concept.
A mediation model analysis was performed, and the hypothesized mediation model was accepted in full. More specifically, it was found that both self-concept and resilience mediated the relationship between bullying/CB victimization and a sense of well-being. High levels of both variables might buffer against a potential decrease in well-being associated with youth bullying/CB victimization. No gender differences were found, except for a slightly stronger effect of resilience on well-being for boys. The study results suggest focusing on specific positive personal variables when developing youth intervention programs, creating an infrastructure for new programs that address increasing resilience and self-concept in school and family-school contexts. Such revamped programs could diminish bullying/CB acts and their harmful negative implications for youth well-being. Future studies incorporating longitudinal data may further deepen the understanding of the relationships examined here.
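A single-mediator analysis of this kind can be sketched with two regressions; the simulated data and effect sizes below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Generic single-mediator sketch: victimization X lowers resilience M,
# and M raises well-being Y. The indirect effect is a*b; the direct
# effect is c' in Y = c'X + bM + e. All coefficients are made up.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)                       # bullying/CB victimization
M = -0.5 * X + rng.normal(size=n)            # resilience (mediator)
Y = -0.2 * X + 0.6 * M + rng.normal(size=n)  # well-being

def ols(y, *cols):
    """Least-squares fit with intercept; returns the slope coefficients."""
    A = np.column_stack([np.ones_like(y), *cols])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    return beta[1:]

a = ols(M, X)[0]           # X -> M path
c_prime, b = ols(Y, X, M)  # direct effect and M -> Y path
print(f"indirect effect a*b = {a * b:.2f}, direct effect = {c_prime:.2f}")
```

A negative a combined with a positive b yields a negative indirect effect: victimization lowers well-being partly through reduced resilience, which is the mediation pattern the study reports.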

Keywords: adolescents, well being, bullying/CB victimization, resilience, self-concept

Procedia PDF Downloads 18
9212 Analyzing the Support to Fisheries in the European Union: Modelling Budgetary Transfers in Wild Fisheries

Authors: Laura Angulo, Petra Salamon, Martin Banse, Frederic Storkamp

Abstract:

Fisheries subsidies focus on reducing management costs or delivering income benefits to fishers. In 2015, total fishery budgetary transfers in 31 OECD countries represented 35% of their total landing value. However, subsidies to fishing have adverse effects on trade, and it has been claimed that they may contribute directly to overfishing. Therefore, this paper analyses to what extent fisheries subsidies may 1) influence capture production facing quotas and 2) affect price dynamics. The study uses the fish module in AGMEMOD (Agriculture Member States Modelling; for details, see Chantreuil et al. (2012)), which covers eight fish categories (cephalopods; crustaceans; demersal marine fish; pelagic marine fish; molluscs excl. cephalopods; other marine finfish species; freshwater and diadromous fish) for EU member states and other selected countries, developed under the SUCCESS project. This model incorporates transfer payments directly linked to fisheries operational costs. As aquaculture and wild fishery are not included within the WTO Agreement on Agriculture, data on fisheries subsidies are obtained from the OECD Fisheries Support Estimates (FSE) database, which provides statistics on budgetary transfers to the fisheries sector. Since support has been moving from budgetary transfers to the General Services Support Estimate in recent years, subsidies in capture production may not show substantial effects. Nevertheless, they would still show the impact across countries and fish categories within the European Union.
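The first question (how subsidies interact with quotas) can be illustrated with a deliberately simple toy supply function; this is not AGMEMOD itself, and all numbers are hypothetical:

```python
# Toy illustration: a per-unit subsidy s lowers the effective marginal
# cost of fishing, raising supply until the quota (TAC) binds, after
# which extra support can no longer expand capture production.
def capture_supply(price, cost, subsidy, slope=10.0, quota=100.0):
    """Linear supply q = slope * (price - cost + subsidy), capped at the quota."""
    q = max(0.0, slope * (price - cost + subsidy))
    return min(q, quota)

print(capture_supply(price=12.0, cost=8.0, subsidy=0.0))  # 40.0 (unconstrained)
print(capture_supply(price=12.0, cost=8.0, subsidy=2.0))  # 60.0 (subsidy raises catch)
print(capture_supply(price=12.0, cost=8.0, subsidy=8.0))  # 100.0 (quota binds)
```

Once the quota binds, further support changes fishers' income and prices rather than quantities, which is why the paper examines both channels separately.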

Keywords: AGMEMOD, budgetary transfers, EU Member States, fish model, fisheries support estimate

Procedia PDF Downloads 252
9211 Defining Priority Areas for Biodiversity Conservation to Support for Zoning Protected Areas: A Case Study from Vietnam

Authors: Xuan Dinh Vu, Elmar Csaplovics

Abstract:

There has been an increasing need for methods to define priority areas for biodiversity conservation, since the effectiveness of biodiversity conservation in protected areas largely depends on the availability of material resources. The identification of priority areas requires the integration of biodiversity data together with social data on human pressures and responses. However, the deficit of comprehensive data and reliable methods becomes a key challenge in zoning where the demand for conservation is most urgent and where the outcomes of conservation strategies can be maximized. In order to fill this gap, the study applied the Condition–Pressure–Response environmental model to suggest a set of criteria for identifying priority areas for biodiversity conservation. Our empirical data were compiled from 185 respondents, categorized into three main groups: governmental administration, research institutions, and protected areas in Vietnam, using a well-designed questionnaire. Then, Analytic Hierarchy Process (AHP) theory was used to identify the weight of each criterion. Our results show that the priority level for biodiversity conservation can be identified by three main indicators: condition, pressure, and response, with weights of 26%, 41%, and 33%, respectively. Based on the three indicators, 7 criteria and 15 sub-criteria were developed to support defining priority areas for biodiversity conservation and zoning protected areas. In addition, our study also revealed that the governmental administration and protected area groups put a focus on the 'Pressure' indicator, while the research institutions group emphasized the importance of the 'Response' indicator in the evaluation process. Our results provide recommendations for applying the developed criteria to identify priority areas for biodiversity conservation in Vietnam.
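The AHP weighting step can be sketched as follows; the pairwise comparison matrix below is a hypothetical illustration (not the study's survey data), chosen to roughly reproduce the reported weights:

```python
import numpy as np

# AHP weight sketch: weights are the principal eigenvector of a
# reciprocal pairwise comparison matrix, normalized to sum to 1.
# Indicator order: condition, pressure, response.
A = np.array([
    [1.0, 1 / 2, 1.0],
    [2.0, 1.0, 1.0],
    [1.0, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print(np.round(weights, 2))  # roughly matches the reported 26% / 41% / 33%
```

In a full AHP application one would also compute the consistency ratio of each respondent's matrix before aggregating; that step is omitted here.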

Keywords: biodiversity conservation, condition–pressure–response model, criteria, priority areas, protected areas

Procedia PDF Downloads 177
9210 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods.
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracy varies between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
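Two of the baseline encodings compared above can be sketched compactly; this toy example (a real pipeline would parse the GenBank files instead) shows one-hot and k-mer count encodings:

```python
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (len, 4) binary matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4), dtype=np.int8)
    for i, base in enumerate(seq):
        out[i, idx[base]] = 1
    return out

def kmer_counts(seq, k=3):
    """Count vector over all 4**k possible k-mers (overlapping windows)."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    idx = {m: i for i, m in enumerate(kmers)}
    out = np.zeros(4 ** k, dtype=np.int32)
    for i in range(len(seq) - k + 1):
        out[idx[seq[i:i + k]]] += 1
    return out

seq = "ACGTAC"
print(one_hot(seq).shape)      # (6, 4)
print(kmer_counts(seq).sum())  # 4 overlapping 3-mers in a 6-mer
```

One-hot preserves positional information at the cost of a length-dependent shape, while k-mer counts give a fixed-length vector regardless of sequence length, which is why normalization to a common length precedes the position-sensitive encodings.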

Keywords: DNA encoding, machine learning, Fourier transform

Procedia PDF Downloads 31
9209 Views from Shores Past: Palaeogeographic Reconstructions as an Aid for Interpreting the Movement of Early Modern Humans on and between the Islands of Wallacea

Authors: S. Kealy, J. Louys, S. O’Connor

Abstract:

The island archipelago that stretches between the continents of Sunda (Southeast Asia) and Sahul (Australia-New Guinea), comprising much of modern-day Indonesia as well as Timor-Leste, represents the biogeographic region of Wallacea. The islands of Wallacea are significant archaeologically, as they have never been connected to the mainlands of either Sunda or Sahul, and thus the colonization by early modern humans of these islands, and subsequently of Australia and New Guinea, would have necessitated some form of water crossing. Accurate palaeogeographic reconstructions of the Wallacean Archipelago for this time are important not only for modelling likely routes of colonization but also for reconstructing likely landscapes and hence the resources available to the first colonists. Here we present five digital reconstructions of the coastal outlines of Wallacea and Sahul (Australia and New Guinea) for the periods 65, 60, 55, 50, and 45 thousand years ago, using the latest bathymetric chart and a sea-level model adjusted to account for the average uplift rate known from Wallacea. These data were also used to reconstruct island areal extent as well as topography for each time period. These reconstructions allowed us to determine the distance from the coast and the relative elevation of the earliest archaeological sites for each island where such records exist. This enabled us to approximate how much effort the exploitation of coastal resources would have taken for early colonists, and how important such resources were. These reconstructions also allowed us to estimate visibility for each island in the archipelago, and to model how intervisible each island was during the period of likely human colonisation. We demonstrate how these models provide archaeologists with an important basis for visualising this ancient landscape and interpreting how it was originally viewed, traversed and exploited by its earliest modern human inhabitants.
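Intervisibility between islands can be approximated from elevations alone; a sketch using the standard geometric-horizon formula d ≈ 3.57√h (d in km, h in metres; atmospheric refraction ignored, elevations hypothetical):

```python
import math

def horizon_km(elevation_m):
    """Geometric distance to the sea-level horizon from a given elevation."""
    return 3.57 * math.sqrt(elevation_m)

def intervisible(h1_m, h2_m, separation_km):
    """Two points see each other when their horizon circles overlap."""
    return horizon_km(h1_m) + horizon_km(h2_m) >= separation_km

# Hypothetical example: a 500 m peak viewed from a 100 m ridge on another island
print(round(horizon_km(500.0), 1))        # ~79.8 km
print(intervisible(500.0, 100.0, 110.0))  # True
```

Because horizon distance grows with the square root of elevation, the reconstructed island topographies matter as much as the reconstructed coastlines when modelling which crossings were within sight.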

Keywords: Wallacea, palaeogeographic reconstructions, islands, intervisibility

Procedia PDF Downloads 214
9208 The Differences and the Similarities between Corporate Governance Principles in Islamic Banks and Conventional Banks

Authors: Osama Shibani

Abstract:

Effective corporate governance is critical to the proper functioning of the banking sector and the economy as a whole. The Basel Committee has issued principles of corporate governance inspired by those of the Organisation for Economic Co-operation and Development (OECD), but there is no single model of corporate governance that can work well in every country; each country, or even each organisation, should develop its own model to cater for its specific needs and objectives. Corporate governance in Islamic institutions is unique, offering a particular structure and guided by a control body, the Shariah Supervisory Board (SSB). For this reason, the Islamic Financial Services Board (IFSB) in Malaysia has amended the BCBS corporate governance principles to suit the nature of the work of Islamic institutions. This paper highlights these amendments using a comparative analysis method, in the context of the differences in corporate governance structure between Islamic banks and conventional banks. We find a few differences between the principles (Principle 1: The board's overall responsibilities; Principle 3: The board's own structure and practices; Principle 9: Compliance; Principle 10: Internal audit; Principle 12: Disclosure and transparency), and there are similarities between the principles (Principle 2: Board qualifications and composition; Principle 4: Senior management (composition and tasks); Principle 6: Risk management; Principle 8: Risk communication). Finally, we find that the corporate governance principles issued by the Islamic Financial Services Board (IFSB) complement the CG principles of the Basel Committee on Banking Supervision (BCBS), with some modifications to suit the composition of Islamic banks, and that the Basel Committee pays insufficient attention to Islamic banks.

Keywords: basel committee (BCBS), corporate governance principles, Islamic financial services board (IFSB), agency theory

Procedia PDF Downloads 301
9207 Structure Clustering for Milestoning Applications of Complex Conformational Transitions

Authors: Amani Tahat, Serdal Kirmizialtin

Abstract:

Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS), and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational state into natural subgroups based on their similarity; it is an essential statistical methodology used for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work, we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for studying cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model for studying conformational transitions; ii) the conformational space of nucleic acids is high dimensional. A diverse set of internal coordinates is necessary to describe the metastable states in nucleic acids, posing a challenge in studying the conformational transitions. Hence, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way, for a wide range of confused data conditions.
The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. A Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second data clustering methodology. The performance of the neural network and of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules.
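The single-linkage rule (merging the two clusters whose closest members are nearest) can be sketched compactly on a toy 1-D reaction coordinate with two metastable basins; this is illustrative only, far below the dimensionality of real AA conformations:

```python
# Minimal single-linkage agglomerative clustering: repeatedly merge the
# two clusters whose closest members are nearest, until the requested
# number of clusters remains.
def single_linkage(points, n_clusters):
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(abs(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))
    return clusters

# Two metastable basins around 0.0 and 1.0 (e.g. cis / trans)
coords = [0.00, 0.05, 0.10, 0.95, 1.00, 1.05]
print(single_linkage(coords, 2))  # [[0, 1, 2], [3, 4, 5]]
```

The abstract's concern is precisely that in high-dimensional, noisy data the "closest members" criterion chains unrelated structures together, which is what the SOM alternative is meant to avoid.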

Keywords: milestoning, self organizing map, single linkage, structure clustering

Procedia PDF Downloads 225
9206 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas usually taught based on the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model in this study seems to be interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders mounted at the same height so that they are horizontally aligned; the cylinders rotate at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, since the magnitudes of the normal forces in the system are functions of the position, and the frictional forces, with a constant coefficient of friction, are proportional to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the phenomenon of the described frictional oscillator does not seem to be so straightforward after all; experiments show that the simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect.
The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to the idea of a more complex model that can describe the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles as well; this specific problem, however, demonstrates the key concept of the varying friction coefficient and its importance in an interesting and illustrative way. The position function of the rod is described by a more complicated and non-trivial, yet more precise, equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical function with the measured position function in time. The project provides useful materials and knowledge for undergraduate students and a new perspective in university physics education.
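A numerical sketch of such a frictional oscillator can be put together in a few lines; the geometry, the specific decreasing friction law mu(s), and every parameter value below are illustrative assumptions rather than the study's measured model. With a constant coefficient the equation reduces to harmonic motion, while a slip-velocity-dependent coefficient lets the amplitude settle on a stable non-harmonic oscillation.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81            # gravity (m/s^2)
L = 0.20            # half-distance between cylinder axes (m), assumed
v_s = 0.60          # surface speed of the cylinders, omega*R (m/s), assumed
mu0, k = 0.40, 0.5  # parameters of the assumed friction law

def mu(slip):
    # illustrative velocity-dependent friction coefficient (an assumption)
    return mu0 / (1.0 + k * abs(slip))

def rhs(t, y):
    x, v = y
    n1 = g * (L - x) / (2 * L)   # normal force per unit mass, left cylinder
    n2 = g * (L + x) / (2 * L)   # normal force per unit mass, right cylinder
    s1 = v_s - v                 # slip speed at the left contact (surface moves +x)
    s2 = v_s + v                 # slip speed at the right contact (surface moves -x)
    # friction on the rod acts along the slip direction of each cylinder surface
    a = mu(abs(s1)) * np.sign(s1) * n1 - mu(abs(s2)) * np.sign(s2) * n2
    return [v, a]

sol = solve_ivp(rhs, (0.0, 10.0), [0.05, 0.0], max_step=1e-3)
```

Setting k = 0 recovers the textbook harmonic case a = -mu0*g*x/L; with k > 0 the friction asymmetry pumps energy at small velocities, producing the non-harmonic limit-cycle behavior described above.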

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 195
9205 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELT) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and not repeatable. Recently, several studies have suggested neural networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using an NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e. 
the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, i.e. below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results call for further study with higher aberrations and noise.
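A minimal forward model for generating such training data can be sketched with a Fourier-optics toy: per-segment piston errors on a segmented pupil produce a focal-plane PSF whose intensity depends non-linearly on the phase. The three-sector geometry, image size, and piston values below are illustrative assumptions, not the actual telescope design.

```python
import numpy as np

N = 128
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r, theta = np.hypot(x, y), np.arctan2(y, x)
aperture = (r < N // 4).astype(float)

# crude three-sector "segmented" pupil; the geometry is purely illustrative
sectors = np.floor((theta + np.pi) / (2 * np.pi / 3)).astype(int) % 3

def psf(pistons_rad):
    """Focal-plane intensity for per-segment piston phases (radians)."""
    phase = np.asarray(pistons_rad, dtype=float)[sectors]
    field = aperture * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

img_ref = psf([0.0, 0.0, 0.0])    # perfectly phased pupil
img_err = psf([0.0, 0.5, -0.5])   # piston errors: one labelled training sample
```

Pairs like (img_err, pistons) are what a VGG-style regression network would be trained on; the piston errors lower the coherent on-axis peak, which is the non-linear intensity-phase relationship mentioned above.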

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 108
9204 Clustering for Detection of the Population at Risk of Anticholinergic Medication

Authors: A. Shirazibeheshti, T. Radwan, A. Ettefaghian, G. Wilson, C. Luca, Farbod Khanizadeh

Abstract:

Anticholinergic medication has been associated with adverse events such as falls, delirium, and cognitive impairment in older patients. To assess this further, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to group patients into multiple risk categories according to the anticholinergic burden scores of the medicines prescribed to them, in order to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on over 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. To further evaluate the performance of the model, associations between the average risk score within each group and other factors, such as socioeconomic status (i.e., Index of Multiple Deprivation) and an index of health and disability, were investigated. The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings also show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, indicating that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in well-equipped, artificial intelligence-enabled healthcare management systems.
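A sketch of the clustering stage, using synthetic weighted burden scores in place of the real prescription data; the score distribution, the bandwidth, and the cluster sizes below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# hypothetical per-patient weighted anticholinergic burden scores:
# most patients low risk, a small tail with high cumulative scores
scores = np.concatenate([
    rng.normal(1.0, 0.3, 500),   # low-risk bulk
    rng.normal(4.0, 0.5, 80),    # moderate risk
    rng.normal(9.0, 0.5, 15),    # high-risk tail
]).reshape(-1, 1)

ms = MeanShift(bandwidth=1.0).fit(scores)
labels, centers = ms.labels_, ms.cluster_centers_.ravel()

# patients falling in the cluster with the highest mean score
high = labels == np.argmax(centers)
```

In the deployed system, membership of the highest-center cluster is what flags patients for clinical review.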

Keywords: anticholinergic medicines, clustering, deprivation, socioeconomic status

Procedia PDF Downloads 216
9203 Impact of Unusual Dust Event on Regional Climate in India

Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad

Abstract:

A severe dust storm generated from a western disturbance over north Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically. The PM10 concentration peaked at a very high value of around 1018 μgm-3 during the dust storm hours of May 30, 2014 at New Delhi. The present study depicts aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high Aerosol Optical Depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the Single Scattering Albedo (SSA) and in the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event drives the optical state to be more absorbing. The single scattering albedo, refractive index, volume size distribution and asymmetry parameter (ASY) values suggested that dust aerosols were predominant over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in the radiative flux at the surface level caused significant cooling at the surface. Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent increase in surface cooling was evident, ranging from -31 Wm-2 to -82 Wm-2, together with an increase in atmospheric heating from 15 Wm-2 to 92 Wm-2 and a change from -2 Wm-2 to 10 Wm-2 at the top of the atmosphere.
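The Angstrom exponent reported above is conventionally derived from AOD at two wavelengths via tau(lambda) ∝ lambda^(-alpha). A short sketch of that relation; the second-wavelength AOD below is a hypothetical value chosen only to be consistent with the dust-dominated (low-alpha) conditions described:

```python
import math

def angstrom_exponent(tau1, tau2, lam1_nm, lam2_nm):
    """Angstrom exponent alpha from AOD at two wavelengths,
    assuming the power law tau(lam) ~ lam**(-alpha)."""
    return -math.log(tau1 / tau2) / math.log(lam1_nm / lam2_nm)

# observed AOD(500 nm) = 1.356 from the study; the 675 nm value is hypothetical
alpha = angstrom_exponent(1.356, 1.25, 500, 675)
```

A small alpha (well below 1) indicates coarse-mode particles, which is why the drop to 0.287 during the storm signals dust rather than fine anthropogenic aerosol.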

Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer

Procedia PDF Downloads 382
9202 Problem Solving Courts for Domestic Violence Offenders: Duluth Model Application in Spanish-Speaking Offenders

Authors: I. Salas-Menotti

Abstract:

Problem-solving courts were created to assist offenders with specific needs that were not addressed properly in traditional courts. Their main objective is to pursue solutions that benefit the offender, the victim, and society as well. These courts were developed as an innovative response to deal with issues such as drug abuse, mental illness, and domestic violence. In Brooklyn, men who are charged with domestic violence-related offenses for the first time are offered plea bargains that include attendance of a domestic abuse intervention program as a condition to dismiss the most serious charges and avoid incarceration. The desired outcome is that the offender will engage in a program that modifies his behavior, avoiding new incidents of domestic abuse; it requires accountability towards the victim; and it will hopefully bring down statistics related to domestic abuse incidents. This paper discusses the effectiveness of the Duluth model as applied to Spanish-speaking men mandated to participate in the program by the specialized domestic violence courts in Brooklyn. A longitudinal study was conducted with 243 Spanish-speaking men who were mandated to participate in the men's program offered by EAC in Brooklyn in the years 2016 through 2018, to determine the recidivism rate of domestic violence crimes. Results show that the recidivism rate was less than 5% per year after completing the program, which indicates that the intervention is effective in preventing new abuse allegations and subsequent arrests. It is recommended that a comparative study with English-speaking participants be conducted to determine cultural and language variables affecting the program's efficacy.

Keywords: domestic violence, domestic abuse intervention programs, problem-solving courts, Spanish-speaking offenders

Procedia PDF Downloads 136
9201 European Hinterland and Foreland: Impact of Accessibility, Connectivity, Inter-Port Competition on Containerization

Authors: Dial Tassadit Rania, Figueiredo De Oliveira Gabriel

Abstract:

In this paper, we investigate the relationship between ports and their hinterland and foreland environments, and the competitive relationship between the ports themselves. These two environments are changing, evolving and introducing new challenges for commercial and economic development at the regional, national and international levels. Because of the rise of containerization, shipping costs and port handling costs have decreased considerably due to economies of scale. The volume of maritime trade has increased substantially and the markets served by ports have expanded. On these bases, overlapping hinterlands can give rise to competition between ports. Our main contribution compared to the existing literature on this issue is to build a set of hinterland, foreland and competition indicators. Using these indicators, we investigate the effect of hinterland accessibility, foreland connectivity and inter-port competition on the containerized traffic of European ports. For this, we have a panel database covering the years 2004 to 2014. Our hinterland indicators are given by two indicators of accessibility; they describe the market potential of a port and are calculated using information on population and wealth (GDP). We then calculate population and wealth for different neighborhoods within a distance from a port ranging from 100 to 1000 km. For the foreland, we produce two indicators: port connectivity and the number of partners for each port. Finally, we compute two indicators of inter-port competition and a market concentration indicator (Hirschman-Herfindahl) for different neighborhood distances around the port. We then apply a fixed-effects model to test the relationship above. Again with a fixed-effects model, we perform a sensitivity analysis for each of these indicators to support the results obtained. 
The econometric results of the general model, given by the regression of the accessibility indicators, the LSCI for port i, and the inter-port competition indicator on the containerized traffic of European ports, show a positive and significant effect for accessibility to wealth but not to population. The results are positive and significant for the two indicators of connectivity and competition as well. One of the main results of this research is that port development, measured here by the increase of containerized traffic, is strongly related to the development of the port's hinterland and foreland environment. In addition, it is the market potential, given by the wealth of the hinterland, that has an impact on the containerized traffic of a port; accessibility to a large population pool is not important for understanding the dynamics of containerized port traffic. Furthermore, in order to continue to develop, a port must penetrate its hinterland to a depth exceeding 100 km around the port and seek markets beyond this perimeter. Port authorities could focus their marketing efforts on the immediate hinterland, which, as the results show, may not be captive, and thus adopt new approaches to port governance to make it more attractive.
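The market concentration indicator can be illustrated with a short Hirschman-Herfindahl computation over port traffic shares within a given neighborhood; the traffic volumes below are invented placeholders, not data from the study.

```python
def herfindahl(volumes):
    """Hirschman-Herfindahl index: sum of squared traffic shares.
    Ranges from 1/n (n equal competitors) to 1 (monopoly)."""
    total = sum(volumes)
    shares = [v / total for v in volumes]
    return sum(s * s for s in shares)

# hypothetical container traffic (TEU) of four ports sharing a hinterland area
hhi = herfindahl([3.2e6, 1.1e6, 0.6e6, 0.3e6])
```

Computed over different neighborhood radii around each port, the index captures how contested (low HHI) or captive (high HHI) a hinterland is.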

Keywords: accessibility, connectivity, European containerization, European hinterland and foreland, inter-port competition

Procedia PDF Downloads 197
9200 Measuring Principal and Teacher Cultural Competency: A Need Assessment of Three Proximate PreK-5 Schools

Authors: Teresa Caswell

Abstract:

Throughout the United States and within a myriad of demographic contexts, students of color experience the results of systemic inequities as an academic outcome. These disparities continue despite the increased resources provided to students and the ongoing instruction-focused professional learning received by teachers. The researcher postulated that lower levels of educator cultural competency are an underlying factor in why resource and instructional interventions are less effective than desired. Before implementing any type of intervention, however, cultural competency needed to be confirmed as a factor in schools demonstrating academic disparities between racial subgroups. A needs assessment was designed to measure levels of individual beliefs, including cultural competency, in both principals and teachers at three neighboring schools verified to have academic disparities. The resulting mixed-method study utilized the Optimal Theory Applied to Identity Development (OTAID) model to measure cultural competency quantitatively, through self-identity inventory survey items, with teachers, and qualitatively, through one-on-one interviews, with each school’s principal. A joint display was utilized to view the combined data within and across school contexts. Each school was confirmed to have misalignments between principal and teacher levels of cultural competency beliefs, while the data also indicated that a number of participants in the self-identity inventory survey may have intentionally skipped items referencing the term oppression. Additional use of the OTAID model and self-identity inventory in future research and across contexts is needed to determine their transferability and dependability as cultural competency measures.

Keywords: cultural competency, identity development, mixed-method analysis, needs assessment

Procedia PDF Downloads 156
9199 Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing

Authors: Robin Molinier

Abstract:

Industrial symbiosis (I.S.) deals with both substitution synergies (exchange of waste materials, fatal energy and utilities as resources for production) and infrastructure/service sharing synergies. The latter is based on the intensification of use of an asset and thus requires balancing capital cost increments against snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and pricing schedule) to commit toward investments in capacities and transactions. Our model investigates the decision of two actors trying to choose cooperatively a level of infrastructure capacity oversizing in order to set a plug-and-play offer to a potential entrant, whose capacity requirement is randomly distributed, while satisficing their own requirements. Capacity cost exhibits a sub-additive property, so there is room for profitable overcapacity setting in the first period. The entrant’s willingness-to-pay for access to the infrastructure depends on its standalone cost and on the capacity gap that it must fill in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractible from the entrant, we derive the expected complement cost function, which helps us define the investors’ objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant’s requirements. We then derive the general form of solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy that a public actor would apply to foster such sharing synergies.
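For the uniform-distribution case mentioned above, the expected unmet requirement E[max(D - K, 0)] has a simple closed form that is indeed decreasing and convex in the spare capacity K; the sketch below checks it against Monte Carlo. The notation and the numbers are illustrative assumptions, not the paper's calibration.

```python
import random

def expected_complement_uniform(K, Dmax):
    """E[max(D - K, 0)] for an entrant requirement D ~ Uniform(0, Dmax)."""
    if K >= Dmax:
        return 0.0
    return (Dmax - K) ** 2 / (2 * Dmax)

# Monte Carlo check of the closed form (illustrative parameter values)
random.seed(1)
Dmax, K = 100.0, 40.0
n = 200_000
mc = sum(max(random.uniform(0, Dmax) - K, 0.0) for _ in range(n)) / n
```

The decreasing, convex shape of this curve is what drives the trade-off between oversizing cost and the expected yield extractible from the entrant.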

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 219
9198 The Roman Fora in North Africa Towards a Supportive Protocol to the Decision for the Morphological Restitution

Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda

Abstract:

This research delves into the fundamental question of the morphological restitution of built archaeology, in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of study, its analysis, and the methodology for solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of archaeological information systems whose elaboration involves the interplay of several disciplines. The built archaeological topic is also an architectural and morphological object: a set of articulated elementary data, the understanding of which can be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the interchange between the different disciplines uses the capacity of each to frame the reflection on the incomplete elements of a given architecture, or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the game of searching for forms and articulations. The choice of the Roman Forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum involves a fairly large knowledge base but also provides the researcher with material to study, from a morphological and architectural point of view, starting from the scale of the city down to the architectural detail. 
The experimentation with the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which frames the experimentation from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence allowed us first to question aspects already known, in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation with the model elaborated during this research, a model framed by a paradigmatic discussion that tries to position the research in relation to existing paradigmatic and experimental knowledge on the issue.

Keywords: classical reasoning, logicist reasoning, archaeology, architecture, roman forum, morphology, calculation

Procedia PDF Downloads 152
9197 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon. Its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate in various modern applications such as drug delivery, supercapacitors, and sensors. GO has been used in the photothermal treatment of cancers and Alzheimer’s disease. The main reason to choose GO in our work is that it is a surface-active molecule: it has a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl and epoxide, on its surface and in the basal plane, so it can easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. We have used different spectroscopic techniques in our work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of a hydrophilic molecule, 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA), in reverse micelles containing GO. It was observed that the photophysics of the dye is modulated in the presence of GO compared to its photophysics in the absence of GO inside the reverse micelles. Here we report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with the normal reverse micelle system using the 7-DCCA molecule. Normal reverse micelle means a reverse micelle in the absence of GO. 
The absorption maxima of 7-DCCA were blue-shifted and the emission maxima were red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation in GO-containing reverse micelles is always faster than in normal reverse micelles. The solvent relaxation at lower w₀ values is always slower in GO-containing reverse micelles than in normal reverse micelles, and at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to that in normal reverse micelles because, in the presence of GO, the polarity of the system increases, and as polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 160
9196 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications

Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz

Abstract:

GFP, discovered in the mid-1970s, has been used as a marker after replicated genetic studies by scientists. In biotechnology, cell biology and molecular biology, the GFP gene is frequently used as a reporter of expression. In modified forms, it has been used to make biosensors. Many animals have been created that express GFP, as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be determined, and so cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signals that create events can be detected. Additionally, monitoring GFP is noninvasive; it can be detected under UV light because it simply generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to interfere with any biological processes of interest. The synthesis of GFP involves several steps: construction of the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonuclease enzymes (NheI and EcoRI), and the DNA fragment of GFP was gel-purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexahistidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21 DE3 (pLysE) strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates with kanamycin and chloramphenicol selection. E. coli cells were grown up to an optical density (OD 600) of 0.8, induced by the addition of isopropyl-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexahistidine tag facilitated purification of the GFP by using a His Bind affinity chromatography resin (Novagen). 
The purity of the GFP protein was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). The concentration of protein was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using the GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting material and template, respectively. For the fabrication of nanofibers with different fiber diameters, a sol-gel solution comprising 0.40, 0.60 or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL of a water:ethanol (3:2) mixture was prepared, and the solution was then deposited on a collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL h-1 using a Spellman electrospinning system. Results show that GFP-based nanofibers can be used in plenty of biomedical applications such as bio-imaging, biomechanics, biomaterials and tissue engineering.

Keywords: biomaterial, GFP, nano-fibers, protein expression

Procedia PDF Downloads 321
9195 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System

Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain

Abstract:

Emission of carbon dioxide (CO2) has adversely affected the environment, and one of its major sources is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased the emission of CO2 into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that need to be established. To contribute to the issue of reducing CO2 emission, this research proposes a smart parking system. A cloud-based solution is provided to drivers, which automatically searches for and recommends the most preferred parking slots. To determine the preferences of the parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used, together with sorting, multi-level filtering, exploratory data analysis (EDA), the Analytical Hierarchy Process (AHP) and the weighted sum model (WSM), to rank the parking areas and recommend to drivers the top-k most preferred ones. In the EDA process, “2020testcar-2020-03-03”, a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology supersedes the conventional one in reducing the emission of CO2 into the atmosphere.
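The final ranking step can be illustrated with a minimal weighted sum model; the parking areas, criteria, and weights below are invented placeholders (in the study the weights would come from AHP and the normalized criteria from the EDA stage).

```python
# hypothetical normalized scores per parking area: [proximity, co2_saving, availability]
parkings = {
    "P1": [0.9, 0.8, 0.6],
    "P2": [0.5, 0.9, 0.9],
    "P3": [0.2, 0.3, 0.8],
}
weights = [0.5, 0.3, 0.2]   # assumed criterion importance (e.g., AHP-derived)

def wsm_score(values, weights):
    # weighted sum model: higher aggregate score = more preferred parking
    return sum(v * w for v, w in zip(values, weights))

ranked = sorted(parkings, key=lambda p: wsm_score(parkings[p], weights), reverse=True)
top_k = ranked[:2]   # top-k recommendation returned to the driver
```

The top-k list is what the cloud service would push back to the mobile application.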

Keywords: car parking, CO2, CO2 reduction, IoT, merge sort, number plate recognition, smart car parking

Procedia PDF Downloads 149
9194 Structural Morphing on High Performance Composite Hydrofoil to Postpone Cavitation

Authors: Fatiha Mohammed Arab, Benoit Augier, Francois Deniset, Pascal Casari, Jacques Andre Astolfi

Abstract:

For top high-performance foiling yachts, cavitation is often a limiting factor for take-off and top speed. This work investigates solutions to delay the onset of cavitation by means of structural morphing. The structural morphing is based on compliant leading and trailing edges, with an effect similar to flaps. It is shown here that the commonly accepted effect of flaps on the control of lift and drag forces can also be used to postpone the inception of cavitation. A numerical and experimental study is conducted in order to assess the effect of the geometric parameters of hydrofoils on their hydrodynamic performance and on cavitation inception. The effect of a 70% trailing edge flap and a 30% leading edge flap on a NACA 0012 is investigated using the Xfoil software at a constant Reynolds number of 10⁶. The simulations were carried out for a range of flap deflections and various angles of attack. The results showed that the lift coefficient increases with increasing flap deflection, but also with increasing angle of attack, and that the flaps enlarge the cavitation bucket. To evaluate the accuracy of the Xfoil software, a 2D analysis of the flow over a NACA 0012 with leading and trailing edge flaps was performed using the Fluent software. The results of the two methods are in good agreement. To validate the numerical approach, a passive adaptive composite model was built and tested in the hydrodynamic tunnel at the Research Institute of the French Naval Academy. The model shows the ability to simulate the effect of flaps by LE and TE structural morphing due to hydrodynamic loading.

Keywords: cavitation, flaps, hydrofoil, panel method, xfoil

Procedia PDF Downloads 183
9193 Predictions of Dynamic Behaviors for Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

A simulation scheme of rotational motions for predicting the behavior of bump-type gas foil bearings operating at steady state is proposed. The scheme is based on multi-physics coupling computer-aided engineering packages, modularized with a computational fluid dynamics model and a structural elasticity model, to numerically solve the dynamic equations of motion of a hydrodynamically loaded shaft supported by an elastic bump foil. The bump foil is modelled as an infinite number of Hookean springs mounted on a stiff wall; hence, the top foil stiffness is constant along the periphery of the bearing housing. The hydrodynamic pressure generated by the air film lubrication is transferred to the top foil and induces an elastic deformation that must be solved by a finite element method program, whereas the pressure profile applied on the top foil must be solved by a program based on the Reynolds equation of lubrication theory. As a result, the equations of motion for the bearing shaft are solved iteratively via simultaneous coupling of the two finite element method programs. In conclusion, the two-dimensional center trajectory of the shaft, together with the deformation map on the top foil at constant rotational speed, is calculated for comparison with experimental results.
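As a caricature of the fluid-structure coupling loop described above, the sketch below iterates a scalar fixed point in which a toy pressure law p(h) = C/h stands in for the Reynolds-equation solver and the Hookean-foil assumption gives the top-foil deflection p/k. All parameter values and the pressure law itself are illustrative assumptions, not the study's models.

```python
h0 = 1.0e-5       # undeformed film thickness (m), assumed
k_foil = 1.0e10   # foil stiffness per unit area (N/m^3), assumed
C = 1.0           # toy hydrodynamic law p(h) = C / h, stand-in for the Reynolds eq.

h = h0
for _ in range(100):
    p = C / h               # "fluid" step: pressure from the current film thickness
    h = h0 + p / k_foil     # "structure" step: Hookean deflection of the top foil

# converged film thickness should satisfy h = h0 + p(h) / k_foil
residual = abs(h - (h0 + (C / h) / k_foil))
```

The real scheme replaces both one-line updates with full finite element solves, but the structure of the iteration, alternating pressure and deflection until self-consistency, is the same.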

Keywords: computational fluid dynamics, fluid-structure interaction, multi-physics simulations, gas foil bearing, load capacity

Procedia PDF Downloads 165
9192 Towards a Business Process Model Deriving from an Intentional Perspective

Authors: Omnia Saidani Neffati, Rim Samia Kaabi, Naoufel Kraiem

Abstract:

In this paper, we propose an approach aiming at (i) representing services at two levels, the intentional level and the organizational level, and (ii) establishing mechanisms that allow a transition from the first level to the second in order to execute intentional services. An example is used to validate our approach.

Keywords: intentional service, business process, BPMN, MDE, intentional service execution

Procedia PDF Downloads 397
9191 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes have adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil reduces settlement and provides foundation stability. Methods such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction are used to control pore pressure and enhance the strength of loose soils. One problem with these methods is that the settlement remains uncontrolled, leading to differential settlement of the footings and, ultimately, to the collapse of buildings. The present study investigates the utility of geosynthetics as a means of improving the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable, dry soil, subjected to the 1991 Uttarkashi and 1995 Chamba earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model simultaneously simulates the structure, soil, interfaces, and geogrids in the OpenSees framework. The soil is modeled with the PressureDependentMultiYield (PDMY) material model using quad elements that provide the stress-strain response at Gauss points, calibrated to reproduce the behavior of Ganga sand. The model, analyzed with tied degree-of-freedom contact, shows that the system responses align with shake table experimental results. The responses of the footing, structure, and geosynthetics are studied for unreinforced and reinforced bases with varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.
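The headline effect, reinforcement stiffening the subsoil and cutting settlement, can be illustrated with a back-of-the-envelope elastic settlement estimate, s = qB(1 - nu^2)I/E. This is a sketch, not the OpenSees continuum model; the modulus multiplier for the reinforced case and all other values are assumed for illustration only.

```python
# Elastic immediate settlement of a footing: s = q * B * (1 - nu^2) * I / E.
# Geogrid reinforcement is crudely represented as a subsoil modulus
# multiplier (assumed), to show how stiffening reduces settlement.

def immediate_settlement(q, B, E, nu=0.3, influence=0.88):
    """Elastic settlement (m) of a footing of width B under pressure q."""
    return q * B * (1 - nu**2) * influence / E

q = 100e3        # Pa, bearing pressure (assumed)
B = 1.5          # m, footing width (assumed)
E_loose = 15e6   # Pa, loose sand modulus (assumed)

s_unreinforced = immediate_settlement(q, B, E_loose)
s_reinforced = immediate_settlement(q, B, 2.5 * E_loose)  # assumed stiffening
reduction = 100 * (1 - s_reinforced / s_unreinforced)     # percent
```

With an assumed 2.5x modulus improvement the reduction comes out at 60%, matching the order of magnitude reported; the full dynamic continuum analysis is of course far richer than this static estimate.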

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 198
9190 Self-Esteem on University Students by Gender and Branch of Study

Authors: Antonio Casero Martínez, María de Lluch Rayo Llinas

Abstract:

This work is part of an investigation into the relationship between romantic love and self-esteem in college students, carried out by students of the course "Methods and Techniques of Social Research" of the Master's in Gender at the University of the Balearic Islands during 2014-2015. In particular, we investigated the relationships that may exist between self-esteem, gender, and branch of study. Gender differences in self-esteem are well documented, as is the relationship between gender and branch of study, observed annually in the distribution of university enrolment. Therefore, in this part of the study, we focus on the differences in self-esteem between the sexes across the various branches of study. The study sample consists of 726 individuals (304 men and 422 women) from the 30 undergraduate degrees that the University of the Balearic Islands offered on its campus in the 2014-2015 academic year. The average age was 21.9 years for men and 21.7 years for women. The sampling procedure was random sampling stratified by degree with simple allocation, giving a sampling error of 3.6% for the whole sample at a 95% confidence level under the most unfavorable assumption (p = q). The Spanish translation of the Rosenberg Self-Esteem Scale (RSE) by Atienza, Moreno, and Balaguer was applied. The psychometric properties of this translation include a test-retest reliability of 0.80 and an internal consistency between 0.76 and 0.87; in this study we obtained an internal consistency of 0.82. The results confirm the expected gender differences in self-esteem, although not in all branches of study. Mean levels of self-esteem in women are lower in all branches, reaching statistical significance in Science, Social and Legal Sciences, and Engineering and Architecture.
However, when the variability of self-esteem across branches of study is analysed within each gender, the results show independence in the case of men, whereas in the case of women we find statistically significant differences, arising from the lower self-esteem of Arts and Humanities students compared with Social and Legal Sciences students. These findings confirm the results of numerous investigations in which female self-esteem consistently appears below that of males, suggesting that perhaps the two populations should be considered separately rather than continually emphasizing the difference. The branch of study, for its part, did not emerge as a relevant explanatory factor, beyond the largest absolute gender difference being detected in the technical branch, in which women are historically a minority; hence, it is not disciplinary or academic characteristics that explain the differences, but the differentiated social context that arises within it.
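The quoted sampling error can be checked directly: for a proportion under the worst case p = q = 0.5 at 95% confidence, the margin of error is e = z * sqrt(p*q/n). A quick sketch (not the authors' computation):

```python
import math

# Margin of error of a proportion estimate for sample size n, worst
# case p = q = 0.5, 95% confidence (z = 1.96).

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

e = margin_of_error(726)
print(round(100 * e, 1))  # prints 3.6 (percent)
```

This reproduces the stated 3.6% for n = 726, confirming the figure is the simple-random-sampling worst-case bound rather than a design-adjusted error.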

Keywords: study branch, gender, self-esteem, applied psychology

Procedia PDF Downloads 467
9189 A Regional Analysis on Co-movement of Sovereign Credit Risk and Interbank Risks

Authors: Mehdi Janbaz

Abstract:

The global financial crisis and the credit crunch that followed magnified the importance of credit risk management and its crucial role in the stability of all financial sectors and of the system as a whole. Many believe that the risks faced by the sovereign sector are highly interconnected with banking risks and are likely to trigger and reinforce each other. This study examines (1) the impact of banking and interbank risk factors on the sovereign credit risk of the Eurozone, and (2) how Eurozone Credit Default Swap (CDS) spread dynamics are affected by crude oil price fluctuations. The hypotheses are tested by employing suitable risk measures and a four-stage linear modeling approach. Sovereign senior 5-year CDS spreads are used as the core measure of credit risk. Monthly time-series data for the variables are gathered from the DataStream database for the period 2008-2019. First, a linear model tests the impact of regional macroeconomic and market-based factors (STOXX, VSTOXX, oil, sovereign debt, and slope) on CDS spread dynamics. Second, bank-specific factors, including the LIBOR-OIS spread (the difference between the Euro 3-month LIBOR rate and the Euro 3-month overnight index swap rate) and Euribor, are added to the most significant factors of the previous model. Third, global financial factors, including EUR/USD foreign exchange volatility, the TED spread (the difference between the 3-month T-bill rate and the 3-month LIBOR rate in US dollars), and the Chicago Board Options Exchange (CBOE) Crude Oil Volatility Index (OVX), are added to the major significant factors of the first two models. Finally, a model is generated from a combination of the major factors of each variable set together with a crisis dummy.
The findings show that (1) the explanatory power of the LIBOR-OIS spread for Eurozone sovereign CDS spreads is highly significant, and (2) there is a meaningful inverse co-movement between the crude oil price and Eurozone CDS spreads. Surprisingly, adding the TED spread alongside the LIBOR-OIS spread in the third and fourth models increases the predictive power of LIBOR-OIS. Based on the results, LIBOR-OIS, STOXX, the TED spread, slope, oil price, OVX, FX volatility, and Euribor are the determinants of CDS spread dynamics in the Eurozone. Moreover, the positive impact of the crisis period on the creditworthiness of the Eurozone is meaningful.
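The staged modeling idea, adding regressor blocks and carrying forward the significant factors, can be sketched with ordinary least squares on synthetic data. This is a schematic only (the variable names mirror the abstract, but the series are randomly generated, not the DataStream data, and the assumed coefficient signs are illustrative).

```python
import numpy as np

# Staged OLS: fit nested models, compare R^2 as regressor blocks are added.

rng = np.random.default_rng(0)
n = 144  # monthly observations, 2008-2019

libor_ois = rng.normal(size=n)
oil = rng.normal(size=n)
ted = rng.normal(size=n)
# Synthetic CDS spread: driven by LIBOR-OIS, inversely by oil (assumed signs).
cds = 2.0 * libor_ois - 1.0 * oil + 0.5 * ted + rng.normal(scale=0.5, size=n)

def r_squared(y, X):
    """OLS fit with intercept; returns the coefficient of determination."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

stage1 = r_squared(cds, np.column_stack([oil]))
stage2 = r_squared(cds, np.column_stack([oil, libor_ois]))
stage3 = r_squared(cds, np.column_stack([oil, libor_ois, ted]))
```

Because the models are nested, R^2 is non-decreasing across stages; in the actual study the decision to carry a factor forward rests on its statistical significance, not on R^2 alone.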

Keywords: CDS, crude oil, interbank risk, LIBOR-OIS, OVX, sovereign credit risk, TED

Procedia PDF Downloads 149